Data Engineer
Listed on 2026-01-16
IT/Tech
Department: Engineering
Employment Type: Full Time
Location: Poland
Description

About Bizimply
Bizimply is a workforce management platform designed to streamline operations for businesses in the hospitality, retail, and leisure industries. The Bizimply platform provides a comprehensive suite of tools that enable businesses to efficiently manage their workforce, including scheduling, time and attendance tracking, task management, HR and performance reporting. By centralising these functions in one user-friendly interface, Bizimply helps businesses save time, reduce administrative overhead, and improve overall operational efficiency.
We’re looking for a Data Engineer to help us extend the foundational data infrastructure that will power the next phase of growth, including our AI initiatives. You’ll be joining a small, enthusiastic team so we’re looking for someone who can both build and support, excited by the challenge of laying down best practices and helping shape our new data stack.
Your primary focus will be on extending Bizimply's Operational Data Store (ODS) and laying the groundwork for a future Data Warehouse. You'll work closely with our senior data engineer, as well as the software engineering and product teams, to enable better decision-making and intelligent automation.
What You’ll Do
- Build and maintain robust, scalable data pipelines using Airflow, Python, and SQL.
- Design, build, and optimize data transformations using dbt or sqlmesh.
- Develop and manage the ODS and support the eventual rollout of a modern Data Warehouse.
. - Integrate data from internal systems and external APIs to create clean, reliable datasets.
- Work closely with engineers to operationalize machine learning workflows.
- Ensure high data quality through monitoring, validation, and error handling.
- Provide guidance to less experienced team members and champion data engineering best practices.
- Deploy and manage infrastructure in the cloud (AWS, GCP, or Azure) using modern DevOps tooling.
- Implement monitoring and alerting to ensure data pipelines are reliable and maintainable.
What You'll Need
- 3-5 years of experience in data engineering or related roles.
- Strong skills in Python, SQL, and Airflow or similar orchestration tools.
- Experience working with cloud infrastructure and data warehousing tools (e.g., Snowflake, BigQuery, Redshift).
- Exposure to ML pipelines or collaboration with ML/AI teams.
- Ability to work independently while supporting a less-experienced team.
- Strong communication skills and an eagerness to mentor and share knowledge.
Nice to Have
- Experience building an ODS or Data Warehouse from scratch.
- Familiarity with event-driven systems or streaming tools (e.g., Kafka, Pub/Sub).
- DevOps experience or infrastructure-as-code (e.g., Terraform, CloudFormation).
Benefits
- Competitive compensation aligned with relevant experience.
- Remote-friendly, flexible work environment.
- Budget for learning, courses, and conferences.
- A supportive, mission-driven team eager to grow and learn together.