Data Engineer
Listed on 2026-01-13
IT/Tech
Wrike is the most powerful work management platform. Built for teams and organizations looking to collaborate, create, and exceed every day, Wrike brings everyone and all work into a single place to remove complexity, increase productivity, and free people up to focus on their most purposeful work.
Our vision: A world where everyone is free to focus on their most purposeful work, together.
Ready to become a Wriker? We're looking for an enthusiastic Data Engineer to build reliable and scalable infrastructure and to help design and maintain clean data sources for business analysis. In this role, you will conceptualize, architect, and build data pipelines and services, helping to grow our data platform as a key driver of business decision-making across Wrike.
Based on your experience and interests, you may work on organizing data warehousing, integrating SaaS applications, developing tooling, or implementing practical AI solutions. We welcome candidates with a wide range of interests and are happy to support you in expanding your expertise into new areas.
- Pipelines / ETL – architecting and building advanced streaming and batch data pipelines (see the sketch after this list)
- DWH – creating, developing, and overseeing robust data warehouse components
- Data Quality – designing and implementing frameworks for validation, monitoring, and alerting
- Data Governance – managing data catalogs and data lineage for development and operations
- Data Protection – researching, developing, and deploying solutions and techniques to enhance the protection of PII and sensitive information
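To give a concrete, purely illustrative flavor of the pipeline work above, here is a minimal batch-pipeline sketch using Airflow, one of the orchestration tools named in the requirements below. The DAG name, schedule, and task logic are hypothetical placeholders, not a description of Wrike's actual stack.

```python
# Illustrative only (assumes Airflow 2.4+); names and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw records from a source system (placeholder data).
    return [{"user_id": 1, "event": "login"}]


def transform(**context):
    # Clean and reshape the records extracted by the upstream task.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [{**row, "processed": True} for row in rows]


with DAG(
    dag_id="example_daily_batch",  # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract runs before transform
```

In practice a loading step into the warehouse would follow, but even this skeleton shows the shape of the work: small, testable tasks wired into a scheduled, observable pipeline.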
Requirements:
- A good level of written and verbal English
- Work experience building and maintaining data pipelines in data-heavy environments (data engineering, backend with an emphasis on data processing, or data science with an emphasis on infrastructure)
- Strong communication and analytical skills
- Solid understanding of SQL
- Experience with Python (Data transformation pipelines, API & database integrations)
- Hands-on experience with data warehousing platforms (BigQuery, Redshift, Snowflake, Vertica, or similar)
- Familiarity with data pipeline orchestration tools (Airflow, Dagster, Prefect, or similar)
- Understanding of CI/CD and containerization
- A good understanding of database architecture and experience with data modelling
- Experience developing, testing, deploying, and ensuring the reliability of data-intensive applications
- Familiarity with data streaming and CDC (Pub/Sub, Dataflow, Kafka, Flink, Spark Streaming, or similar)
- Understanding of Kubernetes
- Experience with major B2B vendor integrations (Salesforce/CPQ, NetSuite, Marketo, etc.)
- Experience with data quality tools, monitoring, and alerting (see the sketch below)
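To make the data-quality bullet concrete, here is a small, purely illustrative validation check of the kind such tooling automates. The field names and rules are hypothetical and not drawn from any actual Wrike schema.

```python
# Illustrative only: a tiny row-level validation pass. In a real framework
# the violations would feed monitoring and alerting rather than print().
from typing import Dict, Iterable, List


def validate_rows(rows: Iterable[Dict]) -> List[str]:
    """Return human-readable violations found in the rows."""
    violations = []
    for i, row in enumerate(rows):
        if row.get("user_id") is None:  # hypothetical required field
            violations.append(f"row {i}: missing user_id")
        amount = row.get("amount")  # hypothetical numeric field
        if not isinstance(amount, (int, float)) or amount < 0:
            violations.append(f"row {i}: amount must be a non-negative number")
    return violations


if __name__ == "__main__":
    sample = [{"user_id": 1, "amount": 9.99}, {"user_id": None, "amount": -5}]
    for violation in validate_rows(sample):
        print(violation)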
We offer a hybrid work model with 2–3 in-office days per week in hubs located in San Diego, Prague, Dublin, Nicosia, and Tallinn, and a culture that values collaboration, creative thinking, ownership, and customer focus.
How to Apply
Interested in this position? Please submit your resume and cover letter through the application portal.