
Senior Data Pipeline Engineer – Freelance (remote / Berlin)

Remote / Online – Candidates ideally in 8058 Zürich, Kanton Zürich, Switzerland
Listing for: Decentriq
Contract, Remote/Work from Home position
Listed on 2026-01-10
Job specializations:
  • IT/Tech
    Data Engineer, Data Scientist, Data Science Manager, AI Engineer
Salary/Wage Range or Industry Benchmark: CHF 100,000 – 125,000 per year
Job Description & How to Apply Below
Position: Senior Data Pipeline Engineer - Freelance (80-100%, remote Berlin)

Decentriq is the rising leader in data-clean-room technology. With Decentriq, advertisers, retailers, and publishers securely collaborate on 1st-party data for optimal audience targeting and campaign measurement. Headquartered in Zürich, Decentriq is trusted by renowned institutions in the DACH market and beyond, such as RTL Ad Alliance, Publicis Media, and Post Finance.

Our analytics & ML pipelines are the heartbeat of this platform. Built in Python and Apache Spark, they run in Databricks workspaces. We are looking for a Senior Data Pipeline Engineer freelancer (≥80%, start as soon as possible) for 6 months (with possible conversion into a full-time role) to support our team during a crunch time driven by customer demand. The role can be fully remote (within ±4 h of CET) or based in our Zürich/Berlin office.
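
To make the stack concrete, here is a minimal sketch of the kind of PySpark feature-aggregation job such a pipeline might contain. It is illustrative only: the table and column names (events, audience_features, user_id, campaign_id) are assumptions for the example, not Decentriq's actual schema.

```python
# Minimal PySpark sketch of a feature-aggregation step, as it might run in a
# Databricks workspace. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("audience-feature-pipeline").getOrCreate()

# Read raw first-party events and aggregate them into per-user features
# for downstream modelling steps.
events = spark.read.table("events")
features = (
    events.groupBy("user_id")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("campaign_id").alias("campaigns_seen"),
    )
)

features.write.mode("overwrite").saveAsTable("audience_features")
```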

Would you like to help us make the advertising industry ready for the 1st-party era? Then we’d love to hear from you!

Tasks
  • Own, Design, Build & Operate Data Pipelines – Take responsibility for our Spark-based pipeline, from development through production and monitoring.
  • Advance our ML Models – Improve and productionise models for AdTech use cases such as lookalike modelling and demographics modelling (a rough sketch of a lookalike scoring step follows this list).
  • AI-Powered Productivity – Leverage LLM-based code assistants, design generators, and test-automation tools to move faster and raise the quality bar. Share your workflows with the team.
  • Drive Continuous Improvement – Profile, benchmark, and tune Spark workloads, introduce best practices in orchestration & observability, and keep our tech stack future-proof.
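
As a rough illustration of the lookalike-modelling task above, the sketch below trains a simple propensity model with Spark MLlib and ranks users by their similarity to a seed audience. Everything in it is an assumption made for the example: the audience_features table, the in_seed_audience flag, and the two features carried over from the earlier sketch.

```python
# Hedged sketch of a lookalike-modelling step: score all users by their
# similarity to a seed audience. All names here are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("lookalike-sketch").getOrCreate()

df = spark.read.table("audience_features")  # hypothetical feature table

# Binary label: 1.0 for users in the advertiser's seed audience, 0.0 otherwise.
labelled = df.withColumn("label", F.col("in_seed_audience").cast("double"))

assembler = VectorAssembler(
    inputCols=["event_count", "campaigns_seen"], outputCol="features"
)
model = LogisticRegression().fit(assembler.transform(labelled))

# The predicted probability of class 1 serves as the lookalike-affinity score.
scored = model.transform(assembler.transform(df))
```

The same scaffolding would apply to demographics modelling with a different label column.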
Requirements
  • (Must have) Bachelor/Master/PhD in Computer Science, Data Engineering, or a related field and 5+ years of professional experience.
  • (Must have) Expert-level Python and PySpark/Scala Spark experience.
  • (Must have) Proven track record of building resilient, production-grade data pipelines with rigorous data-quality and validation checks (see the sketch after this list).
  • (Must have) Data-platform skills: operating Spark clusters, job schedulers, or orchestration frameworks (Airflow, Dagster, custom schedulers).
  • (Plus) Working knowledge of the ML lifecycle and model serving; familiarity with techniques for audience segmentation or lookalike modelling is a big plus.
  • (Plus) Rust proficiency (we use it for backend services and compute-heavy client-side modules).
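
The sketch referenced from the requirements above shows one way a data-quality gate might be wired into an orchestration framework, with Airflow as the example. The DAG id, schedule, table name, and the specific checks are all assumptions made for illustration.

```python
# Hedged sketch: a data-quality gate as a first-class Airflow task.
# DAG id, schedule, and the invariants checked are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_features():
    # Import Spark lazily so the scheduler can parse the DAG without it.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.table("audience_features")  # hypothetical table

    # Fail the run early if basic invariants are broken.
    assert df.count() > 0, "feature table is empty"
    null_ids = df.filter(F.col("user_id").isNull()).count()
    assert null_ids == 0, f"{null_ids} rows with a null user_id"

with DAG(
    dag_id="feature_pipeline_quality_gate",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="validate_features", python_callable=validate_features)
```

Dagster or a custom scheduler could host the same check equally well; the point is that validation runs as a failing task in the pipeline rather than as an afterthought.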
Benefits
  • Competitive rates
  • Real ownership of your work rather than simply executing tasks
  • An amazing and fun team that is distributed all over Europe.

No need for a formal motivational letter. Just send your CV along with a few bullet points about why you're excited to work with us. We look forward to your application!
