Senior Engineer Data, AI & Analytics
Listed on 2026-01-12
IT/Tech
Data Engineer, AI Engineer, Data Analyst
Your Skills and Qualifications
Must-Haves
- Strong communication and stakeholder management in both English and German (written + spoken)
- Production-level experience in Python for data and AI engineering (pipelines, APIs, orchestration)
- Solid SQL and data modeling expertise (including incremental strategies)
- Hands‑on experience with a major cloud provider (AWS, GCP, or Azure)
- Terraform/IaC experience for provisioning cloud infrastructure
- Experience using at least one modern BI tool (e.g., Metabase, Power BI, Looker)
Nice-to-Haves
- Deep AWS experience (S3, Lambda, Glue, Redshift, OpenSearch)
- Hands‑on experience deploying AI/LLM‑based systems into production
- Experience using dbt Cloud for transformation pipelines
- Familiarity with tracing and observability (e.g., Langfuse, OpenTelemetry)
- Experience preparing datasets and running supervised fine‑tuning (SFT) of LLMs
- Exposure to reverse ETL tools (e.g., Census, Hightouch) or building custom syncs to HubSpot, Slack, or other APIs
We’re looking for a senior‑level engineer to take full ownership of our data and AI systems, from ingestion and modeling to embedding pipelines and LLM-based applications. You’ll operate across domains (data infrastructure, BI, and AI), working closely with stakeholders in product, operations, and engineering.
This is a full‑stack data role with a dual mandate:
- Build and scale our AI‑native data platform
- Help shape and lead a growing Data & AI team as we scale
We’re early‑stage, so you’ll move between strategy and code, long‑term design and fast iteration. You’ll lay the groundwork not just for the system, but for the team that will own it.
Your Responsibilities
AI & Application Engineering
- Architect and deploy AI‑powered systems into production (e.g., FastAPI apps with RAG architecture using OpenSearch and Langfuse)
- Optimize agentic workflows, prompt & embedding pipelines, and retrieval quality through experimentation and tracing
- Extend LLM capabilities with supervised fine‑tuning (SFT) for in‑domain data distributions where RAG alone underperforms
- Build scalable ingestion and transformation pipelines for both proprietary and external data (using Lambda, Glue, Terraform)
- Own embedding pipelines for retrieval‑augmented generation systems
- Manage infrastructure‑as‑code for all core components (Redshift, S3, VPC, IAM, OpenSearch)
- Maintain and evolve a clean, documented data model (dbt Cloud, leveraging Fusion)
- Develop and maintain BI dashboards in QuickSight and/or Metabase
- Provide ad‑hoc analytical support for product, sales, and ops teams
- Build event‑driven automation and reverse ETL pipelines to serve data or AI outputs back into operational systems (e.g., HubSpot)
- Work closely with stakeholders across engineering, product, and operations to define the right data products and abstractions
- Lay the foundation for a high‑performing Data & AI team. Help hire, mentor, and establish best practices as we grow
Become part of one of Europe’s fastest‑growing Climate Tech start‑ups.
Supported by leading US and EU investors, we are shaping the future of the sustainable construction and real estate industry.
- Modern headquarters in the west of Berlin: conveniently located and well connected.
- Hybrid work model with flexible working hours.
- 30 vacation days plus one additional day per year, as well as time off on December 24 and 31.
- We reward referrals for our open positions through a thoughtful and appreciative referral program.
- After the probation period: up to five days of educational leave per year with a budget of €1,500.
- BVG public transport ticket.
- Membership at Urban Sports Club.
Don’t just work with us, but help shape the change and drive decarbonization forward.