
Principal Engineer - Data Platforms

Remote / Online - Candidates ideally in
Tempe, Maricopa County, Arizona, 85285, USA
Listing for: Dutch Bros Coffee
Remote/Work from Home position
Listed on 2026-02-28
Job specializations:
  • Software Development: Data Engineer
Salary/Wage Range: USD 100,000 - 125,000 per year
Job Description & How to Apply Below

It's fun to work in a company where people truly believe in what they are doing. At Dutch Bros Coffee, we are more than just a coffee company. We are a fun-loving, mind-blowing company that makes a difference one cup at a time.

Position Overview

As the Principal Engineer in the Enterprise Data Platform, you will lead the modernization and optimization of Dutch Bros’ foundational data ecosystem. In this high-impact, hands‑on individual contributor role, you will design and operate scalable, resilient infrastructure to power distributed analytics, machine learning, and AI‑driven workflows. You are a builder at heart—responsible for writing proof‑of‑concepts, defining rigorous coding standards, and championing AI‑assisted development tools that reduce toil and accelerate team velocity.

By partnering with other principal engineers and mentoring the broader team, you will bridge the gap between high‑level architecture and seamless execution to deliver a world‑class, self‑service data experience for the entire organization.

Job Qualifications
  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
  • 15+ years of software engineering experience, with a minimum of 10 years specializing in backend distributed systems or data infrastructure at scale.
  • Expert‑level Python and SQL proficiency (10+ years) rooted in a career of building production‑grade software.
  • Proven track record of designing large‑scale data platforms with a deep understanding of CAP theorem, eventual consistency, and the trade‑offs between batch and streaming architectures.
  • Hands‑on mastery of Snowflake (internals and clustering), dbt (macro design and Jinja), Airflow (scheduler internals), and Power BI (Import vs. Live connection).
  • Comprehensive knowledge of AWS services, including IAM, VPC, Glue, S3, SFTP, Lambda, CloudWatch, and SNS.
  • Experience implementing GitLab CI/CD and Datadog for robust system monitoring and alerting.
  • Ability to design RAG architectures, manage vector databases, and integrate LLMs into complex data pipelines.
  • Skilled in writing persuasive RFCs and ADRs that drive consensus among architects and engineering leadership.
  • Proven ability to influence technical strategy and facilitate cross‑functional alignment across organizational levels without direct managerial authority.
Location Requirement

This role is located in Tempe, Arizona, and requires working in the office four days per week (Monday through Thursday); Fridays are optional remote work days.

Key Result Areas (KRAs)

Technical Strategy & Platform Architecture
  • Define the long‑term technical architecture for the Enterprise Data Platform, translating business strategy into scalable Data Mesh and domain‑oriented specifications.
  • Implement automated CI/CD pipelines and Infrastructure as Code (IaC) to foster a unified engineering culture across disciplines.
  • Design robust APIs that enable seamless data consumption across operations, finance, and product teams.
AI‑Assisted Development & Integration
  • Lead the integration of AI‑assisted development (Cursor, MCP, Copilot) to accelerate developer velocity and reduce cognitive load.
  • Scale LLM‑driven code generation for AWS data pipelines, including automated test creation, documentation, and semantic schema generation.
  • Leverage Amazon Bedrock, Snowflake Cortex, and AI‑enabled IDEs to optimize the data lifecycle and reduce delivery lead times.
Data Engineering at Scale
  • Build resilient ELT/ETL pipelines utilizing S3, Lambda, Glue, dbt, and Airflow (MWAA).
  • Establish data quality, observability, lineage, and SLAs as core, first‑class features of the data platform.
  • Standardize enterprise‑wide schema design, modeling patterns, and deployment workflows.
Machine Learning & Infrastructure
  • Design and productionize end‑to‑end ML infrastructure, including feature stores, model experimentation frameworks, and deployment monitoring.
  • Build optimized ETL/ELT workflows for training data and model deployment leveraging Snowflake ML (Snowpark) and Amazon SageMaker.
Engineering Standards & Technical Influence
  • Enforce high standards for code quality through rigorous PR reviews, unit testing, and automated schema validation within CI/CD…