Backend Engineer
Listed on 2026-01-12 - IT/Tech
Data Engineer
About the Team
This role sits within a simulation-focused engineering organization responsible for building the systems that evaluate complex, large-scale autonomous technologies. Metrics are a critical part of this work: they power analysis across development, quality, and analytics teams and must operate reliably over massive datasets.
The team builds platforms that allow metrics to be defined, computed, validated, and consumed efficiently at scale.
About the Role
We are looking for a Backend Engineer to design, build, and operate a metrics platform that supports statistical evaluation. You will own core components of the system, including data models, storage layouts, compute pipelines, and developer-facing frameworks for writing and testing metrics.
This role involves close collaboration with metric authors and metric consumers across engineering, analytics, and QA, ensuring that metric results are reliable, performant, and easy to use end to end.
What You’ll Do
- Own and evolve the metrics platform, including schemas, storage layouts optimized for high-volume writes and fast analytical reads, and clear versioning strategies
- Build and maintain a framework for writing and running metrics, including interfaces, examples, local execution, and CI compatibility checks
- Design and implement testing systems for metrics and pipelines, including unit, contract, and regression tests using synthetic and sampled data
- Operate compute and storage systems in production, with responsibility for monitoring, debugging, stability, and cost awareness
- Partner with metric authors and stakeholders across development, analytics, and QA to plan changes and roll them out safely
What We’re Looking For
- Strong experience using Python in production, including asynchronous programming (e.g., asyncio, aiohttp, FastAPI)
- Advanced SQL skills, including complex joins, window functions, CTEs, and query optimization through execution plan analysis
- Solid understanding of data structures and algorithms, with the ability to make informed performance trade-offs
- Experience with databases, especially PostgreSQL (required); experience with ClickHouse is a strong plus
- Understanding of OLTP vs. OLAP trade-offs and how schema and storage decisions affect performance
- Experience with workflow orchestration tools such as Airflow (used today), Prefect, Argo, or Dagster
- Familiarity with data libraries and validation frameworks (NumPy, pandas, Pydantic, or equivalents)
- Experience building web services (FastAPI, Flask, Django, or similar)
- Comfort working with containers and orchestration tools like Docker and Kubernetes
- Experience working with large-scale datasets and data-intensive systems
- Ability to read and make small changes in C++ code
- Experience building ML-adjacent metrics or evaluation infrastructure
- Familiarity with Parquet and object storage layout/partitioning strategies
- Experience with Kafka or task queues
- Exposure to basic observability practices (logging, metrics, tracing)