Technical Lead, Data & Inference
Job in San Francisco, San Francisco County, California, 94199, USA
Listed on 2026-01-14
Listing for: Primer
Full Time position
Job specializations:
- IT/Tech: Data Engineer
Job Description & How to Apply Below
Our mighty, small team achieved 3.8× growth in 2025, with everyone making an outsized impact.
We’re refreshingly novel in the crowded go-to-market landscape: our work is patent‑pending across real-time data, graphics, inference, and AI recommendations—so this isn’t “maintain the dashboards” energy. And we run with trust: engineers spend >90% of their time outside meetings, aiming for outcomes over theatrics.
Role Description
As Technical Lead, Data & Inference, you’ll lead the design, delivery, and scaling of our end‑to‑end data platform—from ingestion to insights—so the rest of the company can move fast without guessing. You’ll turn diverse third‑party APIs and heterogeneous sources into trusted, low‑latency pipelines, uphold data quality + SLAs, and raise the standard through mentorship and hands‑on technical leadership.
What you’ll do:
• Build and ship batch + streaming pipelines; prototype, profile, and harden the critical paths. Mentor, review, and lift the bar.
• Own reliability, cost, and SLOs: observability + capacity planning, ≥99.9% uptime, minutes‑level latency, and cost/TB efficiency—plus RCAs that end in fixes that stick.
• Run pragmatic inference pipelines that augment data flows (enrichment, scoring, QA with LLMs/RAG) with versioning, canaries, caching, and eval loops.
• Partner cross‑functionally to build “data as a product” a la Netflix: contracts, ownership, lifecycle management, and usage‑based impact.
• Drive pipeline + data lake architecture: crisp design reviews, documented trade‑offs/lineage/reversibility, and clear build‑vs‑buy calls others can trust.
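To make the inference bullet concrete: below is a minimal sketch of a versioned, cached enrichment step of the kind described above. The model call is stubbed out, and all names (`MODEL_VERSION`, `score_record`, `enrich`) are illustrative assumptions, not Primer's actual stack.

```python
import hashlib
import json

# Hypothetical sketch: an LLM-style enrichment step with versioning + caching.
# The scoring call is a stand-in; in production it would hit a model endpoint.

MODEL_VERSION = "scorer-v1"   # bump to invalidate cached results
_cache: dict = {}             # in production, a persistent store (e.g. a KV table)

def score_record(record: dict) -> dict:
    """Stand-in for an LLM/RAG scoring call."""
    text = record.get("text", "")
    return {"score": min(len(text) / 100.0, 1.0)}

def enrich(record: dict) -> dict:
    """Enrich a record, reusing cached results keyed by content + model version."""
    key_src = json.dumps(record, sort_keys=True) + MODEL_VERSION
    key = hashlib.sha256(key_src.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = {**record, **score_record(record),
                       "model_version": MODEL_VERSION}
    return _cache[key]

row = {"id": 1, "text": "Acme Corp raised a Series B."}
first = enrich(row)
second = enrich(row)  # served from cache; no second model call
print(first["model_version"], first is second)  # → scorer-v1 True
```

Keying the cache on both content and model version is what lets a canary or version bump re-score records without clearing the whole cache by hand.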
This is in‑person in the SF Bay Area, with most days near Embarcadero. We like collaborating in person, we’re looking for A‑players, and we’re willing to pay for it.
Qualifications
• 6+ years building and scaling production‑grade data systems, with leadership experience on data engineering/platform teams (architecture, modeling, pipeline design).
• Expert SQL (optimization on large datasets) + strong Python; hands‑on with distributed data tech (e.g., Spark/Flink/Kafka) and orchestration (Airflow/Dagster/Prefect).
• Experience scaling integrations with third‑party APIs/internal services; strong instincts around concurrency/parallelization and classic processing patterns.
• Familiarity with dbt, columnar databases (DuckDB, ClickHouse), and modern data stack fundamentals; IaC/CI/CD/observability basics (Kubernetes exposure is a plus).
• Strong fundamentals (BS/MS in CS/CE/EE or similar) and strong async communication across distributed teams.
• Bonus: very good Node.js—it helps you ramp faster in our stack.