
Technical Lead, Data & Inference

Job in Mission, Johnson County, Kansas, 66201, USA
Listing for: Primer
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 100,000 - 125,000 USD yearly
Job Description & How to Apply Below

This role is in-person in the San Francisco Bay Area. We are looking for A-players who want to work on groundbreaking technology and love to collaborate in person, and we’re willing to pay for it. This person should expect to spend most days in our office close to Embarcadero station.

OVERVIEW

Our product lets users join together disparate datasets, connect siloed marketing and sales SaaS tools, and construct automated “playbooks” for customer acquisition. The orchestration of these playbooks is complex and can only be done by mastering scalable APIs, ETLs, reverse ETLs, distributed systems, and directed acyclic graphs (DAGs). We work with a modern technology stack (React/Node, Python) as well as cutting-edge data technologies such as Prefect and dbt.
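
For flavor, the sketch below shows, in rough and hypothetical terms, how a small playbook of this kind can be expressed with Prefect’s flow/task API, where the DAG falls out of the data dependencies between tasks. The sources, field names, and steps are illustrative assumptions, not details of Primer’s actual pipelines.

from prefect import flow, task

@task(retries=2)
def extract_contacts(source: str) -> list[dict]:
    # Stand-in for an API/ETL pull from a siloed marketing or sales tool (hypothetical).
    return [{"email": "a@example.com", "source": source}]

@task
def enrich(records: list[dict]) -> list[dict]:
    # Stand-in for an enrichment/scoring step (hypothetical).
    return [{**r, "score": 0.9} for r in records]

@task
def load(records: list[dict]) -> int:
    # Stand-in for a reverse-ETL write back to a destination tool (hypothetical).
    return len(records)

@flow
def acquisition_playbook() -> int:
    # Prefect builds the DAG from how task outputs feed into later task calls.
    return load(enrich(extract_contacts("crm")))

if __name__ == "__main__":
    acquisition_playbook()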

Our systems process terabytes daily and aim to provide sub-second response times to our customers’ queries, making batch processing behave like real time.

After we bootstrapped our way to almost $1M in revenue in our first year, Craft Ventures swooped in to lead our Seed and Series A rounds as we continued to demonstrate consistent 15-20% MoM growth. Early team members of Carta, Okta, Figma, Masterclass, Segment, People.ai, and Skyhigh have also joined our investor network.

We are a small, dedicated, distributed team (~20 people) working on making advanced data infrastructure developed at large tech companies accessible to marketing and sales teams around the world. If you are fascinated by the software that powers large technology companies such as Dropbox, but want the challenges and freedom that come with working in a small startup, then working at Primer might be just the thing for you!

Your Mission

Lead the design, delivery, and scaling of Primer’s end-to-end data platform—owning the full lifecycle from ingestion to insights to make data reliable, fast, and useful across the company. You’ll turn diverse, third-party APIs and heterogeneous sources into trusted, low-latency pipelines; uphold data quality and SLAs; and amplify impact by spreading best practices and data-driven thinking across teams.

What you’ll do
  • Be a force multiplier: design and ship scalable batch + streaming pipelines; prototype, profile, and harden critical paths. Code review, mentor, and raise the overall bar.
  • Own reliability, cost, and SLOs: instrument observability and capacity; drive ≥99.9% uptime, minutes-level latency, and cost/TB efficiency; lead RCAs and land durable fixes.
  • Operate pragmatic inference pipelines that augment data flows (enrichment, scoring, QA with LLMs/RAG); manage versioning, canarying, caching, and evaluation loops.
  • Partner cross-functionally to build “data as a product” à la Netflix: define audience and contracts, ensure reliability and ownership, manage lifecycle (versioning → sunsetting), and validate impact through usage.
  • Drive pipeline and data lake architecture: run design reviews, document trade-offs/lineage/reversibility, and make clear build‑vs‑buy calls others can safely build on.
  • Scale integrations with third‑party APIs and internal services; resolve data conflicts, ensure quality, and support both sub‑second query paths and large batch workflows.
What you’ll need
  • 6+ years building and scaling production‑grade data systems, with a track record in data architecture, modeling, and pipeline design. Leadership experience on data engineering/platform teams.
  • Expert SQL (query optimization on large datasets) and strong Python; hands‑on with distributed data tech (e.g., Spark/Flink/Kafka) and modern orchestration (Airflow/Dagster/Prefect).
  • Experience building and maintaining integrations and data services; solid understanding of concurrency/parallelization and classical processing patterns.
  • Familiarity with dbt, columnar databases (DuckDB, ClickHouse), and the modern data stack; IaC, CI/CD, and observability fundamentals. (Kubernetes exposure is a plus.)
  • BS/MS in CS/CE/EE (or similar) with strong fundamentals; excellent async communication in distributed teams.
  • Bonus: strong Node.js skills, which will help you onboard quickly.
You’ll succeed by having
  • A founder‑level bias for action: turn bottlenecks into automated workflows; ship fast, iterate, and…