Senior/Backend Engineer
Listed on 2026-03-01
IT/Tech
AI Engineer, Systems Engineer, Machine Learning / ML Engineer
Location:
Remote (North America) or Austin, TX
Employment Type:
Full-time (no contractors)
Department:
Engineering
Hamming automates QA for voice AI agents. Everyone is building voice agents. We secure them. In fact, we invented this category. With one click, thousands of our agents call our customers' agents across accents, background noise, and personalities, then generate crisp bug reports and production-grade analytics. Reliability is the moat in voice AI, and that's our whole job.
We are one of the fastest-moving engineering teams anywhere: we deploy to production four times a day.
I’m looking for someone who can own reliability and scale across our LLM-enabled platform, shipping precise, outcome-driven improvements to high-availability systems.
—
Sumanyu (CEO)
Previously: grew Citizen 4× and scaled an AI sales program to $100M/yr at Tesla.
What you'll do:
- Own core services in TypeScript/Node.js and Python that orchestrate LiveKit, Temporal, STT/TTS, and LLM tooling for real-time voice agents.
- Scale 1 → N → 100×: take what works today and harden it for 10K parallel calls with 99.99% uptime. Turn human playbooks into productized systems.
- Harden pipelines for ingestion, evaluation, and analytics so telephony events, recordings, and outcomes propagate reliably across services.
- Level up observability: deepen OpenTelemetry/SigNoz and trace-first practices to shrink mean-time-to-truth in prod.
- Prototype → test → prod: partner with product to ship new LLM-driven behaviors with clear success metrics, guardrails, and regressions blocked in CI.
- Infrastructure readiness: CI/CD, environment automation, and incident response playbooks that keep customer conversations online.
You:
- Have senior/staff experience running distributed backends with real-time/streaming constraints.
- Are fluent in TypeScript/Node.js and comfortable jumping into Python for ML/audio jobs.
- Know Temporal (or similar workflow engines), queues, Redis, and PostgreSQL.
- Have shipped production LLM apps and understand prompt/tool design, evals, and guardrail instrumentation.
- Operate cloud-native on AWS with Terraform; k8s doesn't scare you.
- Are a power user of Cursor/Zed/Devin and were using code-gen before it was cool.
- Have intuition for what current-gen LLMs can and can't do, and for what tomorrow's models will unlock.
- Think independently, grind with customers, and do whatever it takes without dropping the quality bar.
- Bonus: built 0→1 real-time systems in telecom/networking, autonomous vehicles, or HFT; founded something; built AI voice apps.
Engineering challenges:
- Voice simulations that feel real: accents, overlapping speech, crosstalk, background noise, barge-ins.
- Massive concurrency: 10,000+ parallel calls with deterministic behavior and graceful degradation.
- Temporal-driven orchestration for long-running, interruptible call flows.
- Closed-loop reliability: turn prod failures into auto-generated tests and blocked deploys.
- Trace-everything culture: make "what happened?" a 30-second question, not a war room.
How we work:
- Outcomes over output: we adjust roadmaps when new data lands.
- Demo early and document decisions so context moves fast.
- Own incidents: lead the investigation, write crisp notes, land durable fixes.
- Direct, candid, respectful communication keeps remote teammates in lockstep with Austin HQ.
Stack:
App: Next.js, TypeScript, Tailwind
AI: OpenAI, Anthropic, STT/TTS providers
Realtime/Orchestration: LiveKit, Pipecat/Daily, Temporal
Infra/DB: AWS, k8s, PostgreSQL, Redis, Terraform
Observability: OpenTelemetry, SigNoz
If you want to make AI voice agents reliable at scale, let's talk.
Send a short note (links to work > resumes) and tell us about something reliability-critical you shipped: what broke, what you fixed, and how you knew it worked.