ML/AI Research Engineer — Agentic AI Lab; Founding Team
Listed on 2026-03-01
Software Development
AI Engineer, Machine Learning/ML Engineer, Data Scientist
Location: San Francisco Bay Area
Type: Full-Time
Compensation: Competitive salary + meaningful equity (founding tier)
Backed by 8VC, we're building a world‑class team to tackle one of the industry’s most critical infrastructure problems.
About the Role
We’re designing the future of enterprise AI infrastructure — grounded in agents, retrieval‑augmented generation (RAG), knowledge graphs, and multi‑tenant governance.
We’re looking for an ML/AI Research Engineer to join our AI Lab and lead the design, training, evaluation, and optimization of agent‑native AI models. You'll work at the intersection of LLMs, vector search, graph reasoning, and reinforcement learning — building the intelligence layer that sits on top of our enterprise data fabric.
This isn’t a prompt engineer role. It’s full‑cycle ML: from data curation and fine‑tuning to evaluation, interpretability, and deployment — with cost‑awareness, alignment, and agent coordination all in scope.
Core Responsibilities
Fine‑tune and evaluate open‑source LLMs (e.g. LLaMA 3, Mistral, Falcon, Mixtral) for enterprise use cases with both structured and unstructured data
Build and optimize RAG pipelines using LangChain, LangGraph, LlamaIndex, or Dust — integrated with our vector DBs and internal knowledge graph
Train agent architectures (ReAct, AutoGPT, BabyAGI, OpenAgents) using enterprise task data
Develop embedding‑based memory and retrieval chains with token‑efficient chunking strategies
Create reinforcement learning pipelines to optimize agent behaviors (e.g. RLHF, DPO, PPO)
Establish scalable evaluation harnesses for LLM and agent performance, including synthetic evals, trace capture, and explainability tools
Contribute to model observability, drift detection, error classification, and alignment
Optimize inference latency and GPU resource utilization across cloud and on‑prem environments
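To give a flavor of the chunking work mentioned above, here is a minimal, illustrative sketch of token-window chunking with overlap. It is a toy in plain Python, not our production pipeline; the window and overlap sizes are arbitrary placeholder values:

```python
def chunk_tokens(tokens, max_tokens=128, overlap=16):
    """Split a token list into overlapping windows.

    Overlap preserves context across chunk boundaries; smaller windows
    reduce embedding cost but risk splitting related facts apart.
    """
    if overlap >= max_tokens:
        raise ValueError("overlap must be smaller than max_tokens")
    step = max_tokens - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_tokens])
        if start + max_tokens >= len(tokens):
            break
    return chunks

tokens = list(range(300))  # stand-in for real tokenizer output
chunks = chunk_tokens(tokens, max_tokens=128, overlap=16)
```

The real trade-off space (semantic vs. fixed-size splitting, per-document overlap tuning) is much richer; this only shows the shape of the problem.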
What We’re Looking For
Deep experience fine‑tuning open‑source LLMs using Hugging Face Transformers, DeepSpeed, vLLM, FSDP, and LoRA/QLoRA
Worked with both base and instruction‑tuned models; familiar with SFT, RLHF, DPO pipelines
Comfortable building and maintaining custom training datasets, filters, and eval splits
Understand trade‑offs in batch size, token window, optimizer, precision (FP16, bfloat16), and quantization
Experience building enterprise‑grade RAG pipelines integrated with real‑time or contextual data
Familiar with LangChain, LangGraph, LlamaIndex, and open‑source vector DBs (Weaviate, Qdrant, FAISS)
Experience grounding models with structured data (SQL, graph, metadata) + unstructured sources
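Grounding retrieval in structured data plus embeddings, as described above, can be sketched in a few lines. This is a deliberately tiny illustration (toy embeddings, a made-up tenant field, hand-written cosine similarity), not our actual retrieval stack:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: each chunk carries an embedding plus structured metadata.
corpus = [
    {"text": "Q3 revenue grew 12%", "embedding": [0.9, 0.1, 0.0], "tenant": "acme"},
    {"text": "Onboarding checklist", "embedding": [0.1, 0.9, 0.0], "tenant": "acme"},
    {"text": "Q3 revenue fell 4%", "embedding": [0.9, 0.2, 0.0], "tenant": "globex"},
]

def retrieve(query_emb, tenant, k=1):
    """Filter on structured metadata first, then rank by embedding similarity."""
    candidates = [c for c in corpus if c["tenant"] == tenant]
    ranked = sorted(candidates,
                    key=lambda c: cosine(query_emb, c["embedding"]),
                    reverse=True)
    return [c["text"] for c in ranked[:k]]

hits = retrieve([1.0, 0.0, 0.0], tenant="acme")
```

The metadata filter before similarity ranking is the key idea: structured fields (SQL columns, graph properties) narrow the candidate set so embeddings only rank what is actually in scope.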
Bonus:
Worked with Neo4j, PuppyGraph, RDF, OWL, or other semantic modeling systems
Experience training or customizing agent frameworks with multi‑step reasoning and memory
Understand common agent loop patterns (e.g. Plan→Act→Reflect), memory recall, and tool use
Familiar with self‑correction, multi‑agent communication, and agent ops logging
Strong background in token cost optimization, chunking strategies, reranking (e.g. Cohere, Jina), compression, and retrieval latency tuning
Experience running models under quantized (int4/int8) or multi‑GPU settings with inference tuning (vLLM, TGI)
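The Plan→Act→Reflect pattern mentioned above reduces to a simple control loop. The sketch below is purely illustrative: the "planner" is a naive keyword match and the tool registry is made up, but it shows the plan/act/reflect phases that real agent frameworks elaborate on:

```python
def run_agent(goal, tools, max_steps=5):
    """Minimal Plan→Act→Reflect loop over a tool registry."""
    history = []
    for _ in range(max_steps):
        # Plan: naive policy, pick the first tool whose name appears in the goal.
        tool_name = next((name for name in tools if name in goal), None)
        if tool_name is None:
            break
        # Act: invoke the chosen tool on the goal.
        result = tools[tool_name](goal)
        history.append((tool_name, result))
        # Reflect: stop once a usable result has been produced.
        if result is not None:
            return result, history
    return None, history

# Hypothetical tool: sums the integers found in the goal string.
tools = {"add": lambda g: sum(int(t) for t in g.split() if t.isdigit())}
answer, trace = run_agent("add 2 and 3", tools)
```

Production loops replace the keyword planner with an LLM call and add memory recall, self-correction, and trace logging, but the three-phase skeleton is the same.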
Tech Stack
LLM Training & Inference: Hugging Face Transformers, DeepSpeed, vLLM, FlashAttention, FSDP, LoRA
Agent Orchestration: LangChain, LangGraph, ReAct, OpenAgents, LlamaIndex
Vector DBs: Weaviate, Qdrant, FAISS, Pinecone, Chroma
Graph Knowledge Systems: Neo4j, PuppyGraph, RDF, Gremlin, JSON-LD
Storage & Access: Iceberg, DuckDB, Postgres, Parquet, Delta Lake
Evaluation: OpenLLM Evals, TruLens, Ragas, LangSmith, Weights & Biases
Compute: Ray, Kubernetes, TGI, SageMaker, Lambda Labs, Modal
Languages: Python (core), optionally Rust (for inference layers) or JS (for UX experimentation)
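Since int4/int8 quantization comes up repeatedly above, here is a toy illustration of the core precision trade-off: symmetric per-tensor int8 quantization and its round-trip error. The weight values are arbitrary; real stacks use library kernels (e.g. in vLLM or bitsandbytes), not hand-rolled loops like this:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [x * scale for x in q]

weights = [0.02, -1.27, 0.5, 0.003]   # toy weight tensor
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The rounding error is bounded by half the scale step, which is why quantization cuts memory roughly 4x versus FP32 at a small, predictable accuracy cost.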
Who You Are
Startup DNA: resourceful, fast‑moving, and capable of working in ambiguity
Deep curiosity about agent‑based architectures and real‑world enterprise complexity
Comfortable owning model performance end‑to‑end: from dataset to deployment
Strong instincts around explainability, safety, and continuous improvement
Enjoy pair‑designing with product and UX to shape capabilities, not just APIs
This role is foundational to our thesis: that agents + enterprise data + knowledge modeling can create intelligent infrastructure for real‑world, multi‑billion‑dollar workflows. Your work won’t be buried in research reports — it will be productionized and activated by hundreds of users and hundreds of thousands of decisions. If this is your dream role, we would love to hear from you.