
ML/AI Research Engineer — Agentic AI Lab (Founding Team)

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Fabrion
Full Time position
Listed on 2026-03-01
Job specializations:
  • Software Development
    AI Engineer, Machine Learning/ML Engineer, Data Scientist
Salary/Wage Range: 200,000 – 250,000 USD per year
Job Description

Location: San Francisco Bay Area
Type: Full-Time
Compensation: Competitive salary + meaningful equity (founding tier)

Backed by 8VC, we're building a world‑class team to tackle one of the industry’s most critical infrastructure problems.

About the Role

We’re designing the future of enterprise AI infrastructure — grounded in agents, retrieval‑augmented generation (RAG), knowledge graphs, and multi‑tenant governance.

We’re looking for an ML/AI Research Engineer to join our AI Lab and lead the design, training, evaluation, and optimization of agent‑native AI models. You'll work at the intersection of LLMs, vector search, graph reasoning, and reinforcement learning — building the intelligence layer that sits on top of our enterprise data fabric.

This isn’t a prompt engineer role. It’s full‑cycle ML: from data curation and fine‑tuning to evaluation, interpretability, and deployment — with cost‑awareness, alignment, and agent coordination all in scope.

Core Responsibilities
  • Fine‑tune and evaluate open‑source LLMs (e.g. LLaMA 3, Mistral, Falcon, Mixtral) for enterprise use cases with both structured and unstructured data

  • Build and optimize RAG pipelines using LangChain, LangGraph, LlamaIndex, or Dust — integrated with our vector DBs and internal knowledge graph

  • Train agent architectures (ReAct, AutoGPT, BabyAGI, OpenAgents) using enterprise task data

  • Develop embedding‑based memory and retrieval chains with token‑efficient chunking strategies

  • Create reinforcement learning pipelines to optimize agent behaviors (e.g. RLHF, DPO, PPO)

  • Establish scalable evaluation harnesses for LLM and agent performance, including synthetic evals, trace capture, and explainability tools

  • Contribute to model observability, drift detection, error classification, and alignment

  • Optimize inference latency and GPU resource utilization across cloud and on‑prem environments
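To give a flavor of the "token-efficient chunking" work above: the core of a chunking strategy is a sliding window with overlap, sketched here in a few lines (the window and overlap sizes, and the token-list input, are illustrative placeholders, not production settings):

```python
def chunk_tokens(tokens, window=256, overlap=32):
    """Split a token list into overlapping windows.

    Overlap preserves context across chunk boundaries so retrieval
    doesn't lose sentences that straddle a cut point.
    """
    if window <= overlap:
        raise ValueError("window must exceed overlap")
    step = window - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
    return chunks

# Example: 10 tokens, window 4, overlap 1 -> windows advance in steps of 3
tokens = list(range(10))
print(chunk_tokens(tokens, window=4, overlap=1))
```

In practice the window and overlap are tuned per embedding model and per corpus; the trade-off is token spend versus boundary recall.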

Desired Experience

Model Training
  • Deep experience fine‑tuning open‑source LLMs using Hugging Face Transformers, DeepSpeed, vLLM, FSDP, and LoRA/QLoRA

  • Worked with both base and instruction‑tuned models; familiar with SFT, RLHF, DPO pipelines

  • Comfortable building and maintaining custom training datasets, filters, and eval splits

  • Understand trade‑offs in batch size, token window, optimizer choice, precision (FP16, bfloat16), and quantization
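The SFT → DPO progression mentioned above boils down to a simple pairwise objective; here is a numeric sketch of the per-pair DPO loss (the log-probabilities below are made-up scalars standing in for real model outputs, and beta is an illustrative default):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are sequence log-probs under the policy (pi_*) and the
    frozen reference model (ref_*). The loss pushes the policy to
    raise the chosen completion's likelihood relative to the
    rejected one, scaled by beta.
    """
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# Policy already prefers the chosen answer -> loss below log(2)
print(round(dpo_loss(-1.0, -3.0, -2.0, -2.0), 4))
```

When policy and reference agree exactly, the loss sits at log 2; it falls as the policy separates chosen from rejected completions.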

RAG + Knowledge Graphs
  • Experience building enterprise‑grade RAG pipelines integrated with real‑time or contextual data

  • Familiar with LangChain, LangGraph, LlamaIndex, and open‑source vector DBs (Weaviate, Qdrant, FAISS)

  • Experience grounding models with structured data (SQL, graph, metadata) + unstructured sources

  • Bonus: worked with Neo4j, PuppyGraph, RDF, OWL, or other semantic modeling systems
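Whatever the framework, the retrieval core of a RAG pipeline is nearest-neighbor search over embeddings. A dependency-free sketch using cosine similarity (the 3-dimensional vectors and document IDs are toy stand-ins for real embedding-model output):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, corpus, k=2):
    """Rank (doc_id, vector) pairs by similarity to the query; return top-k IDs."""
    scored = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

corpus = [
    ("pricing_faq", [0.9, 0.1, 0.0]),
    ("api_reference", [0.1, 0.9, 0.2]),
    ("onboarding", [0.2, 0.1, 0.9]),
]
print(retrieve([1.0, 0.2, 0.0], corpus, k=2))
```

Production systems swap the linear scan for an approximate index (Weaviate, Qdrant, FAISS) and fuse the hits with structured grounding, but the ranking contract is the same.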

Agent Intelligence
  • Experience training or customizing agent frameworks with multi‑step reasoning and memory

  • Understand common agent loop patterns (e.g. Plan→Act→Reflect), memory recall, and tool use

  • Familiar with self‑correction, multi‑agent communication, and agent ops logging
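The Plan→Act→Reflect pattern referenced above reduces to a bounded loop over planning, tool execution, and a self-check. A framework-free sketch, in which `plan`, `act`, and `reflect` are hypothetical callables standing in for LLM and tool calls:

```python
def run_agent(goal, plan, act, reflect, max_steps=5):
    """Minimal Plan->Act->Reflect loop with step-level memory.

    Each iteration: plan the next action from memory, execute it,
    then let a reflection step decide whether the goal is met.
    """
    memory = []
    for _ in range(max_steps):
        action = plan(goal, memory)
        observation = act(action)
        memory.append((action, observation))
        done, answer = reflect(goal, memory)
        if done:
            return answer
    return None  # step budget exhausted without a confident answer

# Toy stand-ins: plan counts upward, act squares, reflect checks the goal
plan = lambda goal, mem: len(mem) + 1
act = lambda action: action * action
reflect = lambda goal, mem: (mem[-1][1] >= goal, mem[-1][1])
print(run_agent(9, plan, act, reflect))
```

The `memory` list is where real systems hang trace capture and agent-ops logging; the `max_steps` bound is the crude ancestor of token-cost budgeting.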

Optimization
  • Strong background in token cost optimization, chunking strategies, reranking (e.g. Cohere, Jina), compression, and retrieval latency tuning

  • Experience running models under quantized (int4/int8) or multi‑GPU settings with inference tuning (vLLM, TGI)
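Int8/int4 serving trades precision for memory; the core round-trip, symmetric per-tensor quantization, fits in a few lines (a didactic sketch of the idea, not any particular runtime's kernel):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from quantized integers."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The reconstruction error is bounded by half the scale, which is why outlier weights (which inflate the scale) are the usual failure mode and why real schemes quantize per-channel or per-group instead.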

Preferred Tech Stack
  • LLM Training & Inference: Hugging Face Transformers, DeepSpeed, vLLM, FlashAttention, FSDP, LoRA

  • Agent Orchestration: LangChain, LangGraph, ReAct, OpenAgents, LlamaIndex

  • Vector DBs: Weaviate, Qdrant, FAISS, Pinecone, Chroma

  • Graph Knowledge Systems: Neo4j, PuppyGraph, RDF, Gremlin, JSON-LD

  • Storage & Access: Iceberg, DuckDB, Postgres, Parquet, Delta Lake

  • Evaluation: OpenLLM Evals, TruLens, Ragas, LangSmith, Weights & Biases

  • Compute: Ray, Kubernetes, TGI, SageMaker, Lambda Labs, Modal

  • Languages: Python (core); optionally Rust (for inference layers) or JS (for UX experimentation)

Soft Skills & Mindset
  • Startup DNA: resourceful, fast‑moving, and comfortable operating amid ambiguity

  • Deep curiosity about agent‑based architectures and real‑world enterprise complexity

  • Comfortable owning model performance end‑to‑end: from dataset to deployment

  • Strong instincts around explainability, safety, and continuous improvement

  • Enjoy pair‑designing with product and UX to shape capabilities, not just APIs

Why This Role Matters

This role is foundational to our thesis: that agents + enterprise data + knowledge modeling can create intelligent infrastructure for real‑world, multi‑billion‑dollar workflows. Your work won't be buried in research reports; it will be productionized and activated by hundreds of users and hundreds of thousands of decisions. If this sounds like your dream role, we'd love to hear from you.
