
Backend LLM / MCP Engineer

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Recruiting From Scratch
Full Time position
Listed on 2026-03-01
Job specializations:
  • Software Development
    Backend Developer, Cloud Engineer - Software, AI Engineer, Software Engineer
Salary/Wage Range: $160,000 – $220,000 USD per year
Job Description & How to Apply Below
Position: Backend LLM / MCP Engineer

Who is Recruiting from Scratch:

Recruiting from Scratch is a specialized talent firm dedicated to helping companies build exceptional teams. We partner closely with our clients to deeply understand their needs, then connect them with top‑tier candidates who are not only highly skilled but also the right fit for the company’s culture and vision. Our mission is simple: place the best people in the right roles to drive long‑term success for both clients and candidates.

Title of Role: Backend LLM / MCP Engineer
Location: San Francisco, CA
Company Stage of Funding: Seed Stage (YC-backed), Profitable
Office Type: Onsite (5–6 Days per Week)
Salary: $160,000 – $220,000 + Equity

Company Description

Our client is a fast‑growing, venture‑backed AI infrastructure startup building one of the most widely adopted open‑source LLM gateways in the ecosystem. Their platform standardizes 100+ LLM APIs behind a single, consistent OpenAI‑compatible interface, enabling developers and enterprises to integrate seamlessly across providers.

Backed by leading early‑stage investors and generating multi‑million ARR with strong growth, the company is profitable and scaling quickly. With a small, high‑caliber team based in San Francisco, they are defining the interoperability layer for the modern AI stack.

This is a rare opportunity to join an early team shaping core AI infrastructure used by thousands of developers worldwide.

What You Will Do

As a Backend LLM / MCP Engineer, you will play a foundational role in building and scaling the interoperability layer for large language models.

You will:

  • Design and implement transformations that map OpenAI‑compatible API requests to provider‑specific LLM APIs
  • Add and maintain support for new LLM providers and evolving API specifications
  • Handle provider‑specific edge cases, performance considerations, and streaming constraints
  • Build scalable logging, cost tracking, and spend aggregation systems across millions of API calls
  • Improve reliability and performance of high‑throughput backend services
  • Contribute directly to a widely adopted open‑source project
  • Collaborate closely with the founding team on architecture and product direction
  • Engage with users and developers to understand real‑world integration needs

This role combines deep backend engineering, API design, distributed systems thinking, and direct impact on AI developer tooling.
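As a rough illustration of the first responsibility above, a request transformation maps the OpenAI‑compatible shape onto a provider's native payload. The sketch below is purely hypothetical (the field mappings follow the public shapes of the OpenAI and Anthropic chat APIs, but the function and its handling of parameters are assumptions for illustration, not the client's actual code):

```python
def to_anthropic(openai_req: dict) -> dict:
    """Map an OpenAI-compatible chat request to an Anthropic-style payload.

    Hypothetical sketch: a production gateway handles many more
    parameters, streaming flags, and provider-specific edge cases.
    """
    # Anthropic takes the system prompt as a top-level field,
    # not as a message inside the conversation list.
    system_parts = [m["content"] for m in openai_req["messages"]
                    if m["role"] == "system"]
    messages = [m for m in openai_req["messages"] if m["role"] != "system"]

    payload = {
        "model": openai_req["model"],
        "messages": messages,
        # Anthropic requires max_tokens; OpenAI treats it as optional.
        "max_tokens": openai_req.get("max_tokens", 1024),
    }
    if system_parts:
        payload["system"] = "\n".join(system_parts)
    if "temperature" in openai_req:
        payload["temperature"] = openai_req["temperature"]
    return payload
```

Each supported provider gets a transformation like this, which is why adding providers and tracking evolving API specifications is a core part of the role.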

Ideal Candidate Background
  • 1+ years of backend engineering experience building production systems
  • Strong proficiency in Python and experience with modern backend frameworks (e.g., FastAPI)
  • Experience designing and integrating APIs at scale
  • Exposure to distributed systems, performance optimization, or high‑throughput services
  • Comfortable working in small, fast‑moving teams with high ownership
  • Experience maintaining or contributing to open‑source projects is a plus
  • Background in AI/ML infrastructure, developer tooling, or API platforms is highly valued
  • Startup experience (founder or early employee) is a strong plus

You thrive in environments where speed, ownership, and accountability matter. You enjoy solving infrastructure‑level problems and care deeply about clean abstractions and developer experience.

Preferred
  • Experience working with LLM APIs (OpenAI, Anthropic, Bedrock, Azure, Vertex, etc.)
  • Familiarity with asynchronous networking (e.g., httpx, aiohttp)
  • Experience with Postgres, Redis, cloud storage (S3, GCS), or observability tools
  • Background in API standardization, data pipelines, or developer SDKs
  • Experience scaling systems handling millions of events or logs
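
As a loose sketch of the spend‑aggregation work mentioned above (the pricing table, function names, and log format are all invented for illustration; a real gateway maintains per‑provider pricing maps and processes millions of logged calls):

```python
from collections import defaultdict

# Hypothetical per-1M-token prices in USD, keyed by model name.
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "claude-3-haiku": {"input": 0.25, "output": 1.25},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of a single API call, in USD."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def aggregate_spend(calls: list[dict]) -> dict[str, float]:
    """Roll up total spend per API key across a batch of logged calls."""
    totals: dict[str, float] = defaultdict(float)
    for c in calls:
        totals[c["api_key"]] += call_cost(
            c["model"], c["input_tokens"], c["output_tokens"]
        )
    return dict(totals)
```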
Compensation, Benefits & Additional Details
  • Base Salary: $160,000 – $220,000
  • Equity: Meaningful early‑stage equity (0.5% – 3%, depending on experience)
  • Full‑time position
  • Onsite collaboration in San Francisco
  • High ownership and direct exposure to leadership
  • Opportunity to shape foundational AI infrastructure used globally

This role is ideal for engineers who want to work at the core of AI infrastructure, influence open standards, and build systems that power the next generation of AI products.

If you're excited about shaping the interoperability layer of the LLM ecosystem and working alongside a highly technical founding team, we’d love to connect.
