AI Engineer – Gen AI/SWE

Job in Bellevue, King County, Washington, 98009, USA
Listing for: Weights & Biases
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    AI Engineer
Salary/Wage Range: 80,000 – 100,000 USD per year
Job Description & How to Apply Below
Position: AI Engineer – Gen AI/SWE – Weights & Biases

CoreWeave, the AI Hyperscaler™, acquired Weights & Biases to create the most powerful end-to-end platform to develop, deploy, and iterate AI faster. Since 2017, CoreWeave has operated a growing footprint of data centers covering every region of the US and across Europe, and was ranked one of the TIME100 Most Influential Companies of 2024. By bringing together CoreWeave's industry-leading cloud infrastructure with the best-in-class tools AI practitioners know and love from Weights & Biases, we're setting a new standard for how AI is built, trained, and scaled.

What You’ll Do

The AI team is a hands-on applied AI group at Weights & Biases that turns frontier research into teachable workflows. We collaborate with leading enterprises and the OSS community, and we are the team that took W&B from a few hundred users to millions and made it one of the most beloved tools in the ML community. This is a senior applied role at the research-to-product boundary.

You will design, implement, and evaluate LLM applications and agents with cutting‑edge techniques from the latest research, then document and teach them to our community and customers. The focus is application, not novel research: rapid prototyping, careful evaluation, and production‑grade reference implementations with clear trade‑offs. We prioritize responsible, safe deployment and reproducibility.

The Role
  • Ship end‑to‑end GenAI workflows (prompting → RAG → tools/agents → eval → serve) with reproducible repos, W&B Reports, and dashboards others can run.
  • Build agentic systems (tool use, function calling, multi‑step planners) with MCP servers/clients and secure tool/resource integrations.
  • Design evaluation harnesses (RAG/agent evals, golden sets, regression tests, telemetry) and drive continuous improvement via offline + online metrics.
  • Build in public:
    Publish engineering artifacts (code, docs, talks, tutorials) and engage with OSS and customer engineers; turn repeated patterns into reusable templates.
  • Partner with product/solutions to launch LLM‑powered features with clear latency/cost/SLO targets and safety/guardrail checks.
  • Run growth experiments that track how the artifacts you build drive usage of the Weights & Biases product suite.
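As a rough illustration of the eval-harness and regression-test work described above, here is a minimal golden-set check. All names (`answer`, `run_golden_set`, the canned answers) are hypothetical stand-ins, not W&B APIs; a real harness would call a model or retriever and log results to an experiment tracker.

```python
# Hypothetical sketch of a golden-set regression check for an LLM workflow.
# `answer` stands in for a real LLM/RAG pipeline call.

def answer(question: str) -> str:
    """Stand-in for an LLM/RAG pipeline; replace with a real model call."""
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(question, "unknown")

def run_golden_set(golden_set: list[tuple[str, str]], threshold: float = 0.9) -> dict:
    """Score exact-match accuracy against the golden set and flag regressions."""
    hits = sum(answer(q) == expected for q, expected in golden_set)
    accuracy = hits / len(golden_set)
    return {"accuracy": accuracy, "passed": accuracy >= threshold}

golden = [("capital of France?", "Paris"), ("2 + 2?", "4")]
report = run_golden_set(golden, threshold=0.9)
```

In practice the exact-match scorer would be swapped for task-appropriate metrics (retrieval hit rate, judge-model scores), and `passed` would gate CI.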
Who You Are
  • Software engineering: 6+ years building production systems; strong Python or TypeScript plus system design, testing, CI/CD, observability.
  • GenAI apps: shipped LLM‑powered features (tools/agents/function calling), with measurable impact (latency/cost/reliability).
  • Agentic patterns: implemented planners/executors, tool orchestration, sandboxing, and failure taxonomies; familiarity with agent infra concerns.
  • RAG: pragmatic mastery of chunking, embeddings, vector/hybrid search, rerankers; experience with vector DBs/search indices and retrieval policy design.
  • Evaluation: designed LLM/RAG/agent evals (offline golden sets, counterfactuals, user studies, guardrail tests); stats literacy (variance, CIs, power).
  • Serving & productization: comfortable with queueing, caching, streaming, and cost controls; can debug latency at model, retrieval, and network layers.
  • Public signal: 2+ substantial OSS repos/blog posts/talks/videos with adoption (stars, forks, downloads, views) and reproducible artifacts.
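The stats literacy called out above (variance, CIs, power) often comes down to putting error bars on an eval pass rate. A minimal sketch using the normal approximation (the function name is illustrative, not from any particular library):

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a pass rate."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)  # half-width of the interval
    return (max(0.0, p - half), min(1.0, p + half))

# 84 of 100 eval cases passed: the point estimate alone hides the uncertainty.
low, high = proportion_ci(84, 100)
```

For small n or extreme rates, a Wilson or bootstrap interval is the safer choice; the point is that an eval delta smaller than the interval width is noise, not signal.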
Preferred
  • Experience building with AI SDKs / agent frameworks (e.g., TypeScript/Python SDKs, planning libraries) and shipping developer-facing examples.
  • Production agent security/sandboxing, red‑teaming, and policy/PII enforcement.
  • Operated eval platforms or built judge models/heuristics; experience leading metrics reviews with product/UX.
  • Customer‑facing enablement: templates or reference implementations adopted by external teams at scale.
Wondering if you’re a good fit?
  • You love turning ambiguous product ideas into clean, benchmarked, production‑ready LLM/agent workflows others can use today.
  • You’re curious about the latest research from AI labs and excited to apply it to real‑world problems.
  • You’re an expert in at least one of: RAG quality at scale, agent/tool orchestration, or GenAI DX (SDKs, examples, docs)—and you can demonstrate it with code and metrics.
Why Us?

We work hard, have fun, and move…
