Senior Software Engineer, Cloud Inference
San Francisco, CA | New York, NY | Seattle, WA
Listed on 2026-03-01
IT/Tech
Cloud Computing, Systems Engineer, Data Engineer
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
The Cloud Inference team scales and optimizes Claude to serve massive audiences of developers and enterprises across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform, from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.
Our engineers are extremely high-leverage: we simultaneously drive multiple major revenue streams while optimizing compute, one of Anthropic's most precious resources. As we expand to more cloud platforms, the complexity of managing inference efficiently across providers with different hardware, networking stacks, and operational models grows significantly. We need engineers who can navigate these platform differences, build robust abstractions that work across providers, and make smart infrastructure decisions that keep us cost-effective at massive scale.
Your work will increase the scale at which our services operate, accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms, and ensure our LLMs meet rigorous safety, performance, and security standards.
What You’ll Do
- Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models
- Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms
- Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions
- Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity
- Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads
- Optimize inference cost and performance across providers—designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region
- Contribute to inference features that must work consistently across all platforms
- Analyze observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads
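To make the cost-aware routing responsibility above concrete, here is a purely illustrative sketch of directing a request to the cheapest provider/region that still has headroom. All names, the cost model, and the capacity numbers are hypothetical assumptions for illustration, not Anthropic's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Capacity:
    provider: str          # e.g. "aws", "gcp", "azure" (illustrative)
    region: str
    cost_per_token: float  # assumed relative unit cost, not real pricing
    free_slots: int        # remaining concurrent-request headroom

def route(pool: list[Capacity]) -> Capacity:
    """Pick the cheapest capacity slice that still has free headroom."""
    eligible = [c for c in pool if c.free_slots > 0]
    if not eligible:
        raise RuntimeError("no capacity available; shed or queue the request")
    return min(eligible, key=lambda c: c.cost_per_token)

# Hypothetical snapshot of cross-CSP capacity:
pool = [
    Capacity("aws", "us-east-1", 0.9, 4),
    Capacity("gcp", "us-central1", 0.7, 0),  # cheapest, but saturated
    Capacity("azure", "eastus", 0.8, 2),
]
best = route(pool)  # picks the azure slice: cheapest with headroom
```

A production system would of course weigh far more signals (latency, accelerator type, quota, failure domains); the sketch only shows the core placement decision the bullet describes.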
You may be a good fit if you:
- Have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users
- Have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code, or container orchestration
- Have a strong interest in inference
- Thrive in cross-functional collaboration with both internal teams and external partners
- Are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems
- Are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work
- Pick up slack, even when the work falls outside your job description
Strong candidates may also have:
- Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings
- A background in building platform-agnostic tooling or abstraction layers that work across cloud providers
- Hands-on experience with capacity management, cost optimization, or resource planning at scale across heterogeneous environments
- Strong familiarity with LLM inference optimization, batching, caching,…