Description
Drive forward-looking security strategy and engineering solutions for Generative AI and LLM platforms, leveraging AI security capabilities to augment and fortify existing enterprise solutions. You will act as a key technical leader, bridging the gap between cutting-edge AI innovation and core infrastructure security.
This role is primarily focused on researching, evaluating, and conducting proof-of-concepts for new security technologies and protocols that protect our assets deployed in Azure, Google Cloud, or on-premises environments. You will focus on high-impact areas, including Agentic AI protocols (A2A, MCP), API security, Identity and Access Management, and third-party integrations for LLMs, AI models, and RAG applications.
You will partner closely with AI Development teams to provide essential infrastructure security expertise in support of broader security initiatives, as well as with the DevSecOps and Platform Engineering teams to translate successful security PoCs into robust, production-ready solutions and infrastructure controls.
Key Responsibilities:
Research, Evaluation, and Design
This role centers on providing AI security infrastructure solutions: researching, evaluating, and designing controls that close gaps in existing security coverage and support leadership strategy and roadmaps. You will be responsible for conducting proof-of-concepts (PoCs) for new security technologies and protocols, and for supporting hardening efforts that protect our mission-critical assets deployed across Azure, Google Cloud, and on-premises environments.
1. Advanced Protocol and Application Security
Generative AI Protocols:
Evaluate and secure emerging standards for multi-agent workflows, such as the Agent-to-Agent (A2A) protocol and the Model Context Protocol (MCP).
Third-Party Security:
Conduct deep security assessments and validation of all infrastructure and connection points for third-party LLM and RAG (Retrieval-Augmented Generation) applications.
Threat Modeling:
Support threat modeling exercises for new AI applications and pipelines to proactively identify design flaws and adversarial attack vectors (e.g., prompt injection paths).
Mitigation Solutions:
Support the design, build, and testing of security controls to mitigate common AI/ML attacks as outlined by frameworks such as the OWASP Top 10 for LLM Applications and MITRE ATLAS.
2. Access, Identity, and Cloud Controls
IAM Design:
Define and implement security designs for Identity and Access Management (IAM), specializing in securing non-human identities, service principals, and cross-cloud access.
API Security:
Own the security strategy for all AI service consumption, including hardening of API Gateways and securing authentication flows (e.g., OAuth 2.0/OIDC) for model endpoints.
Secrets Management:
Design and PoC the secure storage, injection, and rotation of confidential data (API keys, model weights, database credentials) using solutions like Azure Key Vault and GCP Secret Manager in support of AI Security Infrastructure initiatives.
AI Cloud Hardening:
Establish security configuration baselines and network segmentation (e.g., Private Link, VPC Service Controls) for AI-specific cloud resources on Azure and GCP.
3. Collaboration and Strategy Translation
AI Red Team Support:
Provide essential infrastructure security expertise and tooling to support the AI Red Team program, helping them build secure testing environments and validate attack findings.
Translation to Production:
Collaborate with DevOps, Governance, Vulnerability Management, and Platform Engineering partners to translate successful security PoCs and designs into robust, production-ready solutions and Infrastructure as Code (IaC) controls.
Ideal Candidate Profile
Technical Skills
Hands-on experience securing platforms and services in Microsoft Azure and Google Cloud Platform (GCP), with an understanding of hybrid security models.
In-depth knowledge of Identity and Access Management (IAM) concepts, including implementation experience with OAuth 2.0/OIDC and modern token-based authentication systems.
Solid background in designing and testing the security of REST APIs and associated middleware (e.g., API Gateways, WAFs).
Practical experience designing or implementing solutions for secure secret storage and retrieval (e.g., Azure Key Vault, GCP Secret Manager, HashiCorp Vault, Hardware Security Modules).
Ability to script in Python, Go, PowerShell, or similar languages (Python preferred) for security tool evaluation, PoC implementation, and security automation.
AI and Emerging Protocols