AI Security Engineer
Location: Austin, Travis County, Texas, 78716, USA
Company: Crowe
Full Time position
Listed on 2026-01-12
Job specializations:
- IT/Tech: Cybersecurity, AI Engineer, Systems Engineer
Job Description
AI Security Engineer I (Senior Staff)
Serves as a senior technical expert responsible for securing enterprise AI and machine learning systems across their full lifecycle, including data ingestion, model training, inference pipelines, retrieval‑augmented generation (RAG) systems, and generative AI applications. This role leads advanced security assessments, identifies vulnerabilities unique to AI‑enabled platforms, and architects secure‑by‑design solutions for cloud and hybrid environments.
Responsibilities:
- Architecting secure deployment and operating models for AI, ML, and generative AI systems across cloud and hybrid environments.
- Conducting advanced AI security testing, including adversarial ML attacks, prompt injection simulations, and RAG manipulation assessments.
- Identifying and mitigating vulnerabilities in model‑serving infrastructure, feature stores, embedding pipelines, and vector databases.
- Designing guardrails, safety filters, access controls, and secure interaction patterns for LLM‑ and RAG‑based applications.
- Developing automated tooling to detect misconfigurations, insecure endpoints, and data exposure risks within AI pipelines.
- Collaborating with cloud and DevOps teams to secure Kubernetes clusters, GPU workloads, and infrastructure‑as‑code deployments.
- Analyzing logs, telemetry, and model outputs to detect anomalies, abuse patterns, model degradation, or malicious activity.
- Implementing encryption, secrets management, IAM policies, and network segmentation for AI workloads.
- Leading secure design and architecture reviews for AI features, APIs, and platform components.
- Documenting threat models, attack surfaces, risk assessments, mitigations, and compliance artifacts.
- Participating in AI‑specific incident response, investigation, and post‑incident analysis.
- Evaluating emerging AI security technologies, including model fingerprinting, inference protection, and secure execution environments.
- Supporting enterprise adoption of responsible AI, data protection, and regulatory compliance standards.
- Mentoring junior engineers, ML engineers, and security practitioners on AI security best practices.
- Contributing to cloud security posture management capabilities for AI‑enabled platforms.
Requirements:
- 4+ years of experience in cybersecurity, cloud security, ML engineering, or DevSecOps roles.
- Demonstrated experience securing AI/ML or generative AI systems in production environments.
- Strong understanding of ML pipelines, model architectures, and AI system components.
- Deep knowledge of adversarial ML attack vectors and mitigation techniques.
- Proficiency in Python, security testing tools, and cloud security frameworks.
- Ability to assess risk across distributed services, storage systems, inference APIs, and data pipelines.
- Strong communication skills and sound technical judgment in security decision‑making.
- Hands‑on experience with Microsoft Azure and M365 security environments.
- Willingness to travel occasionally for cross‑functional planning and collaboration.
- Bachelor’s degree in Cybersecurity, Computer Science, Engineering, or a related technical field, or equivalent experience.
- Master’s degree or advanced training in cybersecurity, AI, or related discipline.
- Security and cloud certifications such as SC‑100, SC‑900, SC‑200, SC‑300, AZ‑500, AI‑102, or equivalent AWS certifications.
- CISSP, CKS, or CompTIA Cloud certifications.
- Advanced experience securing AI platforms on Azure, including Kubernetes security (RBAC, network policies) and multi‑tenant GPU workloads.
- Experience securing container pipelines using image scanning, signing, and policy enforcement.
- Expertise with secrets management solutions (e.g., Azure Key Vault, HashiCorp Vault).
- Experience implementing zero‑trust architecture and securing CI/CD pipelines for AI systems.
- Deep knowledge of generative AI and RAG security, including prevention of prompt injections, jailbreaks, context poisoning, and embedding leakage.
- Experience designing safe‑output rendering patterns, guardrails, and red‑teaming processes for generative systems.
- Familiarity with emerging generative AI defense techniques such as model watermarking, inference integrity…