Security Engineer, AI Security
Locations:
- Orlando, United States of America
- Vancouver, Canada
- Austin, United States of America
- Kirkland, United States of America
- Redwood City, United States of America
Role
Worker Type: Regular Employee
Studio/Department: CT - Security
Work Model: Hybrid
Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.
EA Security is seeking an offensive-minded Security Engineer to help secure AI-enabled systems, agents, and LLM-integrated workflows across EA’s games, services, and enterprise platforms. This role focuses on identifying real-world security risks in both commercial and internally developed AI platforms, and on building scalable testing, automation, and AI-driven security agents that extend the team’s impact.
You will work closely with Application Security and Red Team engineers, applying an attacker’s mindset to AI systems while building scalable security testing, automation, and guardrails that meaningfully reduce risk. This role is hands‑on, technical, and impact‑driven, with an emphasis on practical exploitation, adversarial testing, and scalable security outcomes.
This role is ideal for security engineers who enjoy breaking complex systems, reasoning about abuse paths, and turning deep technical findings into scalable and durable AI security improvements.
This position reports into the Application Security and Red Teaming organization.
Responsibilities
- Perform security testing and reviews of AI-enabled applications, agents, and workflows, including architecture, design, and implementation analysis.
- Identify and validate vulnerabilities in LLM-based systems, such as data leakage, insecure tool use, authentication gaps, and abuse paths.
- Evaluate AI systems for prompt injection (direct, indirect, conditional, and persistent), including risks introduced through retrieval-augmented generation and agentic workflows.
- Conduct adversarial testing of commercial AI platforms such as Microsoft Copilot, Google Agent Space, and OpenAI ChatGPT, as well as internally developed AI systems.
- Assess agentic and multi-agent workflows for privilege escalation, unsafe action chaining, cross-agent abuse, and unintended side effects.
- Design, build, and operate AI-driven security agents and automation, including multi-agent workflows, that scale application security, red teaming, and AI security efforts.
- Develop tooling, test harnesses, and repeatable validation frameworks to expand AI security coverage across teams.
- Partner with application engineers to translate findings into actionable mitigations, secure design patterns, and engineering guidance.
- Collaborate with Red Team and App Sec engineers to integrate AI attack techniques and agent-based testing into broader offensive security activities.
- Contribute reusable insights, documentation, and guardrails that help teams adopt AI securely and reduce future systemic risk.
Required Qualifications
- Strong background in application security, offensive security, or a combination of both.
- Hands-on experience identifying and exploiting security weaknesses in modern applications and services.
- Experience testing or securing AI-enabled systems, LLM integrations, or agent-based workflows.
- Ability to reason about attacker misuse, abuse scenarios, and emergent behavior beyond traditional vulnerability classes.
- Familiarity with source code review and security tooling such as CodeQL, Semgrep, or equivalent.
- Strong collaboration and communication skills, with the ability to work directly with engineers and security partners.
Preferred Qualifications
- Experience assessing commercial AI platforms or enterprise AI services.
- Familiarity with agent orchestration, tool calling, function execution, or multi-agent systems.
- Experience with traditional red team tooling or adversary simulation techniques.
- Exposure to…