
Security Engineer, AI Security

Job in Vancouver, BC, Canada
Listing for: Electronic Arts
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    AI Engineer, Cybersecurity, Security Manager, Systems Engineer
Salary/Wage Range or Industry Benchmark: 100,000 - 130,000 CAD yearly
Job Description & How to Apply Below

Locations:

  • Orlando, United States of America
  • Vancouver, Canada
  • Austin, United States of America
  • Kirkland, United States of America
  • Redwood City, United States of America

Role

Worker Type: Regular Employee

Studio/Department: CT - Security

Work Model: Hybrid

Description & Requirements

Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.

Security Engineer, AI Security

EA Security is seeking an offensive-minded Security Engineer to help secure AI-enabled systems, agents, and LLM-integrated workflows across EA’s games, services, and enterprise platforms. This role focuses on identifying real-world security risks in both commercial and internally developed AI platforms, and on building scalable testing, automation, and AI-driven security agents that extend the team’s impact.

You will work closely with Application Security and Red Team engineers, applying an attacker’s mindset to AI systems while building scalable security testing, automation, and guardrails that meaningfully reduce risk. This role is hands‑on, technical, and impact‑driven, with an emphasis on practical exploitation, adversarial testing, and scalable security outcomes.

This role is ideal for security engineers who enjoy breaking complex systems, reasoning about abuse paths, and turning deep technical findings into scalable and durable AI security improvements.

This position reports into the Application Security and Red Teaming organization.

Responsibilities

Perform security testing and reviews of AI-enabled applications, agents, and workflows, including architecture, design, and implementation analysis.

Identify and validate vulnerabilities in LLM-based systems such as data leakage, insecure tool use, authentication gaps, and abuse paths.

Evaluate AI systems for prompt injection (direct, indirect, conditional, and persistent), including risks introduced through retrieval‑augmented generation and agentic workflows.

Conduct adversarial testing of commercial AI platforms such as Microsoft Copilot, Google Agent Space, and OpenAI ChatGPT, as well as internally developed AI systems.

Assess agentic and multi‑agent workflows for privilege escalation, unsafe action chaining, cross‑agent abuse, and unintended side effects.

Design, build, and operate AI‑driven security agents and automation, including multi‑agent workflows, that scale application security, red teaming, and AI security efforts.

Develop tooling, test harnesses, and repeatable validation frameworks to expand AI security coverage across teams.

Partner with application engineers to translate findings into actionable mitigations, secure design patterns, and engineering guidance.

Collaborate with Red Team and App Sec engineers to integrate AI attack techniques and agent‑based testing into broader offensive security activities.

Contribute reusable insights, documentation, and guardrails that help teams adopt AI securely and reduce future systemic risk.

Required Qualifications

Strong background in application security, offensive security, or a combination of both.

Hands‑on experience identifying and exploiting security weaknesses in modern applications and services.

Experience testing or securing AI‑enabled systems, LLM integrations, or agent‑based workflows.

Ability to reason about attacker misuse, abuse scenarios, and emergent behavior beyond traditional vulnerability classes.

Familiarity with source code review and security tooling such as CodeQL, Semgrep, or equivalent.

Strong collaboration and communication skills, with the ability to work directly with engineers and security partners.

Preferred Qualifications

Experience assessing commercial AI platforms or enterprise AI services.

Familiarity with agent orchestration, tool calling, function execution, or multi‑agent systems.

Experience with traditional red team tooling or adversary simulation techniques.

Exposure to…
