
Information Security Specialist – AI Penetration Tester (B361)

Job in Toronto, Ontario, C6A, Canada
Listing for: TD Securities
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    Cybersecurity, AI Engineer
Salary/Wage Range: CAD 114,000 – 136,800 per year
Job Description & How to Apply Below
Position: Information Security Specialist (Information Security Specialist – AI Penetration Tester) (B361[...]

Work Location: Toronto, Ontario, Canada

Hours: 37.5

Line Of Business: Technology Solutions

Pay Details: $114,000 - $136,800 CAD
This role is temporarily eligible for a pay premium above the posted salary range that is reassessed annually. You are encouraged to have an open dialogue with your recruiter who can provide more specific pay details for this role.
TD is committed to providing fair and equitable compensation opportunities to all colleagues. Growth opportunities and skill development are defining features of the colleague experience; compensation policies and practices have been designed to allow colleagues to progress through the salary range over time as they progress in their role. The base pay actually offered may vary based upon the candidate's skills and experience, job-related knowledge, geographic location, and other specific business and organizational needs.
As a candidate, you are encouraged to ask compensation related questions and have an open dialogue with your recruiter who can provide you more specific details for this role.

Job Description

Role Summary:
The Information Security Specialist – AI Penetration Tester is responsible for conducting advanced offensive security testing across AI/ML systems, LLM integrations, GenAI platforms, and associated infrastructure. This role serves as a subject‑matter expert in AI/LLM security, partnering with engineering, cyber, cloud, and architecture teams to identify vulnerabilities, improve controls, and ensure safe and compliant deployment of AI capabilities across the enterprise.

Responsibilities
  • AI/LLM Offensive Security & Vulnerability Testing
    • Conduct Penetration Tests:
      Design and execute comprehensive penetration tests targeting AI/ML models, LLM applications, model pipelines, retrieval systems, data agents, and AI-enabled business workflows.
    • AI/LLM Vulnerability Analysis:
      Identify vulnerabilities such as jailbreaking, prompt injection, model extraction, adversarial ML attacks, data poisoning, RAG bypasses, and safety guardrail circumvention.
    • Tooling & Automation:
      Evaluate and develop tooling (including internal utilities and open‑source frameworks) to automate and scale AI/LLM security testing.
  • Security Architecture, Hardening & Risk Assessment
    • Assess Security Posture:
      Analyze training data governance, guardrail design, inference endpoints, system prompts, agent autonomy, model monitoring, and model‑ops pipelines.
    • Risk Assessments:
      Perform security and safety risk analyses on new and existing AI/ML deployments, including cloud‑based services, APIs, model marketplaces, and third‑party LLM integrations.
    • Model Supply Chain Security:
      Assess AI supply chain risks, dependency integrity, and alignment with enterprise standards and regulatory obligations.
  • Documentation, Reporting & Communication
    • Report Findings:
      Deliver clear, actionable findings to both technical and non-technical stakeholders. Produce detailed reporting including:
      • Executive summaries
      • Technical proof-of-concepts
      • Prioritized remediation recommendations
    • Stakeholder Engagement:
      Collaborate with Engineering, Data Science, Cloud, Cyber Defense, Architecture, and Risk to remediate findings and improve AI security posture.
  • Governance, Standards & Continuous Improvement
    • Develop Best Practices:
      Contribute to organization‑wide AI security standards, policies, control objectives, and hardening practices.
    • Regulatory Compliance:
      Ensure AI penetration testing aligns with regulatory, privacy, model safety, and internal policy requirements.
    • Continuous Learning:
      Maintain deep expertise in emerging AI threats, industry frameworks, evaluation methodologies, and global safety standards.
  • Incident Response & Audit Support
    • Participate in AI/ML–related security incident investigations, providing subject‑matter expertise on root cause analysis and exploitation methods.
    • Support audit preparation and assist in drafting management responses, remediation plans, and risk treatment documentation.
Requirements

Technical Skills:

  • 5+ years in application security or penetration testing, with hands‑on experience in AI/ML environments preferred.
  • 7+ years of experience using penetration testing tools (Metasploit, Burp Suite, Nmap,…