Information Security Specialist – AI Penetration Tester; B361
Work Location: Toronto, Ontario, Canada
Hours: 37.5
Line Of Business: Technology Solutions
Pay Details: $114,000 - $136,800 CAD
This role is temporarily eligible for a pay premium above the posted salary range that is reassessed annually. You are encouraged to have an open dialogue with your recruiter who can provide more specific pay details for this role.
TD is committed to providing fair and equitable compensation opportunities to all colleagues. Growth opportunities and skill development are defining features of the colleague experience, and compensation policies and practices have been designed to allow colleagues to progress through the salary range over time as they progress in their role. The base pay actually offered may vary based upon the candidate's skills and experience, job-related knowledge, geographic location, and other specific business and organizational needs.
As a candidate, you are encouraged to ask compensation-related questions and have an open dialogue with your recruiter, who can provide you more specific details for this role.
Job Description
Role Summary:
The Information Security Specialist – AI Penetration Tester is responsible for conducting advanced offensive security testing across AI/ML systems, LLM integrations, GenAI platforms, and associated infrastructure. This role serves as a subject‑matter expert in AI/LLM security, partnering with engineering, cyber, cloud, and architecture teams to identify vulnerabilities, improve controls, and ensure safe and compliant deployment of AI capabilities across the enterprise.
- AI/LLM Offensive Security & Vulnerability Testing
  - Conduct Penetration Tests: Design and execute comprehensive penetration tests targeting AI/ML models, LLM applications, model pipelines, retrieval systems, data agents, and AI-enabled business workflows.
  - AI/LLM Vulnerability Analysis: Identify vulnerabilities such as jailbreaking, prompt injection, model extraction, adversarial ML attacks, data poisoning, RAG bypasses, and safety guardrail circumvention.
  - Tooling & Automation: Evaluate and develop tooling (including internal utilities and open-source frameworks) to automate and scale AI/LLM security testing.
- Security Architecture, Hardening & Risk Assessment
  - Assess Security Posture: Analyze training data governance, guardrail design, inference endpoints, system prompts, agent autonomy, model monitoring, and model-ops pipelines.
  - Risk Assessments: Perform security and safety risk analyses on new and existing AI/ML deployments, including cloud-based services, APIs, model marketplaces, and third-party LLM integrations.
  - Model Supply Chain Security: Assess AI supply chain risks, dependency integrity, and alignment with enterprise standards and regulatory obligations.
- Documentation, Reporting & Communication
  - Report Findings: Deliver clear, actionable findings to both technical and non-technical stakeholders. Produce detailed reporting including:
    - Executive summaries
    - Technical proof-of-concepts
    - Prioritized remediation recommendations
  - Stakeholder Engagement: Collaborate with Engineering, Data Science, Cloud, Cyber Defense, Architecture, and Risk to remediate findings and improve AI security posture.
- Governance, Standards & Continuous Improvement
  - Develop Best Practices: Contribute to organization-wide AI security standards, policies, control objectives, and hardening practices.
  - Regulatory Compliance: Ensure AI penetration testing aligns with regulatory, privacy, model safety, and internal policy requirements.
  - Continuous Learning: Maintain deep expertise in emerging AI threats, industry frameworks, evaluation methodologies, and global safety standards.
- Incident Response & Audit Support
  - Participate in AI/ML-related security incident investigations, providing subject-matter expertise on root cause analysis and exploitation methods.
  - Support audit preparation and assist in drafting management responses, remediation plans, and risk treatment documentation.
Technical Skills:
- 5+ years in application security or penetration testing, with hands‑on experience in AI/ML environments preferred.
- 7+ years of experience using penetration testing tools (Metasploit, Burp Suite, Nmap,…