
Principal AI Quality Developer

Job in Toronto, Ontario, C6A, Canada
Listing for: Autodesk
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    AI Engineer, Machine Learning/ML Engineer, Data Scientist, Data Analyst
Salary/Wage Range or Industry Benchmark: CAD 80,000 - 100,000 per year
Job Description & How to Apply Below

Position Overview

We are hiring a principal quality engineering leader to design, build, and operationalize testing and evaluation for applied AI and ML systems across Autodesk products, with a focus on the Fusion Industry Cloud.

This role is responsible for validating the quality, safety, reliability, and performance of AI features and agentic workflows shipped by the applied AI engineering team. You will create repeatable evaluation approaches for predictive and generative AI systems, build automated test harnesses integrated into CI/CD, and partner closely with Applied AI engineers, Product, Security, and Platform teams to ensure AI capabilities meet defined acceptance thresholds and customer expectations.

You will lead the quality strategy for applied AI experiences that directly impact product experience and customer outcomes. This role offers the opportunity to define how we test and trust AI features at scale, partner with senior engineering leaders, and raise the bar for reliability and responsible AI in a high-impact product environment.

Responsibilities
  • Define and drive the quality strategy for applied AI and agentic features within Autodesk products and services with a focus on Fusion Industry Cloud
  • Own test planning, acceptance criteria, and release readiness for AI-powered capabilities
  • Design and build automated evaluation frameworks for LLM and ML outputs, including curated gold sets, synthetic test sets, adversarial test cases, and regression suites that run in CI/CD
  • Create repeatable evaluation metrics and scoring approaches for AI behavior, including relevance, faithfulness, hallucination rate, toxicity, bias, robustness, latency, and cost, and partner with Engineering and Product to define pass or fail thresholds
  • Validate retrieval-augmented generation workflows, tool use, and agent behavior across edge cases, including prompt sensitivity, context recall, and retrieval accuracy
  • Lead AI safety and abuse testing in partnership with Security, including prompt injection and jailbreak scenarios, data leakage risks, and misuse cases relevant to product integrations
  • Establish production monitoring and post-deployment validation for AI features, including drift detection, failure taxonomy, and alerting tied to customer impact and SLAs
  • Perform deep issue triage and root-cause analysis for AI-related defects across data, prompts, retrieval, model versions, and integration logic, and drive fixes with Applied AI engineers
  • Mentor engineers and QA practitioners on AI testing approaches, improve team testing standards, and contribute to hiring and interview loops for quality roles
  • Collaborate with cross-functional partners to ensure responsible AI practices are embedded into development workflows, including documentation of test methodology, evaluation results, and release sign-off criteria
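To illustrate the kind of work described above, here is a minimal sketch of a gold-set regression gate for an LLM feature. Everything in it is an assumption for illustration: `call_model` is a hypothetical stand-in for a real inference client, the keyword-recall scoring is a deliberately simple proxy for metrics like relevance or faithfulness, and the 90% threshold is an invented release gate, not an Autodesk standard.

```python
# Hypothetical gold-set regression gate for an LLM feature.
# All names and thresholds below are illustrative assumptions.

GOLD_SET = [
    {"prompt": "What units does the sketch use?",
     "must_include": ["millimeters"]},
    {"prompt": "Name the boolean operations.",
     "must_include": ["union", "cut"]},
]

PASS_THRESHOLD = 0.9  # assumed gate: >= 90% of gold cases must pass


def call_model(prompt: str) -> str:
    """Hypothetical model client; a real harness would call the
    deployed model or a pinned model version here."""
    canned = {
        "What units does the sketch use?":
            "The sketch uses millimeters.",
        "Name the boolean operations.":
            "Fusion supports union, cut, and intersect.",
    }
    return canned.get(prompt, "")


def score_case(case: dict) -> bool:
    """A case passes if every required keyword appears in the output
    (a crude relevance proxy; real suites would use richer scoring)."""
    output = call_model(case["prompt"]).lower()
    return all(kw in output for kw in case["must_include"])


def gate(gold_set: list[dict], threshold: float) -> bool:
    """Return True if the pass rate meets the release threshold."""
    passed = sum(score_case(c) for c in gold_set)
    return passed / len(gold_set) >= threshold
```

Wrapped in a pytest test and run in CI/CD, a gate like this turns the "pass or fail thresholds" above into an automated release check rather than a manual review step.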
Minimum Qualifications
  • 7+ years of experience in quality engineering, test automation, or software engineering, with demonstrated ownership of test strategy and automation for complex systems, including services, platforms, or product integrations
  • Strong programming skills in Python and experience with test frameworks such as pytest or unittest and building automated test harnesses integrated with CI/CD
  • Hands‑on experience testing AI or ML systems, including model output evaluation, error analysis, regression testing, and monitoring of quality over time
  • Practical experience validating LLM-based systems, including prompt testing, retrieval‑augmented generation workflows, hallucination detection, and robustness testing across edge cases
  • Experience defining measurable acceptance criteria and working with product and engineering teams to translate requirements into evaluation metrics and release gates
  • Proven ability to debug complex cross‑stack issues spanning application logic, APIs, data flows, model behavior, and evaluation pipelines
  • Strong communication skills and ability to influence without direct authority in a collaborative, matrixed environment
  • Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent experience
Preferred Qualifications
  • Experience with adversarial testing or AI security testing,…