
Principal AI Research Scientist

Job in London, Greater London, EC1A, England, UK
Listing for: Autodesk, Inc.
Full Time position
Listed on 2026-01-11
Job specializations:
  • IT/Tech
    AI Engineer, Data Scientist, Machine Learning/ML Engineer, Artificial Intelligence
Job Description
Position: Principal Responsible AI Research Scientist

Position Overview

London, UK

As a Responsible AI Research Scientist at Autodesk Research, you will be at the forefront of developing safe, reliable, and robust AI systems that help our customers imagine, design, and make a better world. You will join a diverse team of scientists, researchers, engineers, and designers working on cutting‑edge projects in AI safety, trustworthy AI, and responsible AI practices across domains including design systems, computer vision, graphics, robotics, human‑computer interaction, sustainability, simulation, manufacturing, architectural design, and construction.

As a member of the AI Lab in Autodesk Research, you will leverage your expertise in responsible, safe, and reliable AI to ensure that our AI systems are not only innovative but also aligned with principles of safety, reliability, and societal benefit. Your research will play a crucial role in shaping the future of AI applications at Autodesk and beyond.

Research Community & Collaboration

We are active in the wider research community, targeting publications at top‑tier conferences such as CVPR, NeurIPS, ICML, ICLR, SIGGRAPH, FAccT, AIES, and others focused on AI safety and responsibility. We collaborate with leading academic and industry labs, combining the best of an academic environment with product‑guided research. We are a global team located in San Francisco, Toronto, London, and remotely.

Responsibilities
  • Conduct original research on responsible, safe, reliable, robust, and trustworthy AI
  • Develop new ML models and AI techniques with a strong emphasis on safety, reliability, and responsible implementation
  • Collaborate with cross‑functional teams and stakeholders to integrate responsible AI practices into Autodesk products and services
  • Review and synthesize relevant literature on AI safety, reliability, and responsibility to identify emerging methods, technologies, and best practices
  • Work towards long‑term research goals in AI safety and responsibility while identifying intermediate milestones
  • Build collaborations with academics, institutions, and industry partners in the field of responsible and safe AI
  • Explore new data sources and discover techniques for leveraging data in a responsible and reliable manner
  • Publish papers and present at conferences on topics related to responsible, safe, and reliable AI
  • Contribute to the strategic thinking on research directions that align with Autodesk’s commitment to safe and responsible AI development
Minimum Qualifications
  • PhD in Computer Science, AI/ML, or a related field with a focus on responsible, safe, and reliable AI
  • Strong publication track record in responsible AI, safe AI, reliable AI, robust AI, or similar domains in top‑tier conferences and/or journals
  • Extensive knowledge and hands‑on experience with generative AI, including Large Language Models (LLMs), diffusion models, and multi-modal models such as Vision‑Language Models (VLMs)
  • Strong foundation in machine learning and deep learning fundamentals
  • Excellent coding skills in Python, with experience in PyTorch, TensorFlow, or JAX
  • Demonstrated ability to work collaboratively in cross‑functional teams and communicate complex ideas to both technical and non‑technical stakeholders
  • Experience in applying responsible AI principles to real‑world problems and products
Preferred Qualifications
  • Experience with 3D machine learning techniques and their applications in design, engineering, or manufacturing
  • Familiarity with AI governance frameworks and regulatory landscapes
  • Knowledge of fairness, accountability, transparency, and safety in AI systems
  • Experience in developing or implementing AI safety measures, such as robustness to distribution shift, uncertainty quantification, or AI alignment techniques
  • Track record of contributing to open‑source projects related to responsible AI or AI safety