The Opportunity
We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products deployed to some of the world’s largest organizations.
Security testing is only part of enterprise AI assurance.
We are seeking an AI Risk & Responsible AI Lead to design and operationalize structured evaluation frameworks across safety, bias, robustness, explainability, and data governance.
This role ensures our AI systems are secure, trustworthy, measurable, and enterprise-ready.
What You’ll Do
Design and implement model evaluation frameworks across AI products
Develop methodologies for:
Bias and fairness testing
Hallucination and reliability assessment
Robustness and stress testing
Safety benchmarking
Evaluate training data governance practices
Review retrieval-augmented generation (RAG) systems for retrieval accuracy and data exposure risks
Establish measurable risk metrics across AI deployments
Align evaluation outputs with:
NIST AI Risk Management Framework
ISO 27701 privacy requirements
Enterprise governance standards
Produce structured, executive-ready documentation
Partner with product and engineering teams to integrate risk mitigation strategies
This role bridges AI engineering, governance, risk quantification, and enterprise accountability.
Requirements
What We’re Looking For
Core AI Evaluation Expertise
Experience designing model evaluation frameworks
Familiarity with bias detection methodologies
Understanding of hallucination testing and reliability measurement
Experience stress-testing LLM-based systems
Strong Python and experimentation capabilities
Governance & Risk Fluency
Knowledge of NIST AI RMF
Familiarity with privacy-by-design principles
Experience operating within ISO 27001 / SOC 2 environments
Understanding of enterprise AI risk posture expectations
Analytical & Communication Strength
Ability to translate model risk into business impact
Strong documentation skills
Comfortable presenting findings to executive audiences
Systems-thinking mindset
Who You Are
Structured and methodical
Deeply curious about model behavior
Pragmatic about risk
Comfortable challenging assumptions
Independent and decisive
Able to operate at executive altitude without losing technical depth
You understand that AI risk is not only about being hacked: it is also about predictability, fairness, resilience, and trust.
Benefits
Comprehensive Private Medical Coverage
Support for Mental Health Expenses
Life Insurance Options
Attractive Compensation Package