AI Risk & Governance Analyst
Job in Boston, Suffolk County, Massachusetts, 02298, USA
Listed on 2026-03-10
Listing for: Optomi
Full Time
Job specializations:
- IT/Tech: Data Security, Cybersecurity, Data Analyst, Information Security
Job Description
Optomi, in partnership with a leading provider in the Healthcare industry, is seeking an AI Risk & Governance Analyst to join their team. You will be responsible for performing compliance reviews of AI applications to ensure alignment with internal policies and governance standards. The role involves conducting structured risk assessments across the AI system lifecycle, identifying risks related to bias, privacy, security, and regulatory noncompliance.
The analyst will work collaboratively with AI development teams to gather information for assessments and prepare clear findings and recommendations for leadership.
- Performs compliance reviews of AI applications and products to assess alignment with internal policies, governance standards, and standard operating procedures, including verification of required documentation, approvals, and controls prior to production deployment.
- Conducts structured risk assessments of AI systems across their lifecycle, identifying and documenting risks related to bias, privacy, security, safety, model behavior, and regulatory noncompliance; evaluates risk likelihood, impact, and the adequacy of mitigation controls.
- Reviews model development practices, data handling procedures, deployment controls, and technical artifacts (e.g., model cards, system architecture documentation) to identify compliance gaps and discrepancies between documented capabilities and actual system behavior.
- Investigates AI system incidents, complaints, or governance concerns by analyzing system behavior, data flows, and decision logic; documents investigative methods, evidence reviewed, and conclusions reached.
- Conducts hands-on testing and probing of AI systems to validate documented claims regarding performance and behavior, and supports ongoing monitoring of deployed systems.
- Tracks compliance and risk findings, remediation actions, and residual risk in maintained risk registers and supporting documentation; verifies that corrective actions are implemented and documented.
- Partners with AI development teams, product owners, and subject matter experts to gather information for assessments and investigations, and prepares clear findings, executive summaries, and recommendations for leadership and governance stakeholders.
- Monitors trends in compliance and risk findings to identify systemic issues and support continuous improvement of AI governance practices; stays current with evolving AI regulations, standards, and industry best practices.