VP, Model Validation and Validation COE
Phoenix, Maricopa County, Arizona, 85003, USA
Listed on 2026-01-12
IT/Tech
Data Scientist, AI Engineer, Data Analyst
Role Summary
The VP, Fraud/GEN AI Validation COE is responsible for performing model validation for all fraud models and ensuring they meet the related MRM policies, standards, procedures, and regulations (SR 11-7). In addition, this role will establish and maintain a validation center of excellence that supports the model governance team in designing the quality assurance process and leading its execution across all validations, acts as an incubation center to test and run innovation, provides standardized training and staff development, and supports the improvement of the model risk professional practice to improve the model stakeholder experience.
This role requires a high level of expertise, operating with minimal technical supervision, to serve as project lead and be accountable for validation results. The COE will partner closely with the model governance lead and other validation leads to drive tangible improvements to the model risk practice.
Way of Working
We're proud to offer you choice and flexibility. At Synchrony, you can work from home near one of our hubs or come into one of our offices. Occasionally you may be required to commute to our nearest office for in-person engagement activities such as business or team meetings, training, and culture events.
Essential Responsibilities
- GEN AI Model Risk Management: Lead the creation and implementation of a comprehensive, end-to-end governance framework for Generative AI models, establishing clear standards, procedures, documentation templates, and processes to effectively manage model risks such as hallucination, accuracy, and bias. This framework will enable the organization to consistently monitor and control these critical aspects throughout the model lifecycle, ensuring reliable and trustworthy AI outputs with disciplined and transparent oversight.
- Quality Assurance and Capacity Planning: Establish and maintain a quality assurance process to thoroughly review and assess validation practices. Proactively challenge the status quo to identify gaps and improvement opportunities in validation efforts. Provide guidance on best practices, support capacity planning, and collaborate with the Model Governance team to recommend and implement enhancements that strengthen the overall validation framework.
- Strategy & Innovation: Serve as an incubation center to explore, test, and implement innovative approaches, leveraging Generative AI capabilities, to accelerate and improve the speed, efficiency, and quality of model validation processes.
- Professional Practice: Support the Model Governance team in improving the first line of defense (1LOD) model owner experience and bringing a value-focused validation practice.
- Be accountable for all fraud model risk management and drive project timelines and completion with minimal guidance.
- Serve in a supervisory role, working with junior reviewers on validation projects.
- Independently handle issue escalations and disputes at the model owner level; see issue remediation, root-cause analysis, and potential risk acceptance through to completion.
- Support regulatory examinations and internal audits of the modeling process and selected model samples.
- Perform other duties and/or special projects as assigned.
Qualifications
- 5+ years of experience in acquisition/transaction fraud model development or model validation in financial services, with experience in CI/CD frameworks preferred.
- Experience in generative AI model validation, framework development, or complex use case development.
- Proven experience automating validation processes and reducing cycle times using AutoML, generative AI, and related tools, including the ability to design and build necessary supporting infrastructure.
- Master’s degree in Statistics, Mathematics, Data Science, or a related quantitative field; or 9+ years of equivalent experience in model development/validation within financial services, banking, or retail.
- 4+ years of hands-on experience with data science and statistical tools such as Python, Spark, Data Lake, AWS SageMaker, H2O, and SAS.
- 4+ years of machine learning experience, including…