
Risk Analytics Product Development / Data Scientist, VP

Job in Bengaluru (Bangalore) 560001, Karnataka, India
Listing for: State Street
Full Time position
Listed on 2026-03-03
Job specializations:
  • IT/Tech
    Data Engineer
Job Description & How to Apply Below
Position: Risk Analytics Product Development / Data Scientist, VP
Location: Bengaluru

This job is with State Street, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Role Description

We are recruiting a VP-level Data Scientist and ETL Developer to design, build, and operate robust data integration and analytics pipelines that power State Street's risk analytics and reporting products. The role focuses on extending our data transformation and delivery architecture to onboard new client and vendor data sources, standardize them to enterprise data models, and deliver high‑quality, timely information to downstream risk, regulatory, and management reporting platforms.

In addition to integration, the role will lead advanced data preparation for model‑ready datasets, support feature engineering, and enable model deployment and monitoring in partnership with data science teams.
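As a rough illustration of the "model-ready datasets" work described above, the sketch below builds a point-in-time feature snapshot from raw records. The field names, the snapshot rule, and the data are illustrative assumptions, not part of any State Street system.

```python
# Hedged sketch: curating a model-ready dataset from raw exposure records.
# Field names and the point-in-time rule are illustrative assumptions.
from datetime import date

raw = [
    {"client": "A", "as_of": date(2024, 1, 31), "exposure": 120.0},
    {"client": "A", "as_of": date(2024, 2, 29), "exposure": 150.0},
    {"client": "B", "as_of": date(2024, 2, 29), "exposure": 80.0},
]

def features_as_of(records, cutoff):
    """Point-in-time snapshot: latest exposure per client at or before cutoff.

    Using only data available at the cutoff avoids look-ahead leakage when
    the snapshot feeds model training.
    """
    latest = {}
    for r in sorted(records, key=lambda r: r["as_of"]):
        if r["as_of"] <= cutoff:
            latest[r["client"]] = r["exposure"]
    return latest

snapshot = features_as_of(raw, date(2024, 2, 29))
```

In a feature-store pattern, snapshots like this would be versioned and keyed by entity and as-of date so training and inference read identical values.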
Function

This role sits within Risk Services and works closely with product managers and engineering leads to: (1) execute the risk data integration roadmap, (2) modernize legacy data movements to event‑driven/streaming and cloud patterns, and (3) enable seamless migration of clients from legacy to target‑state architectures while maintaining regulatory and control standards.
Responsibilities    
• Design & build data integrations:  Develop resilient ingestion, mapping, validation, and publishing processes to bring client and market data from multiple custodians and vendors into standardized schemas supporting risk analytics and reporting.

• Own ETL/ELT workflows:  Translate business and data analysis into production‑grade pipelines (batch and streaming), including transformation logic, data quality rules, lineage, and exception handling.

• Model‑ready data & MLOps enablement:  Partner with data scientists to design feature pipelines, curate training and inference datasets, implement feature‑store patterns, and operationalize model scoring and monitoring.

• Integration patterns & architecture:  Contribute to capability models and reference patterns (API, file, message/stream, CDC) that simplify and standardize integration across risk platforms; document and review designs.

• Environment readiness & releases:  Automate build, test, and deploy processes; ensure non‑prod/prod environments, secrets, and dependencies are correctly configured; support blue/green and canary releases where applicable.

• Production reliability:  Partner with production support to implement monitoring, alerting, run‑books, and on‑call rotations; lead incident triage and root‑cause analysis; continuously harden pipelines for resiliency and cost.

• Data quality & controls:  Implement reconciliation, validation, and auditability controls aligned to internal policies and external regulations for risk data.

• Stakeholder engagement:  Work with product managers, client service, and operations to prioritize backlog, groom user stories, and align technical plans with client deliverables and regulatory deadlines.

• Documentation & knowledge transfer:  Produce clear technical specifications, mappings, and run‑books; coach junior team members and enable handoffs to global support teams.

• Continuous improvement:  Identify opportunities to rationalize tech stacks, retire redundant feeds, and evolve toward metadata‑driven pipelines and self‑service data delivery.

• Adaptability & Continuous Learning:  The ability to keep up with the fast-paced, evolving AI landscape.

• Critical Thinking & Evaluation:  The ability to verify AI outputs, check for hallucinations, and identify bias.
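The validate-then-publish pattern running through the responsibilities above (data quality rules, exception handling, reconciliation controls) can be sketched roughly as follows. Every name, field, and tolerance here is an illustrative assumption, not an actual State Street interface.

```python
# Hedged sketch of a validate-then-publish step: DQ rules, exception
# quarantine, and a reconciliation control. Names and thresholds are
# illustrative assumptions only.

def validate_row(row):
    """Return a list of data-quality violations for one record."""
    errors = []
    if not row.get("trade_id"):
        errors.append("trade_id is required")
    if row.get("notional") is None or row["notional"] < 0:
        errors.append("notional must be non-negative")
    return errors

def run_batch(rows):
    """Split a batch into publishable rows and quarantined exceptions."""
    published, exceptions = [], []
    for row in rows:
        errors = validate_row(row)
        if errors:
            exceptions.append({"row": row, "errors": errors})
        else:
            published.append(row)
    return published, exceptions

def reconcile(source_totals, published_totals, tolerance=0.01):
    """Flag per-account breaks where source and published totals diverge."""
    breaks = {}
    for account in set(source_totals) | set(published_totals):
        diff = abs(source_totals.get(account, 0.0)
                   - published_totals.get(account, 0.0))
        if diff > tolerance:
            breaks[account] = round(diff, 2)
    return breaks

batch = [
    {"trade_id": "T1", "notional": 1_000_000.0},
    {"trade_id": "", "notional": 500.0},    # DQ break: missing trade_id
    {"trade_id": "T3", "notional": -10.0},  # DQ break: negative notional
]
published, exceptions = run_batch(batch)
```

Quarantining exceptions rather than failing the whole batch keeps good records flowing while breaks are triaged, which is the usual trade-off in risk reporting pipelines.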
Skills (What we're looking for)

Essential

• Strong hands‑on experience building ETL/ELT pipelines and data mappings for financial services, ideally in risk, performance, or regulatory reporting contexts.

• Proficiency with SQL (SQL Server/Oracle), Python/Scala, and workflow/orchestration tools.

• Integration patterns across file‑based, API, and message/stream (Kafka/Event Hubs); comfort with schema/version management, idempotency, and backfills.

• Data modelling & quality: dimensional/relational modelling, DQ rules, reconciliation, lineage/metadata cataloguing.

• Applied data…
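The idempotency and backfill comfort asked for above can be pictured as a replay-safe load: keying writes on a natural key plus as-of date so re-running a batch overwrites rather than duplicates. The in-memory "table" and all names below are illustrative assumptions standing in for a real database upsert.

```python
# Sketch of an idempotent load: upsert keyed on (record_id, as_of) so a
# replayed or backfilled batch overwrites rather than duplicates.
# The in-memory dict stands in for a real database; names are illustrative.

table = {}

def upsert(batch):
    """Write each row under its natural key; replays are harmless."""
    for row in batch:
        key = (row["record_id"], row["as_of"])
        table[key] = row  # same key overwrites, never duplicates

batch = [{"record_id": "R1", "as_of": "2024-03-01", "value": 10}]
upsert(batch)
upsert(batch)  # replaying the same batch leaves exactly one row
```

The same keying makes backfills safe: reloading a historical date range simply rewrites those (record, date) cells without touching other days.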