AI Security Architect

Job in Pittsburgh, Allegheny County, Pennsylvania, 15289, USA
Listing for: BNY Mellon
Full Time position
Listed on 2026-03-03
Job specializations:
  • IT/Tech
    Cybersecurity, AI Engineer, Systems Engineer, Data Security
Job Description & How to Apply Below
AI Security Architect

At BNY, our culture allows us to run our company better and enables employees' growth and success. As a leading global financial services company at the heart of the global financial system, we influence nearly 20% of the world's investible assets. Every day, our teams harness cutting-edge AI and breakthrough technologies to collaborate with clients, driving transformative solutions that redefine industries and uplift communities worldwide.

Recognized as a top destination for innovators, BNY is where bold ideas meet advanced technology and exceptional talent. Together, we power the future of finance - and this is what #LifeAtBNY is all about. Join us and be part of something extraordinary.

We're seeking a future team member for the role of AI Security Architect to join our Cybersecurity team. This role can be based in Pittsburgh, PA; Lake Mary, FL; or New York, NY.

Overview

BNY is seeking an AI Security Architect to lead the design, implementation, and governance of security controls for AI/ML systems across the enterprise. This role will define the target architecture and security patterns for AI-enabled products and platforms, ensuring resilient, compliant, and trustworthy AI. The ideal candidate combines deep expertise in cybersecurity and cloud with hands-on knowledge of modern AI/ML infrastructure, data protection, adversarial threat models, and secure MLOps.

Primary Responsibilities
  • Define enterprise AI security architecture: develop reference architectures, guardrails, and standards for secure data pipelines, model training/inference, and AI-integrated applications across on-prem and cloud.
  • Secure MLOps/ML platforms: architect identity, secrets management, network segmentation, and least-privilege access for feature stores, model registries, orchestration, and deployment pipelines.
  • Data protection by design: establish controls for sensitive data ingestion, anonymization/pseudonymization, encryption (at rest/in transit), tokenization, and lineage across AI workflows.
  • Adversarial ML defense: design controls and tests for model poisoning, evasion, model theft/exfiltration, prompt injection, jailbreaking, data leakage, and output manipulation.
  • AI supply chain security: govern third-party models, APIs, and datasets; enforce SBOMs for AI components; evaluate provenance, licensing, and dependency risk.
  • Policy and governance integration: translate AI security requirements into actionable standards and control evidence; align with enterprise risk, compliance, and model governance processes.
  • Threat modeling and security testing: lead threat modeling for AI systems; design red-teaming and secure evaluation methods for models and agents; integrate chaos/resilience testing.
  • Secure development lifecycle: embed AI-specific security checks (static/dynamic scans, IaC policy-as-code, data quality gates, bias/robustness checks) into CI/CD and change management.
  • Runtime protection: implement guardrails, content filters, output validation, rate limiting, anomaly detection, and monitoring for AI services and agentic workflows.
  • Observability and incident response: define logging/telemetry (model inputs/outputs, drift, performance, safety events); integrate AI-specific playbooks into SOC operations.
  • Zero Trust for AI: design identity-aware access, micro-segmentation, and continuous verification for data scientists, services, and agents.
  • Privacy and ethics controls: partner with privacy and legal to operationalize consent, minimization, purpose limitation, and responsible AI guardrails, including human-in-the-loop where appropriate.
  • Resilience and continuity: design disaster recovery, backup/restore, model reproducibility, and contingency plans for AI platforms and critical use cases.
  • Vendor/platform assessments: evaluate cloud AI services, open-source frameworks, and commercial tools for security posture, compliance, and fit-for-purpose.
  • Risk management: lead control testing and risk assessments for AI initiatives; document residual risks and remediation plans; support audits and regulatory queries.
  • Reference implementations: deliver secure patterns, sample code, and automation (e.g., reusable Terraform/Policy-as-Code, secrets patterns, logging schemas) to accelerate adoption.
  • Stakeholder leadership: partner with platform engineering, data science, enterprise architecture, cyber operations, and product teams to drive end-to-end secure outcomes.
  • Coaching and enablement: build education and guidance for architects, data scientists, and engineers on secure AI practices, design patterns, and common pitfalls.
  • Continuous improvement: track emerging threats, standards, and best practices; lead updates to architecture and controls; measure effectiveness via KPIs and control health.
Required Qualifications
  • 12+ years in cybersecurity/enterprise security architecture with 3+ years focused on AI/ML or data platform security at scale.
  • Expertise in cloud security (AWS/Azure/GCP) including identity, secrets management, key management (KMS/HSM),…