Principal Platform Engineer – Data Ops Engineer
Listed on 2026-01-25
IT/Tech
Data Engineer, Cloud Computing, Data Science Manager, Systems Engineer
Overview
The DIRECTV Data Analytics and Operations team is looking for a curious, talented, highly motivated Data Ops Engineer to lead the automation, orchestration, and optimization of our cloud-based data workflows across Snowflake, Databricks, and AWS. The ideal candidate will have deep expertise in Apache Airflow, CI/CD, automation, and performance monitoring, and a passion for building scalable, efficient, high-performance data operations solutions.
In this role, you will work closely with data engineering, DevOps, security, and business teams to design and implement next-generation orchestration, automation, and data deployment strategies that drive efficiency, reliability, and cost-effectiveness. This is the perfect opportunity to join a fast-paced, innovative team that solves real-world problems and drives business value.
Experience Requirements – The candidate should possess a strong background in data operations, orchestration, and cloud-native data platforms to deliver scalable and reliable data workflows.
Responsibilities
- Platform Architecture & Data Orchestration Strategy (30%)
- Define the long-term orchestration strategy and architectural standards for workflow management across Snowflake, Databricks, and AWS.
- Lead the design, implementation, and optimization of complex workflows using Apache Airflow and related tools.
- Mentor teams in best practices for DAG design, error handling, and resilience patterns (see the sketch after this list).
- Champion cross-platform orchestration that supports data mesh and modern data architecture principles.
- Engineering Excellence & Automation Frameworks (25%)
- Architect and guide the development of reusable automation frameworks in Python, Spark, and Shell that streamline data workflows and platform operations.
- Lead automation initiatives across data platform teams, setting coding and modularization standards.
- Evaluate and introduce emerging technologies and scripting tools to accelerate automation and reduce toil.
- Enterprise CI/CD Governance & DevOps Leadership (20%)
- Define and maintain enterprise-wide CI/CD standards for data pipelines and platform deployments using Jenkins, GitLab, and AWS CodePipeline.
- Drive adoption of Infrastructure as Code (IaC) and GitOps practices to enable scalable and consistent environment provisioning.
- Provide technical leadership for DevOps integration across Data, Security, and Cloud Engineering teams.
- Performance Engineering & Platform Optimization (15%)
- Lead performance audits and capacity planning efforts across Snowflake, Databricks, and orchestrated workflows.
- Build frameworks for proactive monitoring, benchmarking, and optimization using Datadog, Amazon CloudWatch, and JMeter.
- Partner with platform teams to implement self-healing systems and auto-scaling capabilities.
- Operational Resilience & Leadership Collaboration (10%)
- Oversee complex incident resolution, lead post-mortems, and implement systemic preventive measures.
- Develop standardized runbooks, incident response frameworks, and training programs to elevate Tier 2/3 capabilities.
- Act as a liaison between engineering leadership, security, and business teams to drive platform roadmaps and risk mitigation.
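
To make the DAG design, error-handling, and resilience expectations above concrete, here is a minimal sketch of an Apache Airflow DAG with retries and a failure callback. It is illustrative only, not a DIRECTV framework; the dag_id, task names, and alerting helper are assumptions.

# Minimal Airflow DAG sketch: explicit retries, a failure callback, and a simple
# extract -> load dependency. Names and the alerting hook are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # Hypothetical alerting hook; in practice this might page via Datadog or Slack.
    print(f"Task {context['task_instance'].task_id} failed")


def extract(**_):
    print("extract from source")


def load(**_):
    print("load into Snowflake/Databricks")


default_args = {
    "owner": "data-ops",
    "retries": 3,                        # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="example_daily_ingest",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ argument name
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task            # real DAGs branch, fan out, and gate on sensors

In production, the plain Python callables would typically be replaced by provider operators (for example, Databricks or Snowflake operators) rather than print statements.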
- 4–5 years required (10+ preferred) of overall software engineering experience, including 10+ years preferred as a Big Data Architect focused on end-to-end data infrastructure design using Spark, PySpark, Kafka, Databricks, and Snowflake.
- 4–5 years required (8+ preferred) of hands-on programming experience with Python, PySpark, JavaScript, and Shell scripting, with demonstrated expertise in building reusable, configuration-driven frameworks for Databricks.
- 5+ years of experience designing and implementing configuration-driven frameworks in PySpark on Databricks, enabling scalable, metadata-driven data pipeline orchestration (see the sketch after this list).
- 4–5 years required (8+ preferred) of experience in CI/CD pipeline development and automation using GitLab, Jenkins, and the Databricks REST APIs, including infrastructure provisioning and deployment at scale.
- 4–5 years required (8+ preferred) of deep expertise in Snowflake, Databricks, and AWS, including migration, optimization,…
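
As an illustration of the configuration-driven, metadata-driven framework pattern referenced above, here is a minimal PySpark sketch in which pipeline steps are described as metadata and executed by one generic loop. The paths, table names, and config layout are hypothetical assumptions, not a description of an existing DIRECTV framework.

# Minimal configuration-driven PySpark sketch: each pipeline step is metadata,
# and a single generic function executes every step. All values are hypothetical.
from pyspark.sql import SparkSession

PIPELINE_CONFIG = [
    {"source": "/data/raw/orders", "format": "parquet", "target": "bronze.orders"},
    {"source": "/data/raw/customers", "format": "parquet", "target": "bronze.customers"},
]


def run_pipeline(spark, config):
    """Execute every step described in the metadata with the same generic logic."""
    for step in config:
        df = spark.read.format(step["format"]).load(step["source"])
        # A real framework would add schema validation, merge logic, audit
        # columns, and per-step error handling around this write.
        df.write.mode("overwrite").saveAsTable(step["target"])


if __name__ == "__main__":
    spark = SparkSession.builder.appName("config-driven-ingest").getOrCreate()
    run_pipeline(spark, PIPELINE_CONFIG)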