Senior Data Platform/DevOps Engineer; Agile, Digital Platforms
Listed on 2026-02-28
IT/Tech
Data Engineer, Cloud Computing, Systems Engineer
Job Description
We are looking for a skilled and motivated Data Platform / DevOps Engineer to operate and evolve our Global Data Platform. In this role, you will work with modern distributed systems and data technologies to keep our data pipelines reliable, secure, and scalable. You will apply Agile and DevSecOps principles to continuously improve platform stability, automation, and delivery efficiency while collaborating closely with security, engineering, and cloud operations teams.
Key Responsibilities
Operate and maintain Global Data Platform components, including:
VM servers, Kubernetes clusters, and Kafka
Data and analytics applications such as Apache stack, Collibra, Dataiku, and similar tools
Design and implement automation for:
Infrastructure provisioning
Security components
CI/CD pipelines supporting ELT/ETL data workflows
Build resiliency into data pipelines through:
Platform health checks (a minimal sketch follows this list)
Monitoring and alerting mechanisms
Proactive issue prevention and recovery strategies
Apply Agile and DevSecOps practices to deliver integrated solutions in iterative increments
Collaborate and liaise with:
Enterprise Security
Digital Engineering
Cloud Operations teams
Review system issues, incidents, and alerts to:
Perform root cause analysis
Drive long-term fixes and platform improvements
Stay current with industry trends, emerging technologies, and best practices in data platforms and DevOps
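As an illustration of the resiliency work described above, here is a minimal health-check sketch in Python. It assumes the confluent-kafka client library; the broker address and the exit-code convention are hypothetical and would be adapted to the platform's actual monitoring and alerting stack.

    import sys
    from confluent_kafka.admin import AdminClient

    BROKER = "kafka.example.internal:9092"  # hypothetical address

    def kafka_healthy(timeout_s: float = 5.0) -> bool:
        """Return True if the cluster answers a metadata request in time."""
        admin = AdminClient({"bootstrap.servers": BROKER})
        try:
            metadata = admin.list_topics(timeout=timeout_s)
            return len(metadata.brokers) > 0
        except Exception:
            # Broker unreachable or the metadata request timed out.
            return False

    if __name__ == "__main__":
        # A non-zero exit lets a scheduler or Kubernetes probe raise an alert.
        sys.exit(0 if kafka_healthy() else 1)

In practice, a check like this would run as a Kubernetes liveness probe or a scheduled job, with failures feeding the monitoring and alerting mechanisms listed above.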
Requirements
Minimum 5 years of experience designing and supporting large-scale distributed systems
Hands-on experience with:
Streaming and file-based ingestion (e.g., Kafka, Control-M, AWA)
DevOps and CI/CD tooling (e.g., Jenkins, Octopus; Ansible, Chef, XL tools are a plus)
Experience with on-premises Big Data architectures; cloud migration experience is an advantage
Integration of Data Science workbenches (e.g., Dataiku or similar)
Practical experience working in Agile environments (Scrum, SAFe)
Supporting enterprise reporting and data science use cases
Strong knowledge of modern data architectures:
Data lakes, Delta Lake, data mesh, and data platforms
Experience with distributed and cloud-native technologies:
S3, Parquet, Kafka, Kubernetes, Spark (see the short sketch at the end of this list)
Programming and scripting skills:
Python (required)
Java / Scala / R
Linux scripting, Jinja, Puppet
Infrastructure and platform engineering:
VM setup and administration
Kubernetes scaling and operations
Docker, Harbor
CI/CD pipelines
Firewall rules and security controls
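To make the storage stack above concrete, a short Python sketch of the S3 + Parquet pattern follows; the bucket, path, and sample data are hypothetical, and it assumes pandas with pyarrow and s3fs installed plus AWS credentials resolved from the environment.

    import pandas as pd

    # Hypothetical sample rows standing in for an ELT output.
    df = pd.DataFrame({"event_id": [1, 2, 3], "status": ["ok", "ok", "failed"]})

    # pandas writes Parquet via pyarrow and resolves the s3:// scheme via s3fs.
    df.to_parquet("s3://example-bucket/events/dt=2026-02-28/events.parquet")

Columnar Parquet files laid out like this are what Spark jobs and data lake query engines typically read back.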