Our engineering team covers front end (UI), back end (Java/Node.js APIs), Big Data, and DevOps/SRE, working together to build world-class platforms for digital advertising and marketing.
Role Overview
As a DevOps Engineer, you will be responsible for managing cloud-native infrastructure, supporting CI/CD pipelines, and ensuring system reliability and scalability. You will automate deployment processes, maintain server environments, monitor performance, and collaborate with development and QA teams across the product lifecycle.
Objectives
Design and manage scalable, cloud-native infrastructure using GCP, Kubernetes, and Argo CD.
Implement observability and monitoring tools (ELK, Prometheus, Grafana) for complete system visibility.
Enable real-time data pipelines using Apache Kafka and GCP Dataproc (a minimal producer sketch follows this list).
Automate CI/CD pipelines using GitHub Actions with GitOps best practices.
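As context for the streaming objective above, here is a minimal Python sketch of a Kafka producer using the confluent-kafka client. The broker address, topic name, and event payload are illustrative assumptions, not details from this posting.

    import json

    from confluent_kafka import Producer

    # Assumed broker address; in practice this points at the cluster's bootstrap servers.
    producer = Producer({"bootstrap.servers": "kafka-broker:9092"})

    def on_delivery(err, msg):
        # Runs asynchronously once the broker acknowledges (or rejects) the message.
        if err is not None:
            print(f"delivery failed: {err}")
        else:
            print(f"delivered to {msg.topic()}[{msg.partition()}]")

    event = {"campaign_id": "c-123", "impressions": 1}  # hypothetical ad event
    producer.produce("ad-events", value=json.dumps(event).encode("utf-8"), callback=on_delivery)
    producer.poll(0)   # serve pending delivery callbacks
    producer.flush()   # block until all queued messages are delivered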
Responsibilities
Build, manage, and monitor Kubernetes (GKE) clusters and containerized workloads with Argo CD.
Design and maintain CI/CD pipelines using GitHub Actions + GitOps.
Configure and manage real-time streaming pipelines with Apache Kafka and GCP Dataproc.
Manage logging/observability infrastructure with Elasticsearch, Logstash, and Kibana (ELK stack).
Secure and optimize GCP services including Artifact Registry, Compute Engine, Cloud Storage, VPC, and IAM.
Implement caching and session stores using Redis for scalability (a caching sketch follows this list).
Monitor system health, availability, and performance with Prometheus, Grafana, and ELK.
Collaborate with engineering and QA teams to streamline deployments.
Automate infrastructure provisioning using Terraform, Bash, or Python.
Maintain backup, failover, and disaster recovery strategies for production.
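For the Redis caching/session responsibility above, a minimal sketch using the redis-py client follows; the hostname, TTL, and key naming scheme are illustrative assumptions.

    import json

    import redis

    # Assumed connection details; in production these would come from configuration.
    r = redis.Redis(host="redis-cache", port=6379, decode_responses=True)

    SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes

    def store_session(session_id: str, data: dict) -> None:
        # SETEX writes the value and its expiry atomically.
        r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

    def load_session(session_id: str):
        raw = r.get(f"session:{session_id}")
        return json.loads(raw) if raw else None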
Qualifications
Bachelor's degree in Computer Science, Engineering, or related technical field.
4–8 years' experience in DevOps, Cloud Infrastructure, or Site Reliability Engineering.
Strong experience with Google Cloud Platform (GCP) including GKE, IAM, VPC, Artifact Registry, and Dataproc.
Hands-on experience with Kubernetes, Argo CD, and GitHub Actions for CI/CD workflows.
Proficiency with Apache Kafka for real-time data streaming.
Experience managing ELK Stack (Elasticsearch, Logstash, Kibana) in production.
Working knowledge of Redis for distributed caching/session management.
Skilled in scripting/automation with Bash, Python, Terraform.
Strong understanding of containerization, infrastructure-as-code, and monitoring (an instrumentation sketch follows this list).
Familiarity with cloud security, IAM policies, and compliance best practices.
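As a small illustration of the monitoring skills listed above, here is a sketch of application instrumentation with the prometheus_client Python library; the metric names and the /ads endpoint label are hypothetical.

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Hypothetical metrics; Prometheus scrapes them from this process's /metrics endpoint.
    REQUESTS = Counter("app_requests_total", "Requests handled", ["endpoint"])
    LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

    @LATENCY.time()  # records each call's duration in the histogram
    def handle_request() -> None:
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
        REQUESTS.labels(endpoint="/ads").inc()

    if __name__ == "__main__":
        start_http_server(8000)  # expose metrics on :8000/metrics
        while True:
            handle_request()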