INTL - LATAM - Python Engineer
Listed on 2026-02-28
Software Development
Cloud Engineer - Software, Software Engineer, Backend Developer, DevOps
Key Responsibilities
- Design and develop Python-based microservices (REST/async services) with strong API contracts and clean service boundaries
- Build and integrate event‑driven services using the Kafka ecosystem (including Kafka Streams concepts where applicable) and schema‑based messaging (Avro)
- Implement batch and streaming workloads that support downstream systems (Spark / Spark Streaming), leveraging Databricks for job execution and notebooks when needed
- Collaborate with product and engineering partners to evaluate architecture, define requirements, and deliver scalable features
- Write and optimise SQL for data access, transformations, validation, and QA workflows
- Build reliable delivery pipelines and deployments using Docker, Kubernetes, and CI/CD tooling (e.g., Jenkins, ArgoCD)
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal‑opportunity/affirmative‑action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations and ordinances.
If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to
Requirements
- 8‑10 years of engineering experience building and deploying backend services
- Expert‑level Python (microservices, APIs, async patterns, testing)
- Strong experience with microservices architecture and distributed systems patterns
- Experience with the Databricks platform, including running Spark jobs and notebooks
- Experience with Spark / Spark Streaming for batch and streaming jobs
- Experience with Kafka and streaming/event‑based integrations; familiarity with Avro schemas
- Hands‑on experience with SQL and data‑driven applications
- Cloud experience in GCP and/or Azure
- Experience with containerization and orchestration: Docker / docker‑compose, Kubernetes
- CI/CD experience with Jenkins and/or ArgoCD
- Strong experience with Git‑based workflows