
AWS Data Engineer Big Data & Python; Gauteng Hybrid; ISB1201155

Remote / Online - Candidates ideally in Midrand, Gauteng, South Africa
Listing for: iSanqa Resourcing
Full Time, Contract, Remote/Work from Home position
Listed on 2026-01-11
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Job Description & How to Apply Below
Position: AWS Data Engineer Big Data & Python (Contract), Gauteng Hybrid, ISB1201155

Build and maintain enterprise-scale Big Data pipelines using AWS cloud services and Group Data Platforms, delivering innovative data solutions that power analytics, operational processes and business intelligence across global automotive operations!

Become the data engineering expert behind mission-critical data infrastructure, where your expertise will drive data ingestion, transformation and provisioning while ensuring compliance, security and data quality across enterprise-wide data assets!

Expert-level data engineering with Python, PySpark, Terraform, AWS and Big Data.

Hybrid and remote working flexibility with 1,960 flexible annual hours.

Technical leadership role with mentoring responsibilities and enterprise data platform ownership.

Position

Contract: 01 February 2026 – 31 December 2028
Experience: 8 years' related experience
Commencement: 01 February 2026
Location: Hybrid – Midrand / Menlyn / Rosslyn / Home Office rotation
Team: Data Science and Engineering – Enterprise Data & Analytics

The product focuses on the creation and provisioning of enterprise-wide data spanning DGOs and Data Assets, including data protection and other compliance and security aspects. This includes data ingestion for the Enterprise D&A Use Cases (TOP 20) and data provisioning for operational processes.

Qualifications / Experience
Minimum mandatory qualifications
  • Relevant IT / Business / Engineering Degree
Certifications (Preferred)
  • AWS Certified Cloud Practitioner
  • AWS Certified SysOps Administrator Associate
  • AWS Certified Developer Associate
  • AWS Certified Solutions Architect Associate
  • AWS Certified Solutions Architect Professional
  • HashiCorp Certified Terraform Associate
Minimum mandatory experience
  • Above average experience/understanding in data engineering and Big Data pipelines
  • Experience working with Enterprise Collaboration tools such as Confluence and JIRA
  • Experience developing technical documentation and artefacts
  • Knowledge of data formats such as Parquet, AVRO, JSON, XML, CSV
  • Knowledge of the Agile Working Model
Essential Skills Requirements
Programming & Scripting
  • Python 3.x (above average experience)
  • PySpark
  • PowerShell / Bash
  • Boto3
Infrastructure as Code
  • Terraform (above average experience)
Databases & Data Processing
  • SQL – Oracle / PostgreSQL (above average experience)
  • ETL (above average experience)
  • Big Data (above average experience)
  • Technical data modelling and schema design (not drag and drop)
AWS Cloud Services
  • Group Cloud Data Hub (CDH)
  • Group CDEC Blueprint
  • AWS Glue
  • CloudWatch
  • SNS (Simple Notification Service)
  • Athena
  • S3
  • Kinesis Streams (Kinesis Firehose)
  • Lambda
  • DynamoDB
  • Step Functions
  • Parameter Store
  • Secrets Manager
  • CodeBuild / CodePipeline
  • CloudFormation
  • AWS EMR
  • Redshift
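
For orientation only, the sketch below shows how the Boto3 requirement above commonly pairs with a few of these services (Parameter Store, S3 and SNS) in a small ingest-notification helper. All bucket, key, parameter and topic names are hypothetical placeholders, not the actual Group Cloud Data Hub setup.

```python
# Illustrative sketch only: hypothetical resource names, not the employer's environment.
import json

import boto3

ssm = boto3.client("ssm")
s3 = boto3.client("s3")
sns = boto3.client("sns")


def publish_ingest_event(bucket: str, key: str) -> None:
    """Look up an SNS topic ARN in Parameter Store, confirm a landed S3 object,
    and notify downstream consumers."""
    # Hypothetical parameter holding the notification topic ARN.
    topic_arn = ssm.get_parameter(
        Name="/data-platform/ingest/topic-arn", WithDecryption=True
    )["Parameter"]["Value"]

    # Confirm the object exists and capture its size for the event payload.
    head = s3.head_object(Bucket=bucket, Key=key)

    sns.publish(
        TopicArn=topic_arn,
        Subject="New object landed",
        Message=json.dumps(
            {"bucket": bucket, "key": key, "size": head["ContentLength"]}
        ),
    )


if __name__ == "__main__":
    publish_ingest_event("example-raw-zone-bucket", "sales/2026/01/data.parquet")
```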
Big Data Technologies
  • Kafka
Containerization & Operating Systems
  • Docker
  • Linux / Unix
Analytics
  • Business Intelligence (BI) Experience
Soft Skills
  • Self-driven team player with ability to work independently and multi-task
  • Strong written and verbal communication skills with precise documentation
  • Strong organizational skills
  • Ability to work collaboratively in a team environment
  • Problem-solving capabilities
  • Above-board work ethic
Advantageous Skills Requirements
  • Demonstrated expertise in data modelling and Oracle SQL
  • Exceptional analytical skills for analysing large and complex data sets
  • Perform thorough testing and data validation to ensure the accuracy of data transformations
  • Experience building data pipelines using AWS Glue, AWS Data Pipeline or similar platforms
  • Familiarity with data stores such as AWS S3, AWS RDS or DynamoDB
  • Experience and solid understanding of various software design patterns
  • Experience preparing specifications from which programs will be written, designed, coded, tested and debugged
  • Experience working with Data Quality Tools such as Great Expectations
  • Experience developing and working with REST APIs
  • Basic experience in Networking and troubleshooting network issues
Role Requirements
Data Pipeline Development
  • Build and maintain Big Data Pipelines using Group Data Platforms
  • Design, develop and optimize ETL processes for large-scale data ingestion and transformation (see the illustrative sketch after this list)
  • Implement data…
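
Purely as a hedged illustration of the ingestion-and-transformation work described above (not the team's actual pipeline; all S3 paths, dataset and column names are hypothetical), a minimal PySpark job of this shape might look like:

```python
# Minimal illustrative PySpark ETL sketch. Paths and column names are hypothetical
# placeholders, not the actual Group Data Platform layout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-telemetry-ingest").getOrCreate()

# Ingest: read raw Parquet files landed in a hypothetical raw-zone bucket.
raw = spark.read.parquet("s3://example-raw-zone/vehicle_telemetry/")

# Transform: deduplicate, type the timestamp, derive a partition column,
# and drop records that fail a basic quality check.
curated = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("vehicle_id").isNotNull())
)

# Provision: write the curated data set, partitioned for downstream analytics.
(
    curated.write.mode("overwrite")
           .partitionBy("event_date")
           .parquet("s3://example-curated-zone/vehicle_telemetry/")
)

spark.stop()
```

In a Glue-based setup the same logic would typically run inside an AWS Glue job, with Terraform provisioning the job, roles and buckets; the sketch is only meant to illustrate the ingest / transform / provision split the role calls for.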