Key Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines to collect, clean, transform, and load data from diverse sources.
Build and optimize data warehouses and data lakes using modern cloud platforms (e.g., GCP, AWS, Azure).
Work with structured and unstructured data from various systems (databases, APIs, flat files, etc.).
Collaborate with data scientists and analysts to prepare data for analytical models and dashboards.
Monitor and improve pipeline performance, scalability, and reliability.
Ensure data quality, data governance, and compliance across the data ecosystem.
Automate data workflows, implement data versioning, and handle orchestration using tools like Apache Airflow, Cloud Composer, or equivalent.
Document data sources, models, and pipelines for maintainability and transparency.
Mandatory Skills:
Proficiency in SQL and performance tuning for large-scale datasets.
Strong programming skills in Python or Scala for data processing.
Solid understanding of ETL/ELT processes and data modeling concepts.
Experience with cloud data platforms such as:
GCP (BigQuery, Cloud Storage, Dataflow)
or AWS (Redshift, S3, Glue)
or Azure (Data Factory, Synapse)
Familiarity with tools like Apache Spark, Kafka, Informatica, or dbt.