Lead Data Engineer (Databricks & AWS)
Employment Type:
Full Time
Position Location:
Hyderabad
Reports to:
Delivery Head
Qualifications:
BE/B.Tech/MCA Degree in Computer Science, Engineering, or similar relevant field
Total Experience:
10 to 15 Years
Working Model:
Hybrid
Shift Timing:
5:30 AM-2:30 PM IST / 10:30 AM-7:30 PM IST
About the Role
We are seeking a Lead Data Engineer with strong experience in data engineering, analytical data modeling, and cloud-based data platforms. This role will lead the offshore technical support and engineering activities for enterprise data pipelines built on AWS and Databricks.
The position combines hands-on engineering expertise with operational leadership, ensuring reliable execution of data pipelines, effective troubleshooting of complex issues, and delivery of analytics-ready datasets for reporting, dashboards, and advanced analytics.
The ideal candidate will demonstrate strong analytical thinking, technical problem-solving, and operational ownership, guiding the offshore team in maintaining platform reliability while continuously improving Databricks-based data pipelines and data models.
Key Responsibilities
Design and maintain scalable data pipelines and transformation workflows using Databricks Lakehouse architecture
Build and maintain analytics-ready data models to support reporting and insights
Develop data transformations using SQL and Python
Work with Databricks-based Lakehouse environments to process and manage large datasets
Ensure reliability and performance of data ingestion, transformation, and processing pipelines
Collaborate with engineering and analytics teams to support data quality, data governance, and analytics delivery
Troubleshoot pipeline issues and support improvements in platform stability and performance
Required Skills
Strong data engineering experience building ETL/ELT pipelines
Expertise in SQL and Python
Strong understanding of data modeling and analytics data structures
Strong working experience with Databricks, including:
PySpark
Delta Lake
Databricks SQL
Data Lakehouse architecture
Experience working with AWS data services (S3, Lambda, Glue, Redshift, etc.)
Familiarity with dbt or similar transformation frameworks
Experience with GitHub, Terraform, and Jira
Nice to Have
Experience with Tableau
Exposure to Kubernetes or containerized workloads
Experience with workflow orchestration tools such as MWAA or Airflow