Data Engineer
Location: 500016, Prakāshamnagar, Telangana, India
Listed on: 2026-02-05
Listing for: Confidential
Position type: Full Time
Job specializations:
- IT/Tech: Data Engineer, Big Data, Data Science Manager, Cloud Computing
Job Description
About the Role
We are looking for a highly skilled Data Engineer to design, build, and maintain robust data pipelines and scalable data infrastructure. You will work closely with analytics, product, and engineering teams to enable high-quality data availability for business insights, machine learning, and operational processes. This role is ideal for an engineer who enjoys solving complex data problems, optimizing systems, and ensuring clean, reliable data flows.
Key Responsibilities
Design, develop, and maintain scalable ETL/ELT pipelines using modern data engineering tools.
Build and optimize data lakes, data warehouses, and streaming architectures.
Work with large and complex datasets to ensure quality, consistency, and availability.
Collaborate with Data Scientists and Analysts to understand data needs and deliver solutions.
Develop data models, schemas, and transformations to support analytics and reporting.
Implement data governance, security, cataloging, and lineage processes.
Optimize data pipeline performance, reduce cost, and improve reliability.
Integrate data from various sources including APIs, databases, and third-party systems.
Work with CI/CD and DevOps tools for data pipeline deployments.
Troubleshoot production issues and perform root cause analysis.
Required Skills
Strong programming skills in Python, SQL, and data processing frameworks.
Hands-on experience with big data technologies: Apache Spark, Kafka, Hadoop, Hive, Airflow, etc.
Experience with cloud data platforms (AWS, Azure, or GCP):
- AWS: Glue, Redshift, S3, EMR
- Azure: Data Factory, Databricks
- GCP: BigQuery, Dataflow, Pub/Sub
Experience with data warehousing concepts and modeling (Star/Snowflake schema).
Solid understanding of ETL/ELT development, distributed systems, and performance tuning.
Good knowledge of version control (Git) and CI/CD practices.
Preferred Skills
Experience with dbt, Snowflake, Databricks, or similar modern data tools.
Understanding of real-time streaming architectures.
Knowledge of containerization (Docker, Kubernetes).
Experience with data security and compliance frameworks.
Exposure to machine learning pipelines (MLOps) is a plus.
Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
Relevant certifications (AWS Data Engineer, Databricks, Snowflake) are an added advantage.