
Data Engineer (PySpark)

Job in Chennai 600001, Tamil Nadu, India
Listing for: Confidential
Full Time position
Listed on 2026-02-03
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Salary Range: 300,000 to 600,000 INR yearly
Job Description
Position: Data Engineer (Pyspark)
Data Pipeline Development:
Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform (CDP), ensuring data integrity and accuracy.
Data Ingestion:
Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
Data Transformation and Processing:
Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
Performance Optimization:
Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
Data Quality and Validation:
Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
Automation and Orchestration:
Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Education and Experience
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
Technical Skills
PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
Scripting and Automation: Strong scripting skills in Linux.