Data Engineer - Corporate Functions
Location: 500016, Prakāshamnagar, Telangana, India
Listed on: 2026-02-03
Employer: Confidential
Position type: Full Time
Job specializations:
- IT/Tech: Data Engineer, Big Data
Job Description
In this vital role, you will be a Data Engineer responsible for designing, building, and maintaining data solutions that provide actionable insights to drive business decisions. The ideal candidate will have strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. You will work with large datasets, develop reports, and implement data governance initiatives to ensure data is reliable and efficiently managed.
Roles & Responsibilities
Data Pipeline Development: Design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, implement ETL processes to migrate and deploy data, and take ownership of projects from inception to deployment.
Collaboration & Communication: Collaborate with cross-functional teams, including Data Architects, Business SMEs, and Data Scientists, to understand data requirements and design solutions that meet business needs. You will participate in sprint planning meetings and provide estimations on technical implementation.
Data Quality & Governance: Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. You will also implement data security and privacy measures to protect sensitive data and adhere to standard methodologies for coding, testing, and designing reusable code.
Technology & Innovation: Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. You will explore new tools and technologies to improve ETL platform performance.
Qualifications
A Master's degree with 1-3 years of experience, a Bachelor's degree with 3-5 years of experience, or a Diploma with 7-9 years of experience in Computer Science, IT, or a related field.
Proficiency in Python, PySpark, and Scala for data processing and ETL workflows.
Hands-on experience with Databricks for building ETL pipelines and handling big data processing.
Experience with data warehousing platforms such as Amazon Redshift or Snowflake.
Strong knowledge of SQL and experience with relational databases (PostgreSQL, MySQL).
Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka.
Certifications such as AWS Certified Data Engineer or a Databricks certification are preferred.
Soft Skills
Problem-Solving: Excellent critical-thinking and problem-solving skills.
Collaboration: Strong communication and collaboration skills, with the ability to work effectively in a team setting.
Presentation: Demonstrated presentation skills for communicating insights and solutions effectively.
Initiative: The ability to explore new tools and technologies to improve platform performance.