Location: Bengaluru
Job Description:
- Build and maintain all facets of data pipelines for the Data Engineering team.
- Build the pipelines required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Python, and Airflow.
- Work with internal and external stakeholders to assist with data-related technical issues and data quality issues.
- Engage in proofs of concept, technical demos, and interactions with customers and other technical teams.
- Participate in agile ceremonies.
- Solve complex data-driven scenarios and triage defects and production issues.
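The extraction-transformation-loading responsibilities above can be sketched as a minimal pipeline. In practice this would be written as PySpark jobs orchestrated by an Airflow DAG; the sketch below uses only the Python standard library, and the sample data, field names, and data-quality rule are illustrative assumptions, not part of the role description.

```python
import csv
import io
import json

# Hypothetical raw input; a real pipeline would extract from external
# data sources rather than an inline string.
RAW_CSV = """order_id,amount,currency
1,100.50,INR
2,,INR
3,75.25,USD
"""

def extract(raw: str) -> list[dict]:
    """Extract: read raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop rows failing a data-quality check and normalize types."""
    return [
        {"order_id": int(r["order_id"]),
         "amount": float(r["amount"]),
         "currency": r["currency"]}
        for r in rows
        if r["amount"]  # illustrative quality rule: amount must be present
    ]

def load(rows: list[dict]) -> str:
    """Load: serialize cleaned rows; a real job would write to a warehouse."""
    return json.dumps(rows)

cleaned = transform(extract(RAW_CSV))
print(load(cleaned))
```

The same extract/transform/load split maps naturally onto Airflow tasks, with each function becoming one task in a DAG and Spark replacing the in-memory list operations at scale.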
Technical Skills
Must-Have Skills:
- Proficient with Python, PySpark and Airflow
- Strong understanding of Object-Oriented and Functional Programming paradigms
- Experience working with Spark and its architecture
- Knowledge of Software Engineering best practices
- Advanced SQL knowledge (preferably Oracle)
- Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources.
Good-to-Have Skills:
- Knowledge of data-related AWS services
- Knowledge of GitHub and Jenkins
- Experience with automated testing
Position Requirements
- 7+ years of work experience