Job Description & How to Apply Below
We are seeking skilled Data Engineers to design, build, and optimize large-scale analytics data structures for US enterprise clients. The role is onsite in Lucknow, Noida, Bangalore, or Hyderabad, implementing high-performance ETL pipelines with AWS Redshift, Spark, Python, and SQL to power advanced reporting and business intelligence solutions.
Key Responsibilities
Design/develop large-scale data structures for analytics and reporting
Implement ETL/ELT pipelines using SQL, Python, Spark on AWS Redshift
Build data models and metadata for ad-hoc/pre-built reporting solutions
Collaborate with product teams to create scalable data integration pipelines
Automate reporting processes and enable self-service analytics
Interface with business customers to gather requirements and deliver solutions
Work with Analysts/BI Engineers on statistical analysis and ML algorithms
Participate in strategic planning and communicate with cross-functional teams
Required Technical Skills
Data Engineering: SQL, PL/SQL, DDL, data modeling, ETL/ELT pipelines, data warehousing
Programming: Python, Scala, Korn Shell scripting
Big Data Technologies: Hadoop, Hive, Spark, AWS EMR
Cloud/Data Warehouse: AWS Redshift, OLAP technologies
ETL Tools: Informatica, ODI, SSIS, BODI, Datastage
Analytics: MDX, HiveQL, reporting automation
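To illustrate the kind of ETL/ELT work listed above, here is a minimal sketch of an extract-transform-load step in plain Python. It uses the stdlib sqlite3 module as a stand-in for a warehouse such as Redshift; the table and column names are hypothetical examples, not taken from the job description.

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Extract raw order events, clean them, and load a reporting table."""
    cur = conn.cursor()
    # Extract: read raw order events (hypothetical source table)
    cur.execute("SELECT order_id, amount_cents FROM raw_orders")
    rows = cur.fetchall()
    # Transform: convert cents to dollars and drop invalid rows
    cleaned = [(oid, amt / 100.0)
               for oid, amt in rows
               if amt is not None and amt >= 0]
    # Load: write the cleaned rows into the reporting (fact) table
    cur.executemany(
        "INSERT INTO fact_orders (order_id, amount_usd) VALUES (?, ?)",
        cleaned,
    )
    conn.commit()
    return len(cleaned)

# Usage: an in-memory database stands in for the warehouse
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount_cents INTEGER)")
conn.execute("CREATE TABLE fact_orders (order_id INTEGER, amount_usd REAL)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                 [(1, 1999), (2, -5), (3, 500)])
loaded = run_etl(conn)
print(loaded)  # → 2 (the negative-amount row is filtered out)
```

In a production pipeline the same extract/transform/load shape would run on Spark against Redshift rather than sqlite3, but the structure of the step is the same.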
Skills: SQL, data structures, Python, AWS, ETL, Spark