
EY-GDS Consulting – AI and Data – AWS Databricks – Manager

Job in 641001, Coimbatore, Tamil Nadu, India
Listing for: Confidential
Full-time position
Listed on 2026-02-04
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Big Data, Data Science Manager
Job Description
At EY, we're all in to shape your future with confidence.

We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go.

Join EY and help to build a better working world.

EY GDS – Data and Analytics (D&A) – AWS DBX – Manager

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business domains and functions such as Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity

We're looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in both delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm and a growing Data and Analytics team.

Your Key Responsibilities

Have proven experience driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations across BCM, WAM, and Insurance. Activities include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data.
Work with clients to convert business problems and challenges into technical solutions, considering security, performance, scalability, etc. [8–11 years]
Understand current and future-state enterprise architecture.
Contribute to various technical streams during project implementation.
Provide product- and design-level technical best practices.
Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions.
Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
Recommend design alternatives for data ingestion, processing, and provisioning layers.
Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark (a batch sketch follows this list).
Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies (a streaming sketch follows this list).
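
To make the batch-mode item concrete, here is a minimal PySpark sketch. It is illustrative only: the S3 paths, the transactions schema, and the transaction_id key are hypothetical, and a real engagement might land the data via Sqoop or Hive rather than raw CSV.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical batch-ingestion job: read raw CSV landed by an upstream
    # process, standardize it, and write partitioned Parquet.
    spark = (SparkSession.builder
             .appName("batch-ingestion-sketch")
             .enableHiveSupport()   # assumes a Hive metastore is reachable
             .getOrCreate())

    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("s3://example-bucket/landing/transactions/"))  # hypothetical path

    cleaned = (raw
               .withColumn("ingest_date", F.current_date())
               .dropDuplicates(["transaction_id"]))            # hypothetical key

    # Partitioned Parquet lets downstream Hive/Spark queries prune by date.
    (cleaned.write
     .mode("overwrite")
     .partitionBy("ingest_date")
     .parquet("s3://example-bucket/curated/transactions/"))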
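
A companion sketch for the real-time item, using Spark Structured Streaming (the current successor to DStream-based Spark Streaming). The broker address, topic name, and checkpoint location are all assumptions, and the spark-sql-kafka connector must be supplied at submit time.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("streaming-ingestion-sketch").getOrCreate()

    # Subscribe to a hypothetical Kafka topic; Spark reads all of the
    # topic's partitions in parallel.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical
              .option("subscribe", "live-events")                 # hypothetical topic
              .option("startingOffsets", "latest")
              .load())

    # Kafka delivers key/value as binary; cast the payload to a string
    # before any parsing or enrichment.
    decoded = events.select(F.col("value").cast("string").alias("payload"),
                            F.col("timestamp"))

    # Land micro-batches as Parquet; the checkpoint directory gives the
    # file sink exactly-once semantics across restarts.
    query = (decoded.writeStream
             .format("parquet")
             .option("path", "s3://example-bucket/streams/live-events/")
             .option("checkpointLocation", "s3://example-bucket/checkpoints/live-events/")
             .start())

    query.awaitTermination()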

Tech Stack:

- AWS

Experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc. (an illustrative sketch follows this section)
Experience in PySpark/Spark/Scala
Experience using software version control and CI tools (Git, Jenkins, Apache Subversion)
AWS certifications or other related professional technical certifications

Experience with cloud or on-premises middleware and other enterprise integration technologies.
Experience in writing MapReduce and/or Spark jobs.
Demonstrated strength in architecting data warehouse solutions and integrating technical components.

Good analytical skills with excellent knowledge of SQL.
8+ years of work experience with very large data warehousing environments
Excellent communication skills, both written and verbal
3+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
3+ years of experience with data modelling concepts
3+ years of Python and/or Java development experience
3+ years' experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
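
As a purely illustrative picture of the AWS stack above, the sketch below runs on an EMR-style Spark runtime, aggregates curated Parquet from S3, and writes a summary that Redshift Spectrum, Athena, or QuickSight could then query. Bucket names and columns are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical EMR job: build a small daily-revenue mart from orders.
    spark = SparkSession.builder.appName("aws-stack-sketch").getOrCreate()

    orders = spark.read.parquet("s3://example-bucket/curated/orders/")  # hypothetical

    daily_revenue = (orders
                     .groupBy(F.to_date("order_ts").alias("order_date"))
                     .agg(F.sum("amount").alias("revenue"),
                          F.countDistinct("customer_id").alias("customers")))

    (daily_revenue.write
     .mode("overwrite")
     .parquet("s3://example-bucket/marts/daily_revenue/"))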

Skills And Attributes For Success

Experience architecting highly scalable solutions on AWS.
Strong understanding of and familiarity with AWS/GCP/Big Data ecosystem components
Strong understanding of underlying AWS/GCP architectural concepts and distributed computing paradigms
Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming
Hands on experience with major components like cloud ETLs, Spark, Databricks
Experience working with NoSQL in at least one of these data stores: HBase, Cassandra, MongoDB (a write sketch follows this list)
Knowledge of Spark and Kafka integration with multiple Spark jobs to consume messages from multiple Kafka partitions
Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks.
Strong understanding of underlying…
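
To ground the NoSQL point above, a hedged sketch of writing a Spark DataFrame to Cassandra with the DataStax spark-cassandra-connector. The connector coordinates, host, keyspace, and table names are all assumptions; the DataFrame columns must match the target table's schema.

    from pyspark.sql import SparkSession

    # Assumes the spark-cassandra-connector is supplied at submit time,
    # e.g. --packages com.datastax.spark:spark-cassandra-connector_2.12:3.4.1
    spark = (SparkSession.builder
             .appName("nosql-write-sketch")
             .config("spark.cassandra.connection.host", "cassandra-host")  # hypothetical
             .getOrCreate())

    df = spark.read.parquet("s3://example-bucket/curated/events/")  # hypothetical

    # Append rows into a hypothetical analytics.events table.
    (df.write
     .format("org.apache.spark.sql.cassandra")
     .options(table="events", keyspace="analytics")
     .mode("append")
     .save())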