Location: Bengaluru
Job Description:
Lead the development of complex data pipelines independently and improve key delivery metrics through innovation and optimization.
Experience in requirement gathering, estimating, and managing large-scale data engineering projects.
Requirement Analysis:
Understand and translate business needs into data models supporting long-term solutions.
Data Modeling:
Work with the business team to implement data strategies, build data flows, and develop conceptual data models.
Data Pipeline Design:
Create robust and scalable data pipelines and data products in a variety of domains.
Data Integration:
Develop and maintain data lakes by acquiring data from primary and secondary sources, and build scripts that make our data evaluation process more flexible and scalable across data sets.
Testing:
Define and manage the data load procedures to reject or discard datasets or data points that do not meet the defined business rules and quality thresholds.
Deployment:
Implement data strategies and develop physical data models alongside the development, data analyst, and information systems teams to ensure robust operational data management systems.
Understanding of and ability to implement data engineering principles and best practices.
Coordinate with other teams to manage and prioritize projects. Drive strategy, roadmap, and execution of key data engineering initiatives.
Stakeholder Management:
Collaborate with stakeholders across the organization to understand their data needs and deliver solutions.
Continuous Integration and Continuous Deployment (CI/CD):
Implement and maintain CI/CD pipelines for data solutions, ensuring rapid, reliable, and streamlined updates to the data environment.
Qualifications
Basic Qualifications:
7+ years of work experience with a bachelor’s degree, or 5+ years of work experience with an advanced degree (e.g., Master’s, MBA).
Exposure to the financial services/payments industry.
Proven experience working in a team of senior data engineers and leading a team of junior engineers.
Technical Skills:
Extensive experience with big data tools such as Hadoop, Hive, and Spark.
Proficiency in Python, SQL, and PySpark.
Experience with Unix/Linux systems with scripting experience in Bash.
Experience with data pipeline and workflow management tools such as Airflow.
Experience with relational SQL and NoSQL databases.
Experience with cloud and platform services such as Databricks, AWS, or Azure.
Experience with stream-processing systems such as Kafka and Spark Streaming.
Strong analytical skills for working with structured and unstructured datasets.
Proficiency in managing and communicating data warehouse plans to internal clients.
Experience with building processes supporting data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
Exposure to leveraging GenAI tools and platforms to improve efficiency.
Other Skills:
Strong problem-solving skills.
Excellent communication skills and negotiation skills.
Ability to lead and work in a team.
Detail-oriented and excellent organizational skills.