Required technical and professional expertise
4-7 years of experience in big data engineering, data integration, and distributed computing.
Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
Proficiency in Python or Scala for data processing.
Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
Understanding of data security, encryption, and compliance frameworks.
Preferred technical and professional experience
Experience in banking or financial services data platforms.
Exposure to Denodo for data virtualization and DGraph for graph-based insights.
Familiarity with cloud data platforms (AWS, Azure, GCP).
Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.