Key Responsibilities
Design and build scalable data pipelines using Databricks and PySpark (see the streaming sketch after this list)
Process live/streaming data and batch workloads with optimal performance
Implement data management solutions following industry best practices
Collaborate with business units to deliver data efficiently to analytics teams
Handle real-world data challenges involving high volume, velocity, and variety
Contribute to Center of Excellence (CoE) and Community of Practice (CoP) initiatives and technology evangelism
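
To illustrate the first two responsibilities, the following is a minimal PySpark Structured Streaming sketch, not taken from the posting: it reads a live JSON stream, aggregates over event-time windows with a watermark, and writes results incrementally with checkpointing. The paths, schema, and column names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Hypothetical event schema, for illustration only.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("amount", DoubleType()),
])

# Read a live stream of JSON files as they land (hypothetical path).
events = (
    spark.readStream
    .schema(schema)
    .json("/mnt/raw/events/")
)

# Windowed aggregation: total amount per 5-minute event-time window,
# tolerating events that arrive up to 10 minutes late.
totals = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Write results incrementally; the checkpoint enables fault-tolerant restarts.
query = (
    totals.writeStream
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/events/")
    .format("parquet")
    .option("path", "/mnt/curated/event_totals/")
    .start()
)

The same transformation logic runs unchanged as a batch job by swapping readStream/writeStream for read/write, which is the usual way to cover both workload types with one codebase.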
Required Technical Skills
Big Data: Databricks, PySpark, Scala, Spark Streaming, batch processing
Query & Processing: Advanced SQL, data pipeline optimization
Data Engineering: ETL/ELT design, data modeling, pipeline orchestration
Cloud: Azure Databricks, AWS EMR (preferred)
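
As an illustration of the Advanced SQL skill above, here is a sketch of a common batch pipeline step: deduplicating records with a window function, keeping only the latest row per key. The raw_events and curated_events table names and columns are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dedup-batch").getOrCreate()

# Keep the most recent record per event_id using ROW_NUMBER();
# this assumes a registered table named raw_events.
deduped = spark.sql("""
    SELECT event_id, event_time, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY event_id
                   ORDER BY event_time DESC
               ) AS rn
        FROM raw_events
    ) AS t
    WHERE rn = 1
""")

deduped.write.mode("overwrite").saveAsTable("curated_events")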