Design and develop Databricks solutions leveraging Lakehouse architecture for enterprise data processing and analytics
Develop and optimize ETL/ELT pipelines
Create and manage structured streaming pipelines for real-time data processing
Configure and tune Databricks clusters and Spark jobs for optimal performance
Utilize Delta Live Tables for data ingestion and transformations
Apply Unity Catalog features and IAM best practices for security, governance, and access control
Support infrastructure and resource management using Terraform
Participate in Agile/Scrum development process and collaborate with team members
Implement monitoring solutions for pipeline performance and data quality
Contribute to code reviews and knowledge-sharing sessions
What you must have:
6+ years of experience in data engineering
2+ years of hands-on experience with the Databricks platform
Strong expertise in Python and Spark programming
Demonstrable experience using AI tools in the development process
Proven experience with AWS or similar cloud platforms
Deep understanding of data modeling and SQL
Experience with Delta Lake and Lakehouse architecture
Strong knowledge of ETL/ELT principles and patterns
Experience with version control systems (Git)
Demonstrated ability to optimize data pipelines
Strong problem-solving and analytical skills
Excellent communication and collaboration abilities
Nice to have:
Financial services industry experience
Experience with multiple cloud providers
Knowledge of AI/ML implementation patterns
API development experience
Experience with real-time data processing
Data governance framework experience
Salary/Rate Range: $ - $
Thank you for your interest in this opportunity. If you are selected to move forward in the process, we will contact you directly. If you do not hear from us, we encourage you to continue visiting our website for other roles that may be a good fit.