· Lead the architectural design for the migration project, leveraging Azure services, SQL, Databricks, and PySpark to create scalable, efficient, and reliable solutions.
· Design and implement advanced data transformation and processing tasks using Databricks, PySpark, and Azure Data Factory (ADF).
· Design, deploy, and manage Databricks clusters for data processing, ensuring performance and cost efficiency, and troubleshoot cluster performance issues as needed.
· Mentor and guide developers on the use of PySpark for data transformation and analysis, sharing best practices and reusable code patterns.
· Document architectural designs, migration plans, and best practices to ensure alignment and reusability within the team and across the organization.
· Collaborate with stakeholders to translate business requirements into technical specifications and design robust data pipelines, storage solutions, and transformation workflows.