Azure Data Engineer, Full Remote - Contractor, paid in USD
Job Description & How to Apply Below
For an international project in Chennai, we are urgently looking for a Full Remote Senior Azure Data Engineer.
We are looking for a motivated contractor. Candidates need to be fluent in English.
Tasks and responsibilities:
Engage with cross-functional business teams to gather data and reporting requirements, document data flows, and analyze usage patterns and exceptions;
Design, deploy, and champion next-generation data and AI-driven products and capabilities across data flow, data repositories, dashboarding/reporting/analytics, and exception handling;
Promote a culture of data-driven decision-making and foster adoption of modern data architecture and AI use cases;
Write technical specifications for planned work;
Contribute to the development of testing artifacts as needed;
Architect and design robust, scalable, and high-performing data solutions using modern cloud-native and hybrid platforms.
Lead solution development efforts, contributing high-quality, maintainable, and well-documented code when necessary;
Create blueprints for how data is stored, accessed, and managed;
Write complex queries and optimize them for performance across relational and non-relational databases;
Design workflows to extract, transform, and load data between systems accurately and efficiently;
Work with a variety of data stores including relational databases, Azure Data Lake, Azure Blob Storage, and Azure Synapse Analytics;
Analyze and optimize complex data models and large datasets to improve performance and scalability;
Build data repository for analytical and reporting purposes;
Profile:
Bachelor's or Master's degree;
6+ years of relevant experience;
Strong experience in data modeling and in designing and implementing data repositories and ETL workflows;
Experience with data discovery and data mapping;
Technologies: SQL, Oracle, Azure Data Lake (Azure Data Stack preferred), Python, ETL tools;
Familiarity with big data technologies such as Spark and Kafka;
Understanding of data management concepts: data quality, data lineage, data lifecycle, master/reference data management, and metadata management;
Excellent PL/SQL skills and the ability to read data models, infer data flows, and write complex queries;
Strong analytical, problem-solving, and logical reasoning skills, with high attention to detail and the ability to multi-task;
Familiarity with agile tools such as Azure DevOps;
Fluent in English;