Job Description
Design, develop, and maintain scalable and reliable data engineering pipelines to support analytics and business intelligence needs
• Architect, design, and implement solutions that meet stakeholders' needs
• Architect, implement, and optimize Snowflake data warehouse solutions, including data modeling, performance tuning, and cost optimization
• Lead and support data migration initiatives, including migration from on-premises or legacy data platforms to Snowflake or other cloud-based solutions
• Develop and manage ELT/ETL processes using modern data integration tools and frameworks
• Participate actively in requirements gathering, data modeling, and design sessions
• Prepare high-level and detailed technical specifications for projects in accordance with security and architecture documentation objectives
• Develop detailed plans and accurate estimates for the completion of the build, system testing, and implementation phases of a project
• Collaborate with data architects, analytics teams, and business stakeholders to translate requirements into technical solutions
• Run and optimize SQL queries on RDBMSs such as MS SQL Server and MySQL/MariaDB (see the query-tuning sketch after this list)
• Ensure data quality, security, governance, and compliance throughout the data lifecycle
• Develop, code, document, and execute unit, system, integration, and acceptance tests, and testing tools for functions of high complexity
• Troubleshoot and resolve performance, data integrity, and pipeline reliability issues
• Document architecture, data flows, and operational procedures
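
To illustrate the kind of query tuning this role involves, here is a minimal sketch of replacing a correlated subquery with a window function, a common optimization on MS SQL Server or MySQL 8+. The orders table and all column names are hypothetical.

-- Hypothetical: fetch each customer's most recent order.
-- Before: correlated subquery, re-evaluated once per row
SELECT o.customer_id, o.amount
FROM orders o
WHERE o.order_ts = (SELECT MAX(i.order_ts)
                    FROM orders i
                    WHERE i.customer_id = o.customer_id);

-- After: single pass over the table with a window function
SELECT customer_id, amount
FROM (
  SELECT customer_id, amount,
         ROW_NUMBER() OVER (PARTITION BY customer_id
                            ORDER BY order_ts DESC) AS rn
  FROM orders
) ranked
WHERE rn = 1;

The rewrite typically reduces cost from many index probes to one scan plus a sort, though the actual plan should always be verified with the engine's EXPLAIN output.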
What do you need to succeed?
Must-have
• Strong hands-on experience in Data Engineering, including data pipeline development and large-scale data processing
• Deep expertise in Snowflake architecture, including virtual warehouses, micro-partitioning, clustering and performance optimization, and security and access control (a brief configuration sketch follows this list)
• Proven experience with data migration projects, including assessment, planning, execution, and validation
• 5+ years of experience in software engineering or analytics, creating enterprise data architectures, distributed and microservice software architectures, and design patterns
• Strong SQL skills and experience with data modeling (dimensional and/or data vault)
• Core SQL database concepts, including writing DDL and DML scripts, normalization, and running and optimizing SQL queries on an RDBMS
• Experience working with cloud platforms (AWS, Azure, or GCP)
• 2+ years of application development experience with Hadoop and NoSQL databases such as MongoDB, Cassandra, or HBase
• Familiarity with orchestration and data integration tools (e.g., Airflow, dbt, Informatica, Fivetran, or similar)
• Prior experience with Liquibase and Git-based code repositories such as GitHub
• Bachelor's degree in Information Technology or Computer Science
• Strong problem-solving skills and ability to work independently in a fast-paced environment
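
As a concrete sketch of the Snowflake skills listed above, the statements below show a clustering key, a clustering-health check, and a cost-conscious virtual warehouse. The database, table, and warehouse names are hypothetical.

-- Define a clustering key so micro-partitions prune well on common filters
ALTER TABLE analytics.sales.fact_orders
  CLUSTER BY (order_date, region);

-- Inspect how well the table is clustered on those columns
SELECT SYSTEM$CLUSTERING_INFORMATION('analytics.sales.fact_orders',
                                     '(order_date, region)');

-- Right-size a virtual warehouse and suspend it when idle to control cost
CREATE WAREHOUSE IF NOT EXISTS transform_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  AUTO_SUSPEND = 60   -- seconds idle before suspending
  AUTO_RESUME = TRUE;

Clustering trades maintenance credits for better partition pruning, and AUTO_SUSPEND keeps idle compute from accruing charges; both are typical levers in the performance tuning and cost optimization work this role describes.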