We are looking for a Senior Data Engineer to join our data engineering team. The ideal candidate will be a hands-on technical expert with strong experience in Snowflake and dbt, capable of designing, building, and maintaining modern data pipelines and architectures. You will play a key role in shaping our data infrastructure, enabling analytics, and delivering scalable, efficient data solutions across the organization.
Key Responsibilities
Design, develop, and optimize data pipelines and data models in Snowflake using dbt.
Collaborate with data analysts, product, and application teams to ensure high-quality, reliable data delivery.
Implement data lake architectures and manage ingestion, transformation, and orchestration workflows.
Work with cloud infrastructure (preferably AWS) to design and manage scalable, secure, and cost-effective data solutions.
Integrate and manage data from various transactional systems (e.g., SQL Server, Oracle, PostgreSQL).
Optimize data performance and query efficiency, ensuring scalability and maintainability.
Maintain data governance, security, and compliance best practices.
Provide technical mentorship and contribute to data engineering standards and best practices.
Required Qualifications
7+ years of experience in data engineering, with a strong focus on Snowflake (data modeling, optimization, and security).
3+ years of hands-on experience with AWS cloud services (S3, Glue, Lambda, Redshift, etc.).
Proficiency with dbt for transformation, testing, and documentation.
Mid-level expertise in at least one transactional RDBMS such as SQL Server, Oracle, or PostgreSQL.
Experience designing and managing data lake architectures.
Deep expertise in SQL and Python (or similar scripting language).
Strong understanding of the software development lifecycle and DevOps practices, including CI/CD pipelines.
Solid understanding of ETL/ELT pipelines, data orchestration, and data quality frameworks.
Proficiency in data modeling techniques (e.g., dimensional, denormalized) and the ability to apply them appropriately.
Excellent communication, problem-solving, and collaboration skills.
Preferred Qualifications
Experience with CI/CD pipelines for data workflows.
Knowledge of data cataloging, governance, and security frameworks.
Exposure to streaming data technologies (Kafka, Kinesis, etc.).
Experience in a modern data stack environment.
You can find more about us here:
- About Our HRMS Systems For Public Sector | NEOGOV
NGV Software India | NEOGOV
Position Requirements
10+ years of work experience