
Data Engineer

Remote / Online - Candidates ideally in
201301, Noida, Uttar Pradesh, India
Listing for: Confidential
Remote/Work from Home position
Listed on 2026-02-03
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Job Description
Location:

Sector 63, Noida (WFO)

Timings:
Mon - Fri; 10:30 AM to 7:30 PM

About The Role

We are seeking a skilled Data Engineer with hands-on experience in building and maintaining scalable data pipelines and analytics solutions. The ideal candidate will be highly proficient in PySpark, Databricks, and Azure, with strong experience in managing large datasets, data warehousing, and modern data stack technologies.

Key Responsibilities

Design, build, and maintain ETL/ELT pipelines for efficient data ingestion, transformation, and storage.
Collaborate with data analysts, data scientists, and stakeholders to understand data requirements and deliver actionable solutions.
Implement and maintain data models and transformation workflows using Databricks.
Optimize performance of large-scale distributed data processing jobs using PySpark.
Ensure data quality, consistency, and integrity through validation and monitoring.
Work with Azure cloud data platforms (e.g., Azure Data Lake, Azure Data Factory, Azure Databricks, Azure Synapse) to manage data storage and pipelines.
Troubleshoot and resolve data-related technical issues.
Participate in code reviews, documentation, and best practice enforcement for data engineering workflows.
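As a purely illustrative sketch of the data-quality responsibility above (not part of the posting): a minimal row-level validation pass over ingested records, in plain Python. The column names and rules here are hypothetical; in a Databricks pipeline the same checks would typically be expressed as PySpark column expressions.

```python
def validate_rows(rows, required=("id", "event_ts"), non_negative=("amount",)):
    """Split rows into (valid, rejected) using simple quality rules.

    Hypothetical rules: required fields must be present and non-null,
    and listed numeric fields must not be negative.
    """
    valid, rejected = [], []
    for row in rows:
        # Reject rows missing a required field or holding a null value.
        if any(row.get(col) is None for col in required):
            rejected.append(row)
            continue
        # Reject rows where a numeric field is negative.
        if any((row.get(col) or 0) < 0 for col in non_negative):
            rejected.append(row)
            continue
        valid.append(row)
    return valid, rejected

rows = [
    {"id": 1, "event_ts": "2024-01-01T00:00:00Z", "amount": 10.0},
    {"id": None, "event_ts": "2024-01-01T00:05:00Z", "amount": 5.0},
    {"id": 3, "event_ts": "2024-01-01T00:10:00Z", "amount": -2.0},
]
valid, rejected = validate_rows(rows)
```

Rejected rows would usually be routed to a quarantine table with a reason code rather than silently dropped, so monitoring can surface quality regressions.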

Required Skills And Qualifications

Minimum 2 years of experience in Data Engineering or a related role.
Experience writing code in Python.
Proficiency in PySpark for distributed data processing.
Expertise in Databricks for data engineering, transformation, and workflow development.
Strong knowledge of SQL and relational databases.
Familiarity with data warehousing solutions such as Snowflake, BigQuery, or Redshift.
Experience with cloud platforms, with a strong preference for Azure.
Understanding of data pipelines, ETL/ELT processes, and data architecture principles.
Knowledge of version control systems such as Git.
Strong problem-solving skills and attention to detail.

Preferred Skills

Working knowledge of dbt for data modeling and transformation.
Experience with orchestration tools like Airflow or Prefect.
Knowledge of streaming platforms such as Kafka or Kinesis.
Familiarity with containerization using Docker or Kubernetes.
Understanding of data governance, security, and compliance practices.

Perks And Benefits Of Working At Algoscale

Opportunity to collaborate with leading companies across the globe.
Opportunity to work with the latest and trending technologies.
Competitive salary and performance-based bonuses.
Comprehensive group health insurance.
Flexible working hours and remote work options (for some positions only).
Generous vacation and paid time off.
Professional learning and development programs and certifications.