Data Engineer
Herndon, Fairfax County, Virginia, 22070, USA
Listed on 2026-03-01
Listing for: Bespoke Technologies, Inc.
Full Time position
Job specializations:
- Software Development
- Data Engineer
Job Description & How to Apply Below
BT-152 – Data Engineer
Skill Level: Mid
Location: Chantilly/Herndon
MUST HAVE AN ACTIVE TS OR TS/SCI CLEARANCE TO APPLY. Candidates without an active security clearance will not be considered.
As a Data Engineer, you will be a hands‑on builder and a key member of the team creating and sustaining the data lifeblood of the platform. You will apply your technical skills to develop, deploy, and maintain resilient and efficient data pipelines. This role is perfect for a practitioner who is passionate about leveraging modern tools and automation to solve complex data challenges and deliver high‑quality data solutions.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for both new development and ongoing operations & maintenance (O&M).
- Build new pipelines using modern, modular patterns like Databricks Delta Live Tables, adhering to established governance standards.
- Implement reusable pipeline templates and automated monitoring patterns to ensure consistency and scalability across all data flows.
- Utilize Infrastructure-as-Code (IaC) with tools like Terraform to create consistent and repeatable CI/CD deployments.
- Integrate automated data quality checks and profiling using frameworks like Great Expectations to ensure data integrity and validate SLAs.
- Troubleshoot pipeline issues, optimize performance, and contribute to the continuous improvement of O&M processes.
- Participate in code reviews and create clear documentation for ETL mappings, code, and deployment processes.
Qualifications:
- 4+ years of experience in data engineering.
- Experience developing and maintaining extract, transform, and load (ETL) tools and services.
- Proficiency in Python, SQL, and Spark/PySpark.
- Experience with cloud data platforms (e.g., AWS, Azure) and data engineering platforms like Databricks, Palantir, or Snowflake.
- Experience working in an Agile/Scrum environment.
- Direct experience with the data platform.
- Experience working in high‑security environments (e.g., SIPR, JWICS).
- Hands‑on experience with Databricks Delta Live Tables and Terraform.
- Familiarity with data orchestration tools (e.g., Airflow) and containerization (Docker, Kubernetes).
- Knowledge of COTS and open-source data engineering tools such as NiFi or Elasticsearch.