
Senior Databricks Platform Engineer

Job in Arlington, Arlington County, Virginia, 22201, USA
Listing for: Sev1tech, Inc.
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: USD 80,000 - 100,000 per year
Job Description & How to Apply Below

Sev1tech, Inc. Senior Databricks Platform Engineer

Location: US-VA-Arlington

Job Type: Full Time w/ Benefits and Retirement Match

# of Openings: 1

Arlington, VA

Overview

We are seeking a highly experienced and skilled Senior Data Lake Engineer to join our team. In this role, you will play a critical part in establishing and configuring an enterprise‑level Databricks solution to support our federal customer organization’s data lake initiatives. This position offers a unique opportunity to work with cutting‑edge technologies and shape the future of our federal customer’s data infrastructure.

If you are a highly skilled and experienced Senior Data Lake Engineer with expertise in Databricks and a passion for building scalable and secure data lake solutions, we would like to hear from you.

Responsibilities
  • Design, implement, and configure enterprise Data Lake solutions on AWS using Databricks, ensuring scalability, reliability, and performance.
  • Work directly with cross‑functional teams to gather requirements, analyze data integration needs, and translate them into actionable Data Lake architecture components.
  • Build and configure Databricks workspaces, clusters, notebooks, jobs, and Delta Lake storage, integrating with AWS services such as S3, IAM, and KMS.
  • Develop end‑to‑end data ingestion pipelines to extract, transform, and load data using Databricks, Apache Spark, and AWS services (see the ingestion sketch after this list).
  • Implement and maintain security controls including IAM policies, cluster security configurations, KMS‑based encryption, and data masking to protect sensitive data.
  • Create and optimize data transformation workflows, Delta Lake tables, and ETL/ELT processes to ensure high data quality and performance.
  • Monitor and tune Databricks clusters and job workloads for reliability, cost optimization, and performance, leveraging autoscaling and workload management.
  • Apply data governance best practices including cataloging, metadata management, and data lineage using Databricks Unity Catalog and AWS‑native capabilities.
  • Collaborate with cloud infrastructure teams to configure supporting AWS components such as S3 storage, networking, logging, monitoring, and access controls.
  • Maintain detailed technical documentation including solution designs, data flow diagrams, configuration standards, and operational procedures.
  • Stay up to date with advancements in Databricks, Delta Lake, Spark, and AWS services, integrating new features that improve automation and efficiency.
  • Partner with data engineers, analysts, and scientists to implement data models and reusable transformation patterns within Databricks and AWS.
  • Troubleshoot and resolve ingestion failures, cluster performance issues, pipeline errors, and data quality problems across AWS and Databricks environments.
  • Ensure compliance with data governance, privacy, and security requirements by applying secure architecture patterns and validating controls throughout the data lifecycle.
  • Support the evaluation and hands‑on testing of new AWS and Databricks features or services that enhance the Data Lake environment.
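
As a concrete illustration of the ingestion and Delta Lake work described above, the following is a minimal PySpark sketch of a batch load from S3 into a Delta table. It assumes a Databricks or Delta-enabled Spark environment; the bucket paths, column names, and partitioning choice are hypothetical placeholders rather than details of the actual customer environment.

  # Minimal sketch: batch-ingest raw CSV files from S3 into Delta Lake.
  # All paths and column names are hypothetical placeholders.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("s3-to-delta-ingest").getOrCreate()

  # Extract: read raw files landed in an S3 bucket.
  raw = (
      spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://example-landing-bucket/raw/events/")
  )

  # Transform: deduplicate and stamp each row for basic lineage.
  cleaned = (
      raw.dropDuplicates()
      .withColumn("ingested_at", F.current_timestamp())
      .withColumn("ingest_date", F.to_date(F.col("ingested_at")))
  )

  # Load: append to the Delta table, partitioned by ingest date so
  # downstream jobs can prune partitions when reading recent data.
  (
      cleaned.write
      .format("delta")
      .mode("append")
      .partitionBy("ingest_date")
      .save("s3://example-lake-bucket/delta/events/")
  )
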
Qualifications
  • Bachelor's degree in computer science, information technology, or a related field. Equivalent experience will also be considered.
  • Proven experience in building and configuring enterprise‑level data lake solutions using Databricks in an AWS or Azure environment.
  • In‑depth knowledge of Databricks architecture, including workspaces, clusters, storage, notebook development, and automation capabilities.
  • Strong expertise in designing and implementing data ingestion pipelines, data transformations, and data quality processes using Databricks.
  • Experience with big data technologies such as Apache Spark, Apache Hive, Delta Lake, and Hadoop.
  • Solid understanding of data governance principles, data modeling, data cataloging, and metadata management.
  • Hands‑on experience with cloud platforms like AWS or Azure, including relevant services such as S3, EMR, Glue, Data Factory, etc.
  • Proficiency in SQL and one or more programming languages (Python, Scala, or Java) for data manipulation and transformation.
  • Knowledge of data security and privacy best practices, including data access controls, encryption, and data masking (a masking sketch follows this list)…
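
Because the final qualification mentions data masking, here is a minimal PySpark sketch of column-level masking. The sample data, column names, and masking rules are hypothetical and illustrate the concept only.

  # Minimal sketch: mask sensitive columns before exposing data downstream.
  # Sample rows, column names, and masking rules are hypothetical.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("masking-demo").getOrCreate()

  df = spark.createDataFrame(
      [("alice@example.com", "123-45-6789")],
      ["email", "ssn"],
  )

  masked = df.select(
      # Hash the email so records stay joinable without exposing the value.
      F.sha2(F.col("email"), 256).alias("email_hash"),
      # Redact all but the last four digits of the SSN.
      F.concat(F.lit("***-**-"), F.substring("ssn", -4, 4)).alias("ssn_masked"),
  )
  masked.show(truncate=False)
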
Position Requirements
10+ years of work experience