Databricks Administrator / Cloud Systems Engineer – Onsite

Job in Washington, District of Columbia, 20022, USA
Listing for: VIVA USA INC
Full Time position
Listed on 2026-01-27
Job specializations:
  • IT/Tech
    Cloud Computing, Data Engineer, Cybersecurity, IT Support
Salary/Wage Range or Industry Benchmark: 125,000 – 150,000 USD yearly
Job Description

Overview

BACKGROUND:
The Databricks Administrator is the hands-on technical owner of the client’s Databricks platform supporting EDP. This role is accountable for platform operations, security, and governance configuration end to end, ensuring the environment is compliant, reliable, and cost-controlled, and that it enables secure analytics and AI/ML workloads at scale.

REQUIREMENTS:
The candidate shall demonstrate the following knowledge and experience:

Responsibilities
  • Administer the Databricks account and workspaces across SDLC environments; standardize configuration, naming, and operational patterns.
  • Configure and maintain clusters/compute, job compute, SQL warehouses, runtime versions, libraries, repos, and workspace settings.
  • Implement platform monitoring/alerting, operational dashboards, and health checks; maintain runbooks and operational procedures.
  • Provide Tier 2/3 operational support: troubleshoot incidents, perform root-cause analysis, and drive remediation and preventive actions.
  • Manage change control for upgrades, feature rollouts, configuration changes, and integration changes; document impacts and rollback plans.
  • Enforce least privilege across platform resources (workspaces, jobs, clusters, SQL warehouses, repos, secrets) using role/group-based access patterns.
  • Configure and manage secrets and secure credential handling (secret scopes / key management integrations) for platform and data connectivity.
  • Enable and maintain audit logging and access/event visibility; support security reviews and evidence requests.
  • Administer Unity Catalog governance: metastores, catalogs/schemas/tables, ownership, grants, and environment/domain patterns.
  • Configure and manage external locations, storage credentials, and governed access to cloud object storage.
  • Partner with governance stakeholders to support metadata/lineage integration, classification/tagging, and retention controls where applicable.
  • Coordinate secure connectivity and guardrails with cloud/network teams: private connectivity patterns, egress controls, firewall/proxy needs.
  • Configure cloud integrations required for governed data access and service connectivity (roles/permissions, endpoints, storage integrations).
  • Implement cost guardrails: cluster policies, auto-termination, scheduling, workload sizing standards, and capacity planning.
  • Produce usage/cost insights and optimization recommendations; address waste drivers (idle compute, oversized clusters, inefficient jobs).
  • Automate administration and configuration using APIs/CLI/IaC (e.g., Terraform) to reduce manual drift and improve repeatability.
  • Maintain platform documentation: configuration baselines, security/governance standards, onboarding guides, and troubleshooting references.
  • Design and implement backup and disaster recovery procedures for workspace configurations, notebooks, Unity Catalog metadata, and job definitions; maintain recovery runbooks and perform periodic DR testing aligned to RTO/RPO objectives.
  • Monitor and optimize platform performance, including SQL warehouse query tuning, cluster autoscaling configuration, Photon enablement, and Delta Lake optimization guidance (OPTIMIZE, VACUUM, Z-ordering strategies).
  • Administer Delta Live Tables (DLT) pipelines and coordinate with data engineering teams on pipeline health, data quality monitoring, failed job remediation, and pipeline configuration best practices.
  • Manage third-party integrations and ecosystem connectivity, including BI tool integrations (e.g., Power BI), and external metadata catalog integrations.
  • Implement Databricks Asset Bundles (DABs) for standardized deployment patterns; automate workspace resource deployment (jobs, pipelines, dashboards) across SDLC environments using bundle-based CI/CD workflows.
  • Conduct capacity planning and scalability analysis, including forecasting concurrent user/workload growth, platform scaling strategies, and proactive resource allocation during peak usage periods.
  • Facilitate user onboarding and enablement, including new user/team onboarding procedures, training coordination, workspace access provisioning, and creation of self-service…
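As a concrete illustration of the cost-guardrail work described above (cluster policies, auto-termination, workload sizing), the sketch below builds a Databricks cluster-policy definition as JSON. This is a minimal, hypothetical example: the field names follow the Databricks cluster-policy definition format (`fixed`/`range`/`allowlist` policy types), but the specific limits and node types are illustrative assumptions, not values from this posting.

```python
import json

# Hypothetical cost-guardrail cluster policy (illustrative values only).
cost_guardrail_policy = {
    # Force idle clusters to shut down within 30 minutes.
    "autotermination_minutes": {"type": "range", "maxValue": 30, "defaultValue": 15},
    # Cap autoscaling to avoid oversized interactive clusters.
    "autoscale.max_workers": {"type": "range", "maxValue": 8},
    # Restrict node types to an approved, right-sized list (example SKUs).
    "node_type_id": {
        "type": "allowlist",
        "values": ["Standard_DS3_v2", "Standard_DS4_v2"],
    },
}

policy_json = json.dumps(cost_guardrail_policy, indent=2)
print(policy_json)
```

A definition like this would typically be applied through the Databricks cluster-policies API or Terraform so that every new cluster inherits the limits automatically, rather than relying on per-user discipline.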