Sr. Data Platform Engineer – Elk Grove, CA (Hybrid – 3 days a week on-site)

Job in Elk Grove, Sacramento County, California, 95759, USA
Listing for: INSPYR Solutions
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Big Data, Data Science Manager
Salary/Wage Range: $160,000 – $175,000 USD per year
Job Description & How to Apply Below
Position: Sr. Data Platform Engineer – Elk Grove, CA (Hybrid – 3 days a week on-site)

Title: Sr. Data Platform Engineer

Location: Elk Grove, CA (Hybrid – 3 days a week on-site)

Duration: Direct Hire/FTE

Compensation: $160,000 – $175,000 base + 20% bonus and excellent benefits

Work Requirements: U.S. Citizens, Green Card Holders, or those Authorized to Work in the U.S.

Position Summary

The Senior Data Engineer is a technical expert responsible for designing, implementing, and supporting scalable data solutions using modern big data technologies. In this full-time role, you will work closely with data architects, analysts, and business stakeholders to build, optimize, and maintain data pipelines and platforms that drive analytics, reporting, and machine learning. You will ensure the reliability, performance, and security of production data systems, and play a key role in troubleshooting, monitoring, and continuous improvement.

KEY RESPONSIBILITIES
  • Design, develop, and maintain robust data pipelines for ingesting, transforming, and storing large volumes of structured, semi-structured, and unstructured data.
  • Implement data workflows and ETL processes using technologies such as Spark, Delta Lake, and cloud-native tools (see the ETL sketch after this list).
  • Support the production use of big data platforms (e.g., Databricks, Snowflake, Google Cloud Platform), ensuring high availability, scalability, and performance.
  • Monitor, troubleshoot, and resolve issues in production data systems, including job failures, performance bottlenecks, and data quality concerns.
  • Collaborate with data architects to translate business requirements into technical solutions, ensuring alignment with architectural standards and best practices.
  • Optimize SQL queries, Spark jobs, and data processing workloads for efficiency and cost-effectiveness.
  • Implement and maintain data governance, security, and compliance measures, including access controls, data masking, and audit logging.
  • Integrate data workflows into CI/CD pipelines, automate deployment processes, and manage source control using Git and related DevOps tools.
  • Document data pipelines, workflows, and operational procedures to support knowledge sharing and maintainability.
  • Stay current with emerging big data technologies and recommend improvements to enhance reliability, scalability, and efficiency.
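
To make the ETL responsibility concrete, here is a minimal sketch of the kind of batch pipeline the role involves, written in PySpark against Delta Lake. It assumes a Databricks-style environment where the Delta libraries are available; the bucket, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")  # hypothetical job name
    .getOrCreate()
)

# Ingest: read raw, semi-structured JSON landed in cloud storage (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: basic deduplication, filtering, and a derived partition column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("ingest_date", F.current_date())
)

# Load: append to a Delta table, partitioned for downstream query performance.
(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("analytics.orders")  # hypothetical table
)
```

Partitioning by ingest date is one common way to keep both the writes and the later "recent data" queries cheap; the real partitioning scheme would follow the platform's architectural standards.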
SUPERVISORY RESPONSIBILITIES

No direct reports.

Education and Experience
  • 7+ years of experience in data engineering or related roles, with hands‑on experience building and supporting production data pipelines.
  • Strong proficiency with big data platforms (e.g., Databricks, Snowflake, Google Cloud Platform) and the Spark ecosystem.
  • Experience with data lakehouse and warehouse architectures, including Delta Lake or similar technologies.
  • Deep expertise in Retrieval‑Augmented Generation (RAG) systems and advanced chunking strategies for optimizing semantic search and LLM integration, with experience delivering scalable solutions for enterprise knowledge retrieval and contextual AI applications (see the chunking sketch after this list).
  • Advanced skills in PySpark, Python, SQL, and related data processing frameworks.
  • Experience with cloud storage, messaging/streaming technologies (e.g., Apache Kafka, cloud Pub/Sub), and vector databases.
  • Proven ability to optimize ETL jobs, tune cluster configurations, and troubleshoot production issues.
  • Familiarity with data governance tools, catalog solutions, and security best practices.
  • Experience integrating data workflows into CI/CD pipelines and using DevOps tools (e.g., Git, Jenkins, Terraform).
  • Strong problem‑solving skills, attention to detail, and ability to work independently or as part of a team.
  • Excellent communication skills to collaborate with cross‑functional teams and document technical solutions.
  • Experience working in an Agile environment.
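
As a rough illustration of the chunking strategies mentioned above, here is a minimal, dependency-free Python sketch of fixed-size chunking with overlap, one common approach for preparing documents for embedding and semantic search. The chunk size and overlap values are illustrative assumptions, not a prescription.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows for embedding.

    Overlap keeps context that spans a boundary present in at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Each chunk would then be embedded and stored in a vector database,
# and retrieved at query time to ground the LLM's response.
document = "example enterprise document text " * 100  # stand-in input
for c in chunk_text(document)[:3]:
    print(len(c), repr(c[:40]))
```

Production systems often chunk on semantic boundaries (sentences, headings) rather than raw character counts, but the overlap idea carries over.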
Tools & Technologies
  • Big Data Platforms:
    Spark, Delta Lake, SQL engines, MLflow (or similar model tracking tools), relational databases.
  • Data Governance:
    Data catalog and access control solutions.
  • Programming & Data Processing:
    PySpark, Python, SQL.
  • Cloud Services:
    Experience with cloud storage, messaging/streaming technologies, vector databases, and BigQuery (see the streaming sketch after this list).
  • DevOps & CI/CD:
    Git for version control, Jenkins, and infrastructure‑as‑code tools (e.g., Terraform) are a plus.
  • Other Tools:
    Project and workflow…
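
For the streaming side of the stack, here is a minimal sketch of consuming a Kafka topic with Spark Structured Streaming and landing it in a Delta table. The broker address, topic, and storage paths are hypothetical, and the job assumes the Kafka and Delta connectors are on the classpath (as on Databricks).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Source: subscribe to a Kafka topic (hypothetical broker and topic names).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

# Sink: append to a Delta path; the checkpoint gives exactly-once recovery.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events")
    .outputMode("append")
    .start("s3://example-bucket/bronze/events")  # hypothetical landing path
)
query.awaitTermination()
```

The checkpoint location is what lets the job resume after a failure without duplicating records, which matters for the production-support duties described above.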