Senior Data Engineer
Job in Glendale, Los Angeles County, California, 91222, USA
Listing for: KellyMitchell Group
Full Time position, listed on 2026-01-07
Job specializations:
- Software Development: Data Engineer
Job Description
Job Summary
Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.
Duties
- Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
- Build tools and services to support data discovery, lineage, governance, and privacy
- Collaborate with other software and data engineers and cross‑functional teams
- Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
- Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
- Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
- Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
- Participate in agile and scrum ceremonies to collaborate and refine team processes
- Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
- Maintain detailed documentation of work and changes to support data quality and data governance requirements
Required Skills & Experience
- 5+ years of data engineering experience developing large data pipelines
- Proficiency in at least one major programming language such as Python, Java, or Scala
- Strong SQL skills and the ability to create queries to analyze complex datasets
- Hands‑on production experience with distributed processing systems such as Spark
- Experience interacting with and ingesting data efficiently from API data sources
- Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
- Hands‑on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
- Experience developing APIs with GraphQL
- Deep understanding of AWS or other cloud providers, as well as infrastructure‑as‑code
- Familiarity with data modeling techniques and data warehousing best practices
- Strong algorithmic problem‑solving skills
- Excellent written and verbal communication skills
- Advanced understanding of OLTP versus OLAP environments
Benefits
- Medical, Dental, & Vision Insurance Plans
- Employee‑Owned Profit Sharing (ESOP)
- 401(k)
The approximate pay range for this position is between $51.00 and $73.00 per hour. Please note that the pay range provided is a good faith estimate; final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Seniority level: Mid‑Senior level
Employment type: Contract
Position Requirements
10+ years work experience