Sr. Data Engineer
Job in Los Angeles, Los Angeles County, California, 90079, USA
Listed on 2026-01-25
Listing for: Van Kaizen
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Data Analyst, Data Science Manager, Cloud Computing
Job Description & How to Apply
Overview
A company that is revitalizing the manufacturing industry through technology is looking for a Sr. Data Engineer to work across hybrid environments, on-prem and cloud, to automate data movement, validate data quality, and enable analytical teams to derive insights from trusted, well-organized datasets.
Please note: this is a hybrid role, three days a week in the Downtown Los Angeles office, and the candidate must be a US Citizen or Permanent Resident (Green Card holder) not needing future sponsorship.
Responsibilities:
- Co-design data interfaces and pipelines in close collaboration with software engineers and technical leads, ensuring alignment with application domain models and product roadmaps.
- Model curated data within the lake into data warehouse structures (e.g., star schemas, wide tables, semantic layers) optimized for business intelligence (BI), ad-hoc analytics, and key performance indicator (KPI) reporting.
- Establish robust data observability and quality checks to ensure reliability and consistency.
- Operate the platform reliably by orchestrating jobs, monitoring pipelines, and continuously tuning cost and performance.
- Utilize modern data technologies to operationalize and expand the enterprise Data Lake.
- Implement efficient ingestion strategies, integrate diverse data sources, and ensure data is structured for accessibility and analysis.
Qualifications:
- Bachelor’s degree (BA/BS) in Computer Science, Data Science, Mathematics, Analytics, or a related quantitative field (or equivalent experience).
- 5-8+ years of proven experience building production-grade data systems with a strong understanding of cloud-based data lake architectures and data warehouses.
- Demonstrated expertise in designing and operating data pipelines, including schema evolution, backfills, and performance tuning.
- Hands-on proficiency with Python and SQL, including experience with distributed processing frameworks (e.g., Apache Spark) and CI/CD for data workflows.
- Proven ability to design and implement ETL/ELT workflows and data modeling techniques (e.g., star schemas, wide tables, semantic models).
- Proficiency with cloud data platforms and services such as AWS, Databricks, and Snowflake, with a focus on scalability and reliability.
- Familiarity with open table formats (e.g., Iceberg, Delta, Hudi) and business intelligence data modeling.
- Experience or strong interest in enabling AI/ML use cases.
Do you want to help design data pipelines, warehouses, and lakes from scratch for an industry-leading organization? Send your resume along with your application, and let's chat!