Sr. Engineer, Data
Listed on 2026-02-07
IT/Tech
Data Engineer
Company Description
Liv Cor, a Blackstone portfolio company, is a real estate asset management business specializing in multi-family housing. Formed in 2013 and headquartered in Chicago, Liv Cor is currently responsible for a portfolio of over 400 Class A and B properties comprising more than 150,000 units in markets across the United States.
Our business is focused on making real estate more valuable. But for us, it’s more than that. It’s people first, community always. It’s a life-filled career, not just a career-filled life. It’s doing good work, with good humans, and making a difference. It’s excellence in all its forms. Ultimately, we create great places to work, live, and grow. We do that by focusing on leaving people – and places – better than we found them.
Let’s talk about where you’d fit in:
You Should Be:
- Authentic. You do you. Together, we’ll do something amazing.
- A passionate person with a love for real estate and investing, who believes that helping others win is a noble cause essential to our success.
- An excellent team player who enjoys working with others and has strong interpersonal skills.
- Highly motivated, energetic, and organized.
We are seeking a highly skilled Sr. Engineer, Data to join our dynamic data team. The ideal candidate will have extensive experience designing, building, and optimizing data pipelines using ADLS, Databricks, Snowflake, and Python. You will play a critical role in developing scalable data infrastructure to support analytics, machine learning, and business intelligence initiatives. This role will also support data needs of business-critical applications.
What You Will Do
- Design, develop, and maintain robust data pipelines using Databricks to process and transform large-scale datasets.
- Write efficient, reusable, and scalable SQL and/or Python code for data ingestion, transformation, and integration.
- Optimize data workflows for performance, reliability, and cost-efficiency in cloud environments.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.
- Implement data governance, security, and compliance best practices within data pipelines.
- Monitor and troubleshoot data pipeline issues, ensuring high availability and data integrity.
- Leverage Databricks for advanced analytics, machine learning workflows, and real-time data processing.
- Integrate data from various sources (APIs, databases, blob storage, SFTP, streams) into the data lake for centralized storage and querying.
- Mentor junior data engineers and contribute to team knowledge sharing.
- Stay updated on emerging data technologies and recommend improvements to existing systems.
- Tune and optimize the performance of all data ingestion and data integration processes, including the data platform and databases.
What you should have:
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5+ years of experience in data or a related role; 3+ years of experience as a data engineer, software engineer, and/or data warehouse engineer.
- Advanced proficiency in SQL and/or Python for data processing, scripting, and automation.
- Hands-on experience building and optimizing data pipelines.
- Strong expertise in data modeling, performance tuning, and SQL development.
- Experience with a cloud platform (AWS, Azure, or GCP) and its data services.
- Proficiency in ETL/ELT processes and tools for data integration.
- Understanding of data schema modeling (dimensions, measures, slowly changing dimensions).
- Familiarity with version control (e.g., Git) and CI/CD practices for data engineering workflows.
- Strong problem-solving skills and ability to work with complex, unstructured datasets.
- Strong communication skills, with the ability to collaborate with cross-functional teams and to initiate and drive projects proactively and accurately within a large, diverse team.
- An overwhelming desire to learn new things and to help people succeed.
- Experience with Azure Databricks Delta Live Tables/Declarative Pipelines, Spark, DuckDB
- Familiarity with Azure Durable Functions
- Data Governance Frameworks and Unity…