
Data Platform Engineer, Data Capture

Job in Myrtle Point, Coos County, Oregon, 97458, USA
Listing for: Upstart
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Science Manager, Cloud Computing
Salary/Wage Range or Industry Benchmark: USD 80,000 – 100,000 per year
Job Description & How to Apply Below
Location: Myrtle Point

About Upstart

Upstart is the leading AI lending marketplace partnering with banks and credit unions to expand access to affordable credit. By leveraging Upstart's AI marketplace, Upstart-powered banks and credit unions can have higher approval rates and lower loss rates across races, ages, and genders, while simultaneously delivering the exceptional digital-first lending experience their customers demand. More than 80% of borrowers are approved instantly, with zero documentation to upload.

Upstart is a digital-first company, which means that most Upstarters live and work anywhere in the United States. However, we also have offices in San Mateo, California; Columbus, Ohio; and Austin, Texas.

Most Upstarters join us because they connect with our mission of enabling access to effortless credit based on true risk. If you are energized by the impact you can make at Upstart, we’d love to hear from you!

The Team

The Data Engineering team (part of Upstart's core platform vertical) provides developer tools, frameworks, and scalable data infrastructure as shared services across Upstart. The team's primary objective is to give Data Analysts, Software Engineers, and ML Scientists access to high-quality data and developer tools for creating business metrics for their respective product verticals.

You will join the subdivision focusing on Data Capture.

Role Overview

As a Data Platform Engineer on this team, you will have the opportunity to contribute to three areas:

  • Expand the developer tools that capture business events, operational data, and third-party data into the Lakehouse.
  • Expand the self-serve and orchestration framework that developers use to build pipelines.
  • Improve the observability and SLAs for the above services and raw layer data in the Lakehouse.

We would love to hear from you if you are passionate about building data products!

Position Location

This role is available in the following locations:
Remote

Time Zone Requirements

This team operates across East Coast and West Coast time zones.

Travel Requirements

As a digital-first company, the majority of your work can be accomplished remotely. Most of our employees can live and work anywhere in the U.S., but are encouraged to spend high-quality time collaborating in person via regular onsites. The cadence of these in-person sessions varies by team and role; most teams meet once or twice per quarter for 2–4 consecutive days at a time.

How You’ll Make an Impact
  • Build and improve the tools owned by the Capture team.
  • Own observability and SLAs for the capture services and raw-layer data.
  • Participate in planning and prioritization by collaborating with stakeholders across the various product verticals and functions (ML, Analytics, Finance) to ensure our architecture aligns with the overall business objectives.
  • Collaborate with stakeholders such as Software Engineering, Machine Learning, Machine Learning Platform and Analytics teams to ingest data into the Lakehouse and adopt the developer tools.
  • Participate in code reviews and architecture discussions to exchange actionable feedback with peers.
  • Contribute to engineering best practices and mentor junior team members.
  • Help break down complex projects and requirements into sprints.
  • Continuously monitor and improve data platform performance, reliability, and security.
  • Stay up-to-date with emerging technologies and industry best practices in data engineering.
Minimum Qualifications
  • A bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
  • 3+ years of experience in data engineering or related fields, with a strong focus on data quality, governance, and data infrastructure.
  • Proficiency in a data engineering tech stack: Databricks, PostgreSQL, Python, Spark, Kafka, streaming, SQL, AWS, Airflow, Airbyte, Fivetran, dbt, EKS, and containers/orchestration (Docker, Kubernetes), among others.
  • Ability to approach problems with first principles thinking, embrace ambiguity, and enjoy collaborative work on complex solutions.
Preferred Qualifications
  • Strong foundation in algorithms and data structures and their real-world use cases.
  • Experience and understanding of distributed systems, data architecture design, and big data…