Senior Software Engineer - Data Lakehouse Infrastructure

Job in Worcester, Worcester County, Massachusetts, 01609, USA
Listing for: TRM Labs
Part Time position
Listed on 2026-01-12
Job specializations:
  • Software Development
    Data Engineer, Software Engineer
Job Description

TRM is on a mission to build a safer financial system for billions of people. We deliver a blockchain intelligence data platform to financial institutions, crypto companies, and governments to fight cryptocurrency fraud and financial crime. We view our business and our profit as a way to advance our mission sustainably and at scale.

The Data Platform team collaborates with an experienced group of data scientists, engineers, and product managers to build highly available and scalable data infrastructure for TRM's products and services. As a Software Engineer on the Data Platform team, you will be responsible for building and operating mission-critical systems and data services that analyze blockchain transaction activity at petabyte scale, and ultimately help build a safer financial system for billions of people.

The impact you'll have here:
  • Build highly reliable data services to integrate with dozens of blockchains.
  • Develop complex ETL pipelines that transform and process petabytes of structured and unstructured data in real time.
  • Design and architect intricate data models for optimal storage and retrieval to support sub‑second latency for querying blockchain data.
  • Oversee the deployment and monitoring of large database clusters with an unwavering focus on performance and high availability.
  • Collaborate across departments, partnering with data scientists, backend engineers, and product managers to design and implement novel data models that enhance TRM's products.
What we're looking for:
  • Bachelor's degree (or equivalent) in Computer Science or a related field.
  • A proven track record, with 5+ years of hands-on experience architecting distributed systems and guiding projects from initial ideation through to successful production deployment.
  • Exceptional programming skills in Python, as well as adeptness in SQL or Spark SQL.
  • Versatility that spans the entire spectrum of data engineering, with depth in one or more of the following areas:
    • In-depth experience with data stores such as ClickHouse, Elasticsearch, Postgres, Redis, and Neo4j.
    • Proficiency in data pipeline and workflow orchestration tools like Airflow, DBT, Luigi, Azkaban, and Storm.
    • Expertise in data processing technologies and streaming workflows, including Spark, Kafka, and Flink.
    • Competence in deploying and monitoring infrastructure within public cloud platforms, using tools such as Docker, Terraform, Kubernetes, and Datadog.
    • Proven ability in loading, querying, and transforming extensive datasets.
About the Team:
  • The Data Platform team is the funnel between TRM's data world and its product world. We care about all layers of the stack, including petabyte-scale data stores, pipelines, and processing.
  • The team has a broad scope, with new and exciting projects every quarter. As a result, we collaborate with most teams across TRM.
  • We believe in async communication, and we are not afraid to jump on a quick huddle if that helps move things faster. We are scrappy when the situation demands it and process-oriented when we need to achieve our OKRs.
  • We are always looking for people who can elevate the quality of our tech and our execution. If you enjoy a remote-first, async-friendly environment and want to achieve efficacy and efficiency at petabyte scale, our team could be a great fit for you!
  • Team members are based across almost all US time zones! Our on-call rotation tends to run on EST or PST shifts, whichever suits you best.
  • We do try to reserve some overlap during the day for meetings. Our north star: no IC spends more than 3-4 hours per week in meetings.
What TRM Speed looks like in this position:
  • Build scalable engines to streamline routine scaling and maintenance tasks, such as self-serve automation for creating new pgbouncer instances, scaling disks, and scaling or updating clusters.
  • Make these tasks faster the next time around and reduce dependency on any single person.
  • Identify ways to compress timelines using the 80/20 principle. For instance, what does it take to be operational in a new environment? Identify the must-haves and nice-to-haves needed to deploy our stack and be fully operational. Focus on the must-haves first to get us operational, and then use future milestones to…
Position Requirements
10+ years of work experience