Senior Software Engineer, Data Platform
Listed on 2026-03-12
IT/Tech
Data Engineer, Data Science Manager
Build a Safer World.
TRM Labs provides blockchain analytics and AI solutions to help law enforcement and national security agencies, financial institutions, and cryptocurrency businesses detect, investigate, and disrupt crypto‑related fraud and financial crime. TRM’s blockchain intelligence and AI platforms include solutions to trace the source and destination of funds, identify illicit activity, build cases, and construct an operating picture of threats. TRM is trusted by leading agencies and businesses worldwide who rely on TRM to enable a safer, more secure world for all.
Overview
The Data Platform team collaborates with an experienced group of data scientists, engineers, and product managers to build highly available and scalable data infrastructure for TRM's products and services. As a Senior Software Engineer on the Data Platform team, you will build and operate mission‑critical systems and data services that analyze blockchain transaction activity at petabyte scale, ultimately helping to build a safer financial system for billions of people.
Impact You’ll Have Here
- Build highly reliable data services to integrate with dozens of blockchains.
- Develop complex ETL pipelines that transform and process petabytes of structured and unstructured data in real time.
- Design and architect intricate data models for optimal storage and retrieval to support sub‑second latency for querying blockchain data.
- Oversee the deployment and monitoring of large database clusters with an unwavering focus on performance and high availability.
- Collaborate across departments, partnering with data scientists, backend engineers, and product managers to design and implement novel data models that enhance TRM’s products.
What We’re Looking For
- A Bachelor's degree (or equivalent) in Computer Science or a related field.
- A proven track record, with 5+ years of hands‑on experience architecting distributed systems and guiding projects from initial ideation through to successful production deployment.
- Exceptional programming skills in Python, as well as adeptness in SQL or Spark SQL.
- Versatility that spans the entire spectrum of data engineering in one or more of the following areas:
- In‑depth experience with data stores such as Iceberg, Trino, BigQuery, StarRocks, and Citus.
- Proficiency in data pipeline and workflow orchestration tools such as Airflow and dbt.
- Expertise in data processing technologies and streaming workflows including Spark, Kafka, and Flink.
- Competence in deploying and monitoring infrastructure within public cloud platforms, utilizing tools such as Docker, Terraform, Kubernetes, and Datadog.
- Proven ability in loading, querying, and transforming extensive datasets.
- The Data Platform team is the funnel between TRM's data world and its product world. We care about every layer of the stack, including petabyte‑scale data stores, pipelines, and processes.
- We have a broad scope as a team, with new and exciting projects every quarter. As a result, we collaborate across the board with most teams at TRM.
- We believe in async communication, but we are also not afraid to jump on a quick huddle if that helps move things faster. We are scrappy when the situation demands it and process‑oriented when we need to achieve our OKRs.
- We are always looking for people who can elevate the quality of our tech and our execution. If you enjoy a remote‑first, async‑friendly environment focused on efficacy and efficiency at petabyte scale, our team could be a great fit for you!
- Team members are based in the US across almost all timezones! Our on‑call rotation tends to follow EST/PST shifts, whichever suits you best.
- We do try to reserve some overlap in the day for meetings. Our north star: no IC spends more than 3‑4 hours per week in meetings.
- Build scalable tooling to streamline routine scaling and maintenance tasks, such as self‑serve automation for provisioning new pgbouncer instances, scaling disks, and scaling/updating clusters.
- Make these tasks faster the next time and reduce dependency on any single person.
- Identify ways to compress timelines using the 80/20 principle. For instance, what does…