
Architect (On-prem) Data Engineer

Job in Chicago, Cook County, Illinois, 60290, USA
Listing for: MA CAPITAL U.S. LLC
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    Data Engineer, Data Scientist, Data Warehousing
Salary/Wage Range: USD 80,000 - 100,000 per year
Job Description & How to Apply Below
Position: Architect (On-prem) Data Engineer

Overview

MA Capital US LLC is a proprietary trading firm specializing in systematic and high-performing discretionary strategies across multiple asset classes. We leverage advanced technology, quantitative research, and sophisticated models to capitalize on opportunities in global markets. Our culture is built on innovation, efficiency, and transparency, providing our professionals with the tools and flexibility to succeed.

Position Overview

We’re seeking an Architect (On-prem) Data Engineer to define and build the firm’s data strategy and core data platform. This is a founding, senior, hands-on role responsible for establishing the framework for live data ingestion, at-rest storage, archival, and downstream consumption across research and trading systems. This role has end-to-end ownership of the data lifecycle and requires balancing architectural direction with hands-on delivery.

The role is roughly 70% hands-on engineering and 30% design and strategy. The ideal candidate is comfortable operating independently, making pragmatic trade-offs, and building systems without over-engineering.

Responsibilities

Data Architecture & Strategy

  • Define the data architecture and operating model spanning live ingestion, at-rest datasets, and long-term archival.
  • Establish a data lifecycle management framework, including storage tiers, processing patterns, and retention policies.
  • Design and maintain centralized data repositories (data lake / warehouse) optimized for large-scale quantitative research and analytics.
  • Set standards for data formats, schemas, partitioning, and lifecycle management, balancing performance, cost, and scalability.

Data Infrastructure

  • Design and build high-throughput, fault-tolerant data pipelines for market data & analytics workloads.
  • Develop scalable ETL/ELT workflows supporting real-time, near-real-time, batch, and replay-based research pipelines.
  • Implement storage solutions optimized for large-scale, time-series and event-driven datasets.
  • Develop core abstractions and tooling that allow new datasets and features to be added without reworking foundational systems.
  • Establish Infrastructure as Code and automation patterns to ensure reliability, reproducibility, and operability as a largely solo resource.
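As an illustration of the partitioning and lifecycle standards described above, here is a minimal sketch of a date/symbol partition-key scheme for time-series datasets. The layout and function name are hypothetical examples, not the firm's actual conventions:

```python
from datetime import datetime, timezone
from pathlib import PurePosixPath

def partition_path(root: str, symbol: str, ts: datetime) -> str:
    """Map a (symbol, timestamp) pair to a Hive-style partition path.

    Partitioning by trade date first, then symbol, keeps daily research
    scans contiguous and makes retention policies (e.g. archiving whole
    date= directories to cold storage) trivial to apply.
    """
    d = ts.astimezone(timezone.utc).date()
    return str(
        PurePosixPath(root)
        / f"date={d.isoformat()}"
        / f"symbol={symbol}"
        / "data.parquet"
    )

path = partition_path(
    "lake/ticks", "AAPL",
    datetime(2026, 3, 1, 14, 30, tzinfo=timezone.utc),
)
# -> "lake/ticks/date=2026-03-01/symbol=AAPL/data.parquet"
```

Date-first partitioning is one common choice; symbol-first layouts trade daily-scan locality for faster single-instrument history reads.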

End-to-End Data Ownership

  • Own the full data lifecycle from ingestion through downstream consumption by research and trading systems.
  • Establish standards for data quality, validation, reproducibility, and operational reliability.
  • Implement monitoring, alerting, and operational controls for data pipelines.
  • Maintain clear documentation and conventions to ensure consistent usage across teams.
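One concrete form the data-quality and monitoring checks above might take (a hedged sketch; the function and its checks are illustrative, not part of the posting) is a timestamp-ordering and gap check over a tick stream, whose findings would feed an alerting system:

```python
from datetime import datetime, timedelta

def validate_ticks(timestamps: list[datetime], max_gap: timedelta) -> list[str]:
    """Return a list of data-quality issues found in a tick stream.

    Checks two invariants common to market-data pipelines:
    out-of-order timestamps, and gaps longer than `max_gap`
    (which often indicate a dropped feed).
    """
    issues = []
    for i in range(1, len(timestamps)):
        prev, cur = timestamps[i - 1], timestamps[i]
        if cur < prev:
            issues.append(f"out-of-order tick at index {i}")
        elif cur - prev > max_gap:
            issues.append(f"gap of {cur - prev} before index {i}")
    return issues
```

In practice such checks would run per dataset on ingestion, with the issue list exported to a monitoring system rather than returned in memory.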

Collaboration & Enablement

  • Work closely with traders, researchers, and engineers to translate strategy requirements into robust data systems.
  • Enable rapid onboarding of new strategies by delivering reliable historical and live datasets.
  • Support ongoing model development by providing extensible pipelines for feature generation and experimentation.

Required Qualifications

  • 8+ years designing and building data infrastructure or distributed systems in data-intensive environments; experience building platforms from scratch strongly preferred. Trading, HFT, or fintech experience is a plus.
  • Strong proficiency in Python and SQL; familiarity with high-speed data protocols (FIX, ITCH, multicast) is advantageous.
  • Experience with high-throughput, event-driven systems (e.g., Kafka, Spark/Flink) and stateful processing.
  • Deep understanding of distributed systems, storage architectures, partitioning, ordering guarantees, and performance trade-offs.
  • Experience operating in on-prem or self-managed environments, including deployment automation and cluster operations.
  • Familiarity with columnar storage (e.g., Parquet), data lake architectures, or time-series databases.
  • Proven ability to operate independently and deliver production-grade systems end-to-end.

Nice to Have

  • Experience with market data ingestion, exchange feeds, order book reconstruction, or pcap workflows.
  • Background in low-latency or real-time trading systems.
  • Experience building replayable pipelines ensuring research/production parity.

Why Join Us?

  • Foundational Impact:
    Shape the firm’s core data platform and influence the long-term research and trading technology strategy.
  • Flexible Working Options:
    Hybrid schedule based in Chicago.
  • Startup Environment:
    Agile, entrepreneurial culture encouraging ownership and rapid decision-making.
  • Efficient Infrastructure:
    Proprietary low-latency platform supporting systematic & discretionary trading.
  • Comprehensive Health Coverage:
    Medical, dental, and vision insurance.
  • 401(k) Retirement Plan:
    Supporting long-term financial security.