
Legal Data Acquisition Engineer (Scraping & Extraction)

Job in Zürich, 8058, Kanton Zürich, Switzerland
Listing for: Omnilex
Full Time position
Listed on 2026-03-06
Job specializations:
  • IT/Tech
    AI Engineer, Data Analyst, Data Security, Data Engineer
Salary/Wage Range: CHF 30,000 – 80,000 per year
Job Description & How to Apply Below
Position: Legal Data Acquisition Engineer (Scraping & Extraction)
Location: Zürich

The Role (Behind the Scenes, Mission-Critical)

We’re looking for an engineer who loves the messy reality of web data: dynamic pages, broken markup, inconsistent PDFs, changing source structures, missing metadata, rate limits, anti-bot protections, and jurisdiction-specific publishing habits.

This is a behind-the-scenes role with huge product impact. You’ll build and maintain the systems that continuously collect and extract legal content from websites, APIs, bulk files, and document repositories and turn it into reliable inputs for our AI products.

If you enjoy scraping, parsing, reverse-engineering content structures, and designing robust ingestion pipelines that survive real-world change, this role is for you.

About Omnilex

Omnilex is a young, dynamic AI legal tech startup with roots at ETH Zurich. Our interdisciplinary team is building AI-native tools for legal research and answering complex legal questions across jurisdictions.

A core reason we stand out is our data foundation: combining external legal sources, customer-internal sources, and our own AI-first legal content. This role strengthens that foundation.

Tasks

⚙️ What You’ll Work On

Your focus will be source acquisition, scraping, parsing, and extraction reliability for legal data.

Core responsibilities

Build and maintain resilient pipelines to ingest legal content from:

  • public websites
  • APIs
  • document portals
  • bulk datasets
  • PDFs / HTML / XML / DOCX-like formats

Design scraping systems that are robust to:

  • layout changes
  • pagination quirks
  • JavaScript-rendered sites
  • inconsistent metadata
  • rate limits and retry behavior

Implement parsers and extractors for legal documents (statutes, decisions, guidance, commentaries, etc.)
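
To give a flavor of the robustness work described above (rate limits, retry behavior, flaky sources), a minimal retry-with-backoff sketch in TypeScript might look like the following. The function names, defaults, and types here are illustrative assumptions for this posting, not Omnilex code:

```typescript
// Sketch: retrying a flaky source with exponential backoff.
// fetchWithRetry and its defaults are illustrative, not a real Omnilex API.

type FetchLike = (url: string) => Promise<{ status: number; body: string }>;

/** Exponential backoff with a cap: 500ms, 1s, 2s, ... up to maxMs. */
function backoffMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

async function fetchWithRetry(
  url: string,
  doFetch: FetchLike,
  maxAttempts = 4,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await doFetch(url);
    if (res.status === 200) return res.body;
    // 429 (rate limited) and 5xx are retryable; anything else is a hard failure.
    if (res.status !== 429 && res.status < 500) {
      throw new Error(`unrecoverable status ${res.status} for ${url}`);
    }
    await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
  }
  throw new Error(`gave up on ${url} after ${maxAttempts} attempts`);
}
```

In production such a loop would also honor `Retry-After` headers and per-source rate budgets; the sketch only shows the core retry/backoff shape.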

Extract and structure:

  • document text
  • headings/sections
  • citations and references
  • dates, courts, authorities, identifiers
  • language / jurisdiction metadata

Build source-specific adapters and reusable extraction components (rather than one-off scripts)

Monitor source health and detect breakage quickly (e.g., selector failures, coverage drops, schema drift)

Improve data quality with validation checks, deduplication, canonicalization, and content versioning

Work closely with AI/data/search teammates so extracted data is optimized for downstream indexing, RAG, and analytics

Document source behavior and operational playbooks so ingestion remains maintainable as we scale

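The structured fields listed above (text, sections, citations, dates, language/jurisdiction metadata) could map onto a record shape like the following TypeScript sketch. The interface and field names are assumptions for illustration, not a published Omnilex schema:

```typescript
// Illustrative record shape for one extracted legal document; the interface
// and field names are assumptions for this sketch, not a published schema.

interface ExtractedSection {
  heading: string;
  text: string;
}

interface LegalDocument {
  sourceId: string;        // stable identifier within the originating source
  canonicalUrl: string;
  title: string;
  sections: ExtractedSection[];
  citations: string[];     // references to other statutes or decisions
  decisionDate?: string;   // ISO 8601 date, when the source provides one
  court?: string;
  language: string;        // e.g. "de", "fr", "it", "en"
  jurisdiction: string;    // e.g. "CH", "EU"
  contentHash: string;     // hash of normalized text, for dedup/versioning
}

/** Two records are duplicates when their normalized content hashes match. */
function isDuplicate(a: LegalDocument, b: LegalDocument): boolean {
  return a.contentHash === b.contentHash;
}
```

A content hash over normalized text is one common way to support the deduplication, canonicalization, and versioning responsibilities mentioned above: the same document fetched from two mirrors collapses to one record, and a changed hash signals a new version.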
What Success Looks Like

In this role, success is not “number of scrapers written.” Success looks like:

  • high source coverage across target jurisdictions
  • fast detection and repair when sources change
  • clean, structured extractions with fewer downstream fixes
  • stable ingestion SLAs and predictable runtimes/costs
  • reusable tooling that makes adding new sources faster over time
Requirements

✅ Minimum Qualifications

  • Degree in Computer Science, Data Science, Software Engineering, or related field — or equivalent practical experience
  • Strong hands-on engineering experience with TypeScript (backend/data pipeline context)
  • Real experience building web scraping / crawling / extraction pipelines in production
  • Strong understanding of HTML/DOM parsing, HTTP, pagination, sessions/cookies, and common web data edge cases
  • Experience working with messy document formats (especially PDFs) and text extraction challenges
  • Good SQL skills (PostgreSQL) and experience storing structured/unstructured content
  • Strong debugging skills and a pragmatic mindset: you can make unreliable sources reliable
  • Ability to work with ownership in a fast-moving startup
  • Availability full-time; on-site in Zurich at least two days per week (hybrid)

🌟 Preferred Qualifications

  • Familiarity with modern scraping and browser automation tools (e.g., Playwright, Puppeteer)
  • Experience with PDF/document tooling, OCR pipelines, and parsing libraries
  • Experience designing queue-based or worker-based ingestion systems
  • Experience with Azure (including storage/search services), Docker, and CI/CD
  • Working proficiency in German and proficiency in English
  • Swiss work permit or EU/EFTA citizenship
  • Experience with legal or regulatory document structures (Switzerland / Germany / EU / US is a plus)
  • Familiarity with downstream AI/search use cases (chunking, embeddings, indexing, citation…)