
Data engineer

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: writer.com
Full Time position
Listed on 2026-01-18
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager
Job Description & How to Apply Below

🚀 About WRITER

WRITER is where the world's leading enterprises orchestrate AI-powered work. Our vision is to expand human capacity through superintelligence, and we're proving it's possible through powerful, trustworthy AI that unites IT and business teams to unlock enterprise-wide transformation. With WRITER's end-to-end platform, hundreds of companies like Mars, Marriott, Uber, and Vanguard are building and deploying AI agents that are grounded in their company's data and fueled by WRITER's enterprise-grade LLMs.

Valued at $1.9B and backed by industry-leading investors including Premji Invest, Radical Ventures, and ICONIQ Growth, WRITER is rapidly cementing its position as the leader in enterprise generative AI.

Founded in 2020 with office hubs in San Francisco, New York City, Austin, Chicago, and London, our team thinks big and moves fast, and we're looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work with AI.

📐 About the role

We're looking for a data engineer to help design, build, and scale the data infrastructure that powers our analytics, reporting, and product insights. As part of a small but high-impact data team, you'll define the architectural foundation and tooling for our end-to-end data ecosystem—the systems that enable data-driven decisions across the entire company and fuel WRITER's rapid growth.

You’ll work closely with engineering, product, and business stakeholders to build robust pipelines, scalable data models, and reliable workflows that transform raw data into actionable intelligence. This isn't just about moving data around—it's about creating the foundation that helps us understand how our AI agents perform, how our customers succeed, and where our biggest opportunities lie. If you're passionate about data infrastructure and solving complex data problems at a company that's reshaping how enterprises work with AI, this is your opportunity to make an outsized impact.

Core tools in our tech stack include:
Snowflake, BigQuery, dbt, Fivetran, Hightouch, Segment

Periphery tools: AWS DMS, Google Datastream, Terraform, GitHub Actions

This is a hybrid role based out of our San Francisco or New York City hubs, reporting to the director of analytics.

🦸🏻‍♀️ What you'll do
  • Design and maintain scalable, fault-tolerant data pipelines and ingestion frameworks across multiple data sources, ensuring reliable delivery of critical business data
  • Architect and optimize our data warehouse (Snowflake/BigQuery) to ensure performance, cost efficiency, and security as we scale to support hundreds of enterprise customers
  • Build efficient and reusable data models optimized for both analytical and operational workloads, enabling teams across WRITER to access the insights they need
  • Define and implement data governance frameworks including schema management, lineage tracking, and access control to maintain data quality and security
  • Build and manage robust ETL workflows using dbt and orchestration tools like Airflow or Prefect, with comprehensive monitoring, alerting, and logging to ensure pipeline observability
  • Develop data validation, testing, and anomaly detection systems that catch issues before they impact business decisions, establishing SLAs for key data assets
  • Partner with product and engineering teams to deliver data solutions that align with downstream use cases, from training AI models to measuring product performance
⭐️ What you need
  • 5+ years of hands-on experience in a data engineering role, ideally in a SaaS or fast-growing tech environment where you've built systems from the ground up
  • Expert-level proficiency in SQL, dbt, and Python—you write clean, efficient code and know how to optimize queries that run against massive datasets
  • Strong experience with data pipeline orchestration tools (Airflow, Prefect, Dagster, or similar) and CI/CD for data workflows—you treat pipelines like production software
  • Deep understanding of cloud-based data architectures on AWS or GCP, including networking, IAM, and security best practices that keep enterprise data safe
  • Strong grasp of data modeling principles, warehouse optimization, and cost…