AI Infrastructure Architect
Listed on 2026-01-13
IT/Tech
Data Engineer, Cloud Computing, Systems Engineer, Data Security
Austin or Remote (with flexibility for timezone overlap)
About Dreambase
We're completely reimagining product analytics by flipping the entire industry approach upside down. While everyone else forces you through the painful dance of data extraction, transformation, expensive tooling, and armies of data engineers just to understand your users, we start where your data already lives: your Postgres product database. From there, we build up and out, connecting database records to event streams to behavioral patterns to research insights to your actual codebase in one unified system that eliminates the traditional pipeline nightmare and puts real product intelligence directly in your hands.
Your mission
As our first AI Infrastructure Architect, you'll build the unbreakable foundation that makes our AI-native analytics revolution possible at enterprise scale. You'll architect data systems that seamlessly handle everything from a startup's first thousand events to enterprises processing billions of records, designing the infrastructure that keeps customer data secure and compliant while delivering lightning-fast insights. This isn't just backend work: you're engineering the critical systems that let us deliver on our promise to eliminate the analytics nightmare, building infrastructure so robust and intelligent that it makes the impossible feel effortless.
What you'll do
Architect for Scale: Design and build database systems that gracefully scale from gigabytes to petabytes while maintaining sub-second query performance
Secure the Foundation: Implement enterprise-grade security, compliance frameworks (SOC 2, GDPR, HIPAA), and data governance that customers can trust with their most sensitive data
Master the Data Stack: Build sophisticated data infrastructure leveraging Postgres optimization, DuckDB for analytics workloads, Apache Iceberg for data lakes, and S3 for scalable storage
Stream at Speed: Design and implement real-time event streaming pipelines that capture millions of user interactions per second with zero data loss
Own the Backend: Architect APIs, authentication systems, and database layers using Supabase, including RLS policies, performance tuning, indexing strategies, and storage optimization
Build AI-Native Infrastructure: Create MCP (Model Context Protocol) servers and Streamable HTTP endpoints that enable seamless AI-to-data communication (see the sketch after this list)
Optimize Relentlessly: Monitor, profile, and optimize every layer of the stack for performance, cost-efficiency, and reliability at scale
Pioneer Standards: Establish infrastructure patterns and best practices as we grow from startup to enterprise platform
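To give a flavor of the "Streamable HTTP endpoints" item above, here is a minimal TypeScript sketch built only on Node's built-in http module. The /events route, the fetchEventBatch helper, and the row shape are illustrative assumptions for this posting, not part of Dreambase's actual stack.

```typescript
// Minimal sketch: stream query results to an AI client as newline-delimited
// JSON (NDJSON) so the client can start consuming before the full result exists.
import { createServer } from "node:http";

// Stand-in for a paginated read against the product database (hypothetical).
async function fetchEventBatch(offset: number): Promise<Array<{ id: number; name: string }>> {
  if (offset >= 300) return []; // pretend the table has 300 rows
  return Array.from({ length: 100 }, (_, i) => ({ id: offset + i, name: "page_view" }));
}

const server = createServer(async (req, res) => {
  if (req.url !== "/events") {
    res.writeHead(404);
    res.end();
    return;
  }
  // Chunked transfer: each batch is flushed as soon as it is available.
  res.writeHead(200, { "Content-Type": "application/x-ndjson" });
  for (let offset = 0; ; offset += 100) {
    const batch = await fetchEventBatch(offset);
    if (batch.length === 0) break;
    for (const row of batch) res.write(JSON.stringify(row) + "\n");
  }
  res.end();
});

server.listen(3000);
```

A real MCP server adds protocol framing and tool definitions on top of this, but the core idea is the same: incremental, streamable responses rather than one large payload.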
What we're looking for
Infrastructure Mastery: You've built and scaled backend systems that handle massive data volumes in production environments
Database Wizard: Deep expertise in Postgres optimization, indexing, query planning, and scaling strategies; bonus for DuckDB and SQLite experience
Security-First Mindset: You understand data security, compliance requirements, and how to build systems that meet enterprise standards
Supabase Expert: You know Supabase inside and out: auth, storage, RLS, Edge Functions, real-time subscriptions, and performance optimization (see the sketch at the end of this list)
API Architect: You design clean, efficient REST APIs and understand HTTP at a deep level
Data Pipeline Guru: Experience building event streaming systems and analytics pipelines that reliably process high-volume data
Cloud Native: Hands-on experience with Vercel & Cloudflare hosting, Supabase backend, AWS (especially S3), data lake technologies like Apache Iceberg, and modern cloud infrastructure
AI Infrastructure Savvy: You understand what AI applications need from their infrastructure and how to build backends that support LLM-powered features
Full-Stack Capable: Comfortable with Node.js, TypeScript, React, and Next.js to collaborate effectively across the stack
Problem-Solving Machine: You debug complex distributed systems issues and architect elegant solutions to gnarly infrastructure challenges
Ship Fast, Scale Smart: You balance rapid iteration with building foundations that won't need complete rewrites at scale
Experience with Databricks and/or…
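To make the Supabase/RLS item above concrete, here is a minimal TypeScript sketch using the public supabase-js client, where row-level security (not application code) scopes what the caller can read. The events table, its columns, and the environment variable names are illustrative assumptions, not Dreambase's actual schema.

```typescript
// Minimal sketch: with an RLS policy on "events" keyed to auth.uid(), the same
// select returns only the rows the signed-in user is allowed to see.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,      // project URL (assumed to be set in the environment)
  process.env.SUPABASE_ANON_KEY!  // public anon key; RLS does the real gatekeeping
);

async function recentEventsForCurrentUser(projectId: string) {
  // No ownership WHERE clause needed: the policy scopes the query to the caller.
  const { data, error } = await supabase
    .from("events")
    .select("id, name, created_at")
    .eq("project_id", projectId)
    .order("created_at", { ascending: false })
    .limit(100);

  if (error) throw error;
  return data;
}
```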