Senior Research Scientist, LLMs
Listed on 2026-01-23
Research/Development
Research Scientist
About Aldea
Headquartered in Miami, Aldea is a next‑generation AI company focused on voice‑based clinical and expert applications. Our flagship product, Advisor, uses proprietary AI to scale the impact of world‑class minds across personal development, finance, parenting, relationships, and more—delivering faster, more cost‑effective performance than traditional models.
The Role
We are hiring a Foundational AI Research Scientist (LLMs) to pioneer the next generation of large-language-model architectures. You will design, prototype, and empirically validate efficient transformer variants and attention mechanisms that scale to production-grade systems, exploring cutting-edge ideas in efficient sequence modeling, architecture design, and distributed training to build the foundations for Aldea's future language models. This role is ideal for researchers who combine deep theoretical grounding with hands-on systems experience.
What You'll Do
- Research and prototype sub‑quadratic attention architectures to unlock efficient scaling of large language models.
- Design and evaluate efficient attention mechanisms including state‑space models (e.g., Mamba), linear attention variants, and sparse attention patterns.
- Lead pre‑training initiatives across a range of model scales from 1B to 100B+ parameters.
- Conduct rigorous experiments measuring the efficiency, performance, and scaling characteristics of novel architectures.
- Collaborate closely with product and engineering teams to integrate models into production systems.
- Stay at the forefront of foundational research and help shape Aldea’s long‑term model roadmap.
Requirements
- Ph.D. in Computer Science, Engineering, or a related field.
- 3+ years of relevant industry experience.
- Deep understanding of modern sequence modeling architectures, including state-space models (SSMs), sparse attention mechanisms, Mixture of Experts (MoE), and linear attention variants.
- Hands‑on experience pre‑training large language models across a range of scales (1B+ parameters).
- Expertise in PyTorch, Transformers, and large‑scale deep‑learning frameworks.
- Proven ability to design and evaluate complex research experiments.
- Demonstrated research impact through patents, deployed systems, or core‑model contributions.
- Experience with distributed training frameworks and multi‑node optimization.
- Knowledge of GPU acceleration, CUDA kernels, or Triton optimization.
- Publication record in top‑tier ML venues (NeurIPS, ICML, ICLR) focused on architecture research.
- Experience with model scaling laws and efficiency‑performance tradeoffs.
- Background in hybrid architectures combining attention with alternative sequence modeling approaches.
- Familiarity with training stability techniques for large‑scale pre‑training runs.
Benefits
- Competitive base salary
- Performance‑based bonus aligned with research and model milestones
- Equity participation
- Flexible Paid Time Off
- Comprehensive health, dental, and vision coverage