Research Analyst
Listed on 2026-01-27
IT/Tech
Data Analyst, Data Scientist, Data Engineer, Data Science Manager
The Research Analyst will support the execution of Tunnl’s research projects through audience building, data validation, quality control, and reporting. Reporting to a Research Director, this role is essential to ensuring our research is executed accurately, on time, and with clarity. You will collaborate closely with Strategic Research, Account Management, and Product teams to support audience building projects from kickoff to delivery.
You will support client projects from kickoff through delivery, gaining exposure to the full audience lifecycle—from audience building to activation. You will play a key role in structuring and optimizing Tunnl’s professional data table, which combines multiple datasets to create high‑value Opinion Maker audiences for our clients. You will develop and implement a standardized taxonomy to improve searchability, segmentation, and data usability.
You will also contribute to enhancing match accuracy between datasets by applying SQL and Python‑based solutions and working alongside the Product and Engineering teams to refine automated processes. This is a hands‑on role suited for someone with strong attention to detail, comfort working with data, and a desire to grow technical and analytical skills in survey research, microtargeting, and management of large datasets.
The ideal candidate is a motivated self‑starter who enjoys solving problems, works well in fast‑paced environments, takes pride in getting the details right, and is eager to build technical and analytical skills in survey research and microtargeting. Candidates must be located in the DC area.
- Audience Building
- Data Structuring & Standardization
  - Optimize the searchability and segmentation of professional audiences within Tunnl’s internal tooling
  - Document and maintain a structured data framework to ensure long‑term usability and scalability
- Data Querying & Reporting
  - Support internal teams with audience segmentation and custom data requests using Python and SQL
  - Develop automated reports to monitor data integrity, match rates, and audience performance (a brief illustration of this kind of check follows this list)
  - Assist with data validation processes to ensure high‑quality outputs for clients
- Quality Control & Data Validation
  - Manage hand‑matching workflows to ensure that high‑profile matches are correct
  - Conduct first‑pass QC on survey instruments, fielding logic, and audience assignments
  - Validate data files, ensuring accurate application of weights, quotas, and filtering
  - Identify inconsistencies or data issues and propose fixes before manager review
- Data Import/Export & ETL Support
  - Support importing and exporting data between internal systems and external platforms (e.g., LiveRamp, Qualtrics, Dropbox)
  - Assist with audience file formatting and record tagging
  - Conduct QA on incoming and outgoing datasets, including row‑level spot checks and file integrity reviews (see the file‑QA sketch after this list)
  - Partner with senior team members to troubleshoot ETL issues and improve import/export processes
- Process Documentation & Continuous Improvement
  - Follow SOPs for QC, audience tagging, survey builds, and delivery
  - Document recurring issues and suggest workflow improvements
  - Maintain internal Dropbox folders and tracking tools to support delivery consistency
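To give a flavor of the automated reporting described under Data Querying & Reporting, here is a minimal sketch of a match‑rate and data‑integrity check in Python. Every file and column name (audience.csv, reference_ids.csv, person_id, and so on) is a hypothetical placeholder, not Tunnl’s actual schema.

```python
# Minimal sketch of an automated match-rate / data-integrity report.
# All file and column names are hypothetical placeholders.
import pandas as pd

REQUIRED_COLUMNS = ["person_id", "first_name", "last_name", "employer"]

def integrity_report(audience: pd.DataFrame, reference: pd.DataFrame) -> dict:
    """Summarize completeness and match rate of an audience file
    against a reference dataset, keyed on a shared person_id."""
    return {
        "rows": len(audience),
        "duplicate_ids": int(audience["person_id"].duplicated().sum()),
        # Share of non-null values per required column (completeness).
        "completeness": {
            col: float(audience[col].notna().mean()) for col in REQUIRED_COLUMNS
        },
        # Share of audience records that resolve to the reference set.
        "match_rate": float(audience["person_id"].isin(reference["person_id"]).mean()),
    }

if __name__ == "__main__":
    audience = pd.read_csv("audience.csv")        # hypothetical export
    reference = pd.read_csv("reference_ids.csv")  # hypothetical match keys
    print(integrity_report(audience, reference))
```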
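Similarly, the row‑level spot checks and file integrity reviews under ETL Support might look something like the following sketch; the expected header, CSV format, and sample size are illustrative assumptions.

```python
# Sketch of a row-level spot check / file-integrity review for an
# outgoing audience file. Header and sample size are illustrative.
import csv
import random

EXPECTED_HEADER = ["person_id", "segment", "match_source"]  # hypothetical

def qa_file(path: str, sample_size: int = 25) -> list[str]:
    issues: list[str] = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        if header != EXPECTED_HEADER:
            issues.append(f"unexpected header: {header}")
        rows = list(reader)
    if not rows:
        issues.append("file contains no data rows")
        return issues
    # File integrity: every row should have the full column count.
    for i, row in enumerate(rows, start=2):  # line 1 is the header
        if len(row) != len(EXPECTED_HEADER):
            issues.append(f"line {i}: expected {len(EXPECTED_HEADER)} fields, got {len(row)}")
    # Row-level spot check: sample rows for a closer review.
    for row in random.sample(rows, min(sample_size, len(rows))):
        if not row or not row[0].strip():
            issues.append(f"blank person_id in sampled row: {row}")
    return issues

if __name__ == "__main__":
    for issue in qa_file("outgoing_audience.csv"):
        print(issue)
```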
- 0–3 years of experience in data analysis, data engineering, database management, survey research, market research, analytics, or equivalent (internships or academic experience considered)
- Working knowledge of SQL with the ability to write queries to explore, validate, and QA data (complex optimization skills can be developed over time)
- Proficiency with Python for data manipulation and automation
- Strong attention to detail and communication skills with demonstrated ability to catch and resolve errors
- Proactive team player who wants to join a high-performing team and is curious about research
- Experience working with large datasets and entity resolution techniques (a toy matching sketch follows this list)
- Ability to structure unorganized data into a usable, scalable taxonomy
- Experience with AWS services, particularly S3 for data storage and retrieval
- Familiarity with Databricks and Spark for large‑scale data processing
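As a rough illustration of the entity resolution techniques mentioned above, the toy sketch below normalizes person names and pairs records using a fuzzy string ratio from the Python standard library. The names, threshold, and single‑field matching are invented for illustration; production matching would use blocking and richer features.

```python
# Toy entity-resolution sketch: normalize person names, then use a
# fuzzy string ratio to pair records across two datasets. Thresholds
# and record contents are illustrative, not a production approach.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def best_match(name: str, candidates: list[str], threshold: float = 0.85):
    """Return the candidate most similar to `name`, or None if no
    candidate clears the similarity threshold."""
    if not candidates:
        return None
    target = normalize(name)
    scored = [
        (SequenceMatcher(None, target, normalize(c)).ratio(), c) for c in candidates
    ]
    score, match = max(scored)
    return match if score >= threshold else None

if __name__ == "__main__":
    reference = ["Dana J. Whitfield", "Robert Ellison", "Priya Raman"]
    print(best_match("DANA   WHITFIELD", reference))  # -> Dana J. Whitfield
```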