Remote Generalist - English & Japanese - AI Trainer
Fort Wayne, Allen County, Indiana, 46804, USA
Listed on 2026-01-12
IT/Tech
Data Scientist, Data Analyst, Technical Writer
Location: Geography restricted to Japan, USA
Type: Full-time or Part-time Contract Work
Languages: English & Japanese
Why This Role Exists
Mercor partners with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. These systems are used across a wide range of everyday and professional scenarios, and their effectiveness depends on how clearly, accurately, and helpfully they respond to real user questions. This project focuses on evaluating and improving general chat behavior in large language models (LLMs).
You will assess model-generated responses across diverse topics, provide high-quality human feedback, and help ensure AI systems communicate in ways that are accurate, well-reasoned, and aligned with human expectations.
- Evaluate how effectively LLM-generated responses answer user queries.
- Conduct fact‑checking using trusted public sources and external tools.
- Generate high‑quality human evaluation data by annotating response strengths, areas for improvement, and factual inaccuracies.
- Assess reasoning quality, clarity, tone, and completeness of responses.
- Ensure model responses align with expected conversational behavior and system guidelines.
- Apply consistent annotations by following clear taxonomies, benchmarks, and detailed evaluation guidelines.
- Hold a Bachelor’s degree.
- Native or primary fluency in Japanese (ILR 5 / CEFR C2).
- Significant experience using large language models (LLMs) and a clear understanding of how and why people use them.
- Excellent writing skills and the ability to articulate nuanced feedback clearly.
- Strong attention to detail, with a consistent eye for subtle issues others may overlook.
- Adaptable and comfortable moving across topics, domains, and customer requirements.
- Background or experience in domains requiring structured analytical thinking (e.g., research, policy, analytics, linguistics, engineering).
- Excellent college‑level mathematics skills.
- Prior experience with RLHF, model evaluation, or data annotation work.
- Experience writing or editing high‑quality written content.
- Experience comparing multiple outputs and making fine‑grained qualitative judgments.
- Familiarity with evaluation rubrics, benchmarks, or quality scoring systems.
- Identify factual inaccuracies, reasoning errors, and communication gaps in model responses.
- Produce clear, consistent, and reproducible evaluation artifacts.
- Your feedback leads to measurable improvements in response quality and user experience.
- Mercor customers trust the quality of their AI systems because your evaluations surface issues before public release.
At Mercor, you’ll work at the frontier of human‑in‑the‑loop AI development, directly shaping how advanced language models behave in the real world. This role offers flexible, remote contract work and the opportunity to contribute meaningfully to AI systems used by millions of people. Contract rates are competitive and aligned with the level of expertise required and scope of work.