Bilingual Korean Generalist Evaluator Expert
Listed on 2026-03-15
Mercor is seeking native Korean speakers with exceptional writing skills to contribute to a high-impact AI research project with a leading lab. Freelancers will author Korean/English prompt–golden answer pairs that train and evaluate advanced language models. This is a short-term, flexible opportunity for professionals who combine language mastery, strong critical thinking, and a knack for instructional clarity. Ideal for those who enjoy distilling complex concepts into well-crafted, culturally grounded Korean text while maintaining technical precision in English.
Job Details
Multilingual Prompt Design & Optimization:
Create detailed prompts in Korean and/or English with multiple constraints and instructions, ensuring natural phrasing and real-world relevance for Korean-speaking users.
Define and Document Evaluation Standards:
Establish high-level expectations for correct responses in Korean consumer contexts, and develop comprehensive rubrics that account for linguistic nuance, tone, and cultural conventions.
Model Testing and Grading (Bilingual):
Run prompts through models and assess preliminary outputs for accuracy, fluency, and cultural fit in Korean, comparing results against English where needed.
Benchmarking & Quality Assurance:
Collaborate in QA review processes to ensure prompt tasks and rubrics meet the required level of rigor, maintaining consistency and reliability across Korean-language benchmarks before they are integrated into official evaluations.
Minimum Qualifications
Native-level fluency in Korean (written) with strong reading/writing ability in English.
BS or BA from a reputable institution (completed or in progress).
Strong writing and critical thinking skills.
Ability to work independently and meet deadlines.
Significant familiarity with ChatGPT or similar tools for personal decision-making, hobbies, or general interests.
Based in South Korea (or able to reliably produce South Korea-specific, culturally accurate Korean).
Preferred Qualifications
Experience in teaching, research, editing, or academic writing.
Experience creating evaluation criteria, rubrics, or grading guidelines.
Familiarity with LLMs, prompting, or model evaluation (helpful but not required).
Application & Onboarding Process
Complete an AI-led interview (about 15 minutes).
Complete a 45-minute written assessment focused on writing and rubric creation.
If selected, you will be invited to work on the project.
More Details About This Role
Expect to contribute at least 20 hours per week.
Expect a commitment of roughly two months or more.
You’ll be working in a structured project environment with clear goals and tools.
We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.
(If this job is in fact available in your jurisdiction but you cannot proceed, you may be accessing this site through a proxy or VPN; to continue, try connecting from a different mobile device or PC.)