Scale is the leading data and evaluation partner for frontier AI companies, playing an integral role in advancing the science of evaluating and characterizing large language models (LLMs). Our research focuses on tackling the hardest problems in scalable oversight and the evaluation of advanced AI capabilities. We collaborate broadly across industry and academia and regularly publish our findings.
Our Research team works at the leading edge of model assessment and oversight and is shaping the next generation of evaluation science for frontier AI models. Current research directions include:
- Developing AI-assisted evaluation pipelines, where models help critique, grade, and explain outputs (e.g. RLAIF, model-judging-model).
- Advancing scalable oversight methods, such as rubric-guided evaluations, recursive oversight, and weak-to-strong generalization.
- Designing benchmarks for frontier capabilities (e.g. reasoning, coding, multi-modal, and agentic tasks), inspired by efforts like MMMU, GPQA, and SWE-Bench.
- Building evaluation frameworks for agentic systems that measure success on multi-step workflows and real-world tasks.
You will:
- Lead a team of research scientists and engineers on foundational work in evaluation and oversight.
- Drive research initiatives on frameworks and benchmarks for frontier AI models, spanning reasoning, coding, multi-modal, and agentic behaviors.
- Design and advance scalable oversight methods, leveraging model-assisted evaluation, rubric-guided judgments, and recursive oversight.
- Collaborate with leading research labs across industry and academia.
- Publish research at top-tier venues and contribute to open-source benchmarking initiatives.
- Remain deeply engaged with the research community, both understanding trends and setting them.
Ideally you’d have:
- A track record of impactful research in machine learning, especially in generative AI, evaluation, or oversight.
- Significant experience leading ML research in academia or industry.
- Strong written and verbal communication skills for cross-functional collaboration.
- Experience building and mentoring teams of research scientists and engineers.
- Publications at major ML/AI conferences (e.g. NeurIPS, ICML, ICLR, ACL, EMNLP, CVPR) and/or journals.
Our research interviews are crafted to assess candidates’ skills in practical ML prototyping and debugging, their grasp of research concepts, and their alignment with our organizational culture. We do not ask LeetCode-style questions.
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage; retirement benefits; a learning and development stipend; and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.