Principal Applied Scientist at Apple
Location
Seattle, United States of America
Compensation
$201k–$302k USD
Type
Full time
Posted
1 week ago
As a Principal Applied Scientist on the Human Centered AI team, you will be the technical engine behind our Data Quality Validation framework. This is a high-impact individual contributor role for a scientist who wants to architect and build — not just advise. You will own the data science methodology underpinning our data quality validation models, design the statistical frameworks that govern judge reliability, and work hands-on to close the loop between automated evaluation and human ground truth.
You will be the person who answers the hardest question in our stack: "Can we trust the evaluators that are evaluating our models?"
Responsibilities
Design, develop, and iterate on the reasoning agent that serves as our adjudicator, auditing Production LLM Judge outputs for hallucination, drift, and systematic bias
Develop the statistical and ML approaches that detect when Production LLM Judges diverge from ground truth, including confidence calibration, entropy-based uncertainty quantification, and out-of-distribution detection
Define the algorithms that determine what gets routed for deeper review, moving the team from random sampling to principled, risk-stratified smart sampling
Design the hierarchical weighting model and the confidence interval framework that replaces misleading point estimates with statistically rigorous ranges
Establish the standards for how immutable ground truth sets are built, versioned, and validated, including inter-annotator agreement protocols
Partner with Autograder Developers to validate new LLM Judges through our standard validation processes, ensuring they are rigorously vetted before reaching production
Serve as the scientific authority on data quality evaluation methodology for partner teams across ASE, translating complex statistical findings into clear decision-readiness signals for engineering and leadership stakeholders
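As a sketch of what the entropy-based routing described in the responsibilities above might look like: compute the Shannon entropy of a judge's score distribution and flag high-entropy (uncertain) judgments for deeper review instead of sampling at random. The score distributions, threshold, and item IDs here are illustrative assumptions, not part of the role description.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a judge's score distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route_for_review(judgments, entropy_threshold=0.9):
    """Risk-stratified routing: flag items whose score distribution is
    high-entropy (the judge is uncertain) for deeper human review."""
    flagged = []
    for item_id, probs in judgments:
        if predictive_entropy(probs) >= entropy_threshold:
            flagged.append(item_id)
    return flagged

# Illustrative judge outputs: probabilities over {fail, partial, pass}
judgments = [
    ("doc-1", [0.02, 0.03, 0.95]),  # confident pass: low entropy
    ("doc-2", [0.35, 0.30, 0.35]),  # near-uniform: high entropy
]
```

A near-uniform distribution like doc-2's signals that the judge is effectively guessing, which is exactly the case worth escalating.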
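The confidence-interval framework mentioned above, replacing point estimates with rigorous ranges, could be sketched as a percentile bootstrap over a judge's agreement with ground truth. The agreement data, resample count, and seed are illustrative assumptions.

```python
import random

def bootstrap_ci(outcomes, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a judge's agreement rate with ground
    truth, reported as a range rather than a single point estimate."""
    rng = random.Random(seed)
    n = len(outcomes)
    rates = sorted(
        sum(rng.choice(outcomes) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = rates[int((alpha / 2) * n_resamples)]
    hi = rates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# 1 = judge agreed with the ground-truth label, 0 = disagreed (illustrative)
outcomes = [1] * 88 + [0] * 12
low, high = bootstrap_ci(outcomes)
```

Reporting "88% agreement" alone hides that, on 100 items, the plausible range spans several points in each direction; the interval makes that uncertainty explicit.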
Minimum Qualifications
Master's degree in Statistics, Data Science, Machine Learning, Computer Science, or a related quantitative field
8+ years of hands-on experience in applied data science, ML research, or evaluation science
Deep expertise in uncertainty quantification and model calibration — including entropy modeling and Bayesian approaches
Demonstrated experience building disagreement detection or anomaly detection models in production ML systems
Strong command of statistical measurement frameworks — inter-rater reliability, correlation analysis, and statistical process control
Proven experience designing or contributing to Human-in-the-Loop (HITL) or active learning pipelines
Proficiency in Python for statistical modeling, ML experimentation, and data pipeline development
Exceptional ability to translate rigorous statistical methodology into clear, actionable guidance for engineering and product partners
Preferred Qualifications
PhD in Statistics, Computer Science, Machine Learning, or a related field
Experience specifically in LLM evaluation science — including autograder validation, judge-as-a-model frameworks, or RLHF data quality
Hands-on experience with large-scale reasoning models (e.g., 70B+ parameter models) used in chain-of-thought evaluation or meta-reasoning contexts
Experience defining governance gates or certification pipelines for AI systems in a CI/CD context
Familiarity with out-of-distribution detection techniques for identifying input drift in live production systems
Track record of publishing or presenting evaluation methodology work internally or externally
Apple Services Engineering (ASE) powers AI and LLM features across App Store, Music, Video, and more. As these systems increasingly rely on LLM Judges and automated evaluators to score model performance at scale, the trustworthiness of those evaluation signals becomes mission-critical. We believe that to build exceptional LLMs, you need exceptional mechanisms to validate the signals used to train and evaluate them.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant
At Apple, we believe accessibility is a fundamental human right. You’ll find that idea reflected in everything here — in our culture, our benefits and our digital tools. By welcoming as many perspectives as possible, we help you build a career where you feel like you belong.
Learn about accessibility in Apple’s workplace
Learn about reasonable accommodations for job applicants
Apple accepts applications to this posting on an ongoing basis.