at Apple
Location
Cupertino, United States of America
Compensation
$181k–$318k USD
Type
full time
Posted
1 month ago
In this role, you'll architect and build our next-generation distributed ML infrastructure, tackling the complex challenge of orchestrating massive network models across server clusters to power Apple Intelligence at unprecedented scale. You will design parallelization strategies that split models across many GPUs, optimizing every layer of the stack, from low-level memory access patterns to high-level distributed algorithms, to maximize hardware utilization while minimizing latency for real-time user experiences. Working at the intersection of cutting-edge ML systems and hardware acceleration, you'll collaborate directly with silicon architects to influence future GPU designs based on your deep understanding of inference workload characteristics, while also building the production systems that will serve billions of requests daily.
This is a hands-on technical leadership position where you'll not only architect these systems but also dive deep into performance profiling, implement novel optimization techniques, and solve unprecedented scaling challenges as you help define the future of AI experiences delivered through Apple's secure cloud infrastructure.
Design and implement tensor/data/expert parallelism strategies for large language model inference across distributed server cluster environments
Drive hardware and software roadmap decisions for ML acceleration
Design architectures that achieve peak compute utilization and optimal memory throughput
Develop and optimize distributed inference systems with focus on latency, throughput, and resource efficiency across multiple nodes
Architect scalable ML serving infrastructure supporting dynamic model sharding, load balancing, and fault tolerance
Collaborate with hardware teams on next-generation accelerator requirements and software teams on framework integration
Lead performance analysis and optimization of ML workloads, identifying bottlenecks in compute, memory, and network subsystems
Drive adoption of advanced parallelization techniques, including pipeline parallelism, expert parallelism, and other emerging approaches
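The tensor-parallelism strategy the responsibilities above refer to can be illustrated with a toy sketch. The snippet below simulates Megatron-style column-parallel partitioning of a linear layer: the weight matrix is split column-wise across "devices" (here, just array shards), each shard computes a slice of the output, and a concatenation stands in for the all-gather. All names and shapes are illustrative assumptions, not Apple's internal APIs.

```python
import numpy as np

# Toy sketch of column-parallel tensor parallelism for one linear layer.
rng = np.random.default_rng(0)
batch, d_in, d_out, n_dev = 4, 8, 16, 2

x = rng.standard_normal((batch, d_in))   # full activations, replicated
w = rng.standard_normal((d_in, d_out))   # full weight, to be sharded

# Partition W column-wise: each "device" holds d_out // n_dev columns.
shards = np.split(w, n_dev, axis=1)

# Each device multiplies the full activation by its local weight shard...
partial_outputs = [x @ w_i for w_i in shards]

# ...and an all-gather (concatenation here) reassembles the full output.
y_parallel = np.concatenate(partial_outputs, axis=1)

# The sharded computation matches the single-device matmul exactly.
assert np.allclose(y_parallel, x @ w)
```

In a real deployment each shard lives on a separate GPU and the concatenation becomes a collective all-gather over NVLink or the cluster fabric; the arithmetic identity shown here is what makes the partitioning correct.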
10+ years of experience in GPU programming (CUDA, ROCm) and high-performance computing, successfully optimizing large-scale parallel workloads.
Strong experience with inter-node communication technologies (InfiniBand, RDMA, NCCL) in the context of ML training/inference
Excellent systems programming skills in C/C++
Deep understanding of distributed systems and parallel computing architectures
Understanding of how tensor frameworks (PyTorch, JAX, TensorFlow) are used in distributed training/inference
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical field
Familiarity with the model development lifecycle, from trained model to large-scale production inference deployment
Proven track record in ML infrastructure at scale
Proficiency in Python is a plus
PhD in Computer Science, Engineering, Mathematics, or a related technical field
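The inter-node communication experience called for above (InfiniBand, RDMA, NCCL) centers on collectives like all-reduce. As a rough illustration, the sketch below simulates the ring all-reduce schedule that NCCL commonly uses, with each "rank" as a plain list entry rather than a real GPU; real systems move the chunks over NVLink or InfiniBand instead of copying arrays. This is a single-process teaching sketch, not production code.

```python
import numpy as np

def ring_allreduce(bufs):
    """Sum equally shaped 1-D buffers across all ranks, ring style."""
    n = len(bufs)
    # Each rank splits its local buffer into n chunks.
    chunks = [list(np.array_split(np.asarray(b, dtype=float), n))
              for b in bufs]

    # Phase 1, reduce-scatter: in step s, rank r passes chunk (r - s) % n
    # to its right neighbour, which accumulates it. After n - 1 steps,
    # rank r holds the fully reduced chunk (r + 1) % n.
    for s in range(n - 1):
        sent = [chunks[r][(r - s) % n].copy() for r in range(n)]
        for r in range(n):
            chunks[(r + 1) % n][(r - s) % n] += sent[r]

    # Phase 2, all-gather: the reduced chunks circulate around the ring
    # until every rank holds the complete summed buffer.
    for s in range(n - 1):
        sent = [chunks[r][(r + 1 - s) % n].copy() for r in range(n)]
        for r in range(n):
            chunks[(r + 1) % n][(r + 1 - s) % n] = sent[r]

    return [np.concatenate(c) for c in chunks]

# 4 ranks, each holding an 8-element buffer scaled by (rank + 1).
ranks = [np.arange(8.0) * (r + 1) for r in range(4)]
reduced = ring_allreduce(ranks)
expected = np.arange(8.0) * 10  # 1 + 2 + 3 + 4 = 10
assert all(np.allclose(buf, expected) for buf in reduced)
```

The ring schedule is bandwidth-optimal: each rank sends and receives only 2 * (n - 1) / n of the buffer regardless of ring size, which is why it scales well across large clusters.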
The Apple Silicon GPU SW architecture team within the Media, Graphics & Compute Technologies group is seeking a senior/principal engineer to lead server-side ML acceleration and multi-node distribution initiatives. You will help define and shape our future GPU compute infrastructure on Private Cloud Compute that enables Apple Intelligence.
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $181,100 and $318,400, and your base pay will depend on your skills, qualifications, experience, and location.

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
At Apple, we believe accessibility is a fundamental human right. You’ll find that idea reflected in everything here — in our culture, our benefits and our digital tools. By welcoming as many perspectives as possible, we help you build a career where you feel like you belong.
Learn about accessibility in Apple’s workplace
Learn about reasonable accommodations for job applicants
Apple accepts applications to this posting on an ongoing basis.