at Apple
Location
Sunnyvale, United States of America
Compensation
$147k–$272k USD
Type
Full time
Posted
1 month ago
In this role, you will develop foundational machine learning algorithms for computational photography and computer vision in order to research, design, and qualify novel cameras and sensors for future Apple products, in collaboration with top engineers across Apple.
Design, train, and tune machine learning algorithms; work with camera architects to drive innovative solutions for imaging and sensing challenges; and provide data-driven feedback for future camera architectures across a wide spectrum of Apple products.
Work closely with cross-functional teams to build computational imaging and machine learning prototypes for future Apple products that enable novel photography experiences.
Build differentiable simulation and physics-informed machine learning pipelines to analyze and improve cameras and sensors.
Ground the exploration via validated simulation and metrology results to avoid machine learning domain gaps and ensure production feasibility.
BS in electrical, optical, or computer engineering/science, and a minimum of 3 years of relevant industry experience.
Experience with software coding in Python.
Experience with one of the following: machine learning/deep learning systems, computer vision, graphics, computational imaging applications.
Experience with PyTorch.
MS/PhD in computer vision, electrical, optical or computer engineering or related fields.
Experience or strong personal curiosity with designing imaging and sensing systems.
Strong independent problem-solving and communication skills.
Solid understanding of machine learning, deep learning fundamentals and optimizations; practical expertise in designing, training and improving deep neural networks.
Familiarity with cutting-edge computer vision and machine learning research trends and models.
Experience with two or more of the following: ISP (image signal processing), 3A (AE, AF, AWB), diffusion models, multi-modal, generative AI, sensor fusion, sensor physics, differentiable rendering, 3D rendering.
Do you love taking on big challenges that require exceptionally creative solutions?
The Camera & Depth Architecture organization is responsible for the research, design, and specification of cameras and sensors for iPhone and other Apple products. As part of our machine learning team, you will play a vital role in prototyping foundational machine learning tools that bridge camera hardware and software, building the flawless camera innovations and experiences for which Apple is known worldwide.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
At Apple, we believe accessibility is a fundamental human right. You’ll find that idea reflected in everything here — in our culture, our benefits and our digital tools. By welcoming as many perspectives as possible, we help you build a career where you feel like you belong.
Learn about accessibility in Apple’s workplace
Learn about reasonable accommodations for job applicants
Apple accepts applications to this posting on an ongoing basis.