To apply to Nvidia, go to nvidia.com/careers (Workday-powered) or browse applinity's Nvidia company page for filtered views. Nvidia runs a standard 4–8-week interview loop (recruiter screen → technical phone screen → virtual onsite of 4–5 rounds → manager call → offer). The AI wave that began in 2023 has made Nvidia one of the largest single hiring sources in Silicon Valley; its open-role count is consistently in the top 5 of our catalog. Domain depth (silicon, CUDA, ML, robotics) matters more than at most peers.
Where to apply
Nvidia uses Workday — the same enterprise ATS that powers Salesforce, Oracle, Cisco, and most Fortune-500 careers sites. Practical implications:
- You'll create a Workday-hosted Nvidia profile. Your résumé will be parsed, but you'll still need to enter education, work history, and authorization status into structured fields by hand.
- The Workday session expires aggressively. Save progress every few minutes or you'll lose the form.
- Apply to a tight set of 3–5 roles that match your background. The recruiter team filters Workday's "applied to 25 reqs" candidates the same way Google's does.
What Nvidia looks for, by track
Silicon engineering (ASIC, RTL, verification, physical design)
Nvidia's silicon roles are the deepest hardware engineering jobs in the catalog. Interviewers expect:
- Strong fundamentals: digital logic, computer architecture, RTL (Verilog / SystemVerilog), verification (UVM), static timing analysis (STA), and physical design (PD).
- Real GPU / SoC project experience or extensive coursework (Stanford EE282, CMU 18-447, equivalent).
- For senior roles: tape-out experience, debug stories from silicon bring-up, and post-silicon validation depth.
CUDA / driver / library / runtime engineering
The teams that ship CUDA itself, cuDNN, NCCL, TensorRT, and the GPU driver expect deep low-level systems engineering: C++, memory models, concurrency, profiling. CUDA experience is essentially required.
ML / applied AI engineer
Nvidia's ML roles span training infrastructure, model deployment, applied research, and product ML (e.g. the autonomous driving stack, Omniverse, healthcare AI). The interview bar sits mid-range among major ML employers: coding rounds emphasize practical implementation, ML rounds test fundamentals plus applied judgment, and system design covers training infrastructure and inference serving.
Research scientist
Nvidia Research (and the affiliated NVAITC, Toronto AI Lab) values publication record and conference acceptances. The interview loop is research-heavy: paper walkthrough, technical depth in your area, live whiteboard math, behavioral. PhD or equivalent track record usually required.
Robotics / autonomous driving
Drive (autonomous driving) and Isaac (robotics) teams hire for perception, planning, control, simulation, and SLAM. C++ expertise and real-time-systems experience are emphasized. Research and product engineering paths exist side by side.
The Nvidia interview loop
- Recruiter screen (30 min). Background, motivation, comp expectations, location, work authorization. Be specific about what teams interest you.
- Hiring manager screen (45–60 min). Some teams schedule the HM call before the technical loop, some after. Treat it as both technical and behavioral — the HM is your future manager and gates the loop.
- Technical phone screen (60 min). Coding round (CoderPad or shared doc). For ML roles, often a mixed coding + ML fundamentals round.
- Virtual onsite (4–5 rounds). Mix of coding, system / hardware design (track-dependent), domain depth, and behavioral. Typically half a day, virtual; some Santa Clara loops are still in-person.
- Offer + negotiation. Nvidia's RSU packages are the most negotiable element. Base is more rigid (especially after California pay-transparency disclosures). Push on RSU and signing.
Common reasons applications stall
- Generic CUDA / GPU claims on résumé without depth. Nvidia interviewers will probe past "used PyTorch on a GPU" into memory hierarchy, kernel launch overhead, profiling, and warp-level concurrency. Don't overstate.
- Workday application abandoned mid-form. Nvidia recruiters can see partial applications; reaching out to follow up on an incomplete profile signals lack of follow-through.
- Cooldown after a no-pass. Like Google, Nvidia generally requires 12 months between full loops. Internal referrals can sometimes accelerate this.
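To give a flavor of the depth behind "warp-level concurrency" — the kind of thing an interviewer may ask you to sketch if your résumé claims GPU experience — here is a minimal, illustrative CUDA example of a warp-level sum reduction. It is a sketch, not an Nvidia interview question; the kernel and function names are our own.

```cuda
#include <cstdio>

// Warp-level sum reduction: all 32 threads (lanes) of a single warp each
// contribute one value. __shfl_down_sync exchanges registers directly
// between lanes, so no shared memory or __syncthreads() is needed.
__global__ void warpReduceSum(const float* in, float* out) {
    float v = in[threadIdx.x];
    // Halve the active distance each step; after 5 steps lane 0 holds the sum.
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);
    if (threadIdx.x == 0) *out = v;
}

int main() {
    float h_in[32], h_out, *d_in, *d_out;
    for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;  // expected sum: 32.0
    cudaMalloc(&d_in, 32 * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, 32 * sizeof(float), cudaMemcpyHostToDevice);
    warpReduceSum<<<1, 32>>>(d_in, d_out);        // one block, one warp
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("warp sum = %f\n", h_out);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

If you can explain why this avoids shared memory, what the `0xffffffff` lane mask does, and how the pattern generalizes past one warp, you are in the territory interviewers probe; if not, keep the GPU claims on your résumé modest.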
Résumé tips specific to Nvidia
- Lead with the GPU / silicon / ML angle that fits the role. A driver-engineering hire should see CUDA in your first bullet; an ML-engineer hire should see model deployment + scale.
- Quantify training and inference numbers. "Trained model with 30B params across 256 H100s" lands; "worked with large language models" doesn't.
- List specific hardware experience. Names matter: A100, H100, B100, GB200, GH200, Hopper, Blackwell, NVLink, NVSwitch, Grace, BlueField. If you've worked with them, say so.
- Run an ATS check. Workday's parser is the strictest in the major-ATS cohort. Score yours on the free applinity ATS scorer before submitting.
Frequently asked questions
What ATS does Nvidia use?
Nvidia uses Workday for most engineering and corporate roles, with some research postings on internal systems. Workday means a multi-step application form, account creation, and frequent re-entry of work history — plan to spend 20–30 minutes per application.
Does Nvidia sponsor H-1B?
Yes — Nvidia is a top-10 H-1B sponsor in the United States. Public H-1B filing data shows thousands of approved petitions annually across software, silicon, ML, and applied research. International candidates should confirm cap-subject vs transfer status with their recruiter.
Is Nvidia still hiring engineers?
Yes — aggressively. As of 2026, Nvidia's catalog footprint on applinity is one of the largest of any single employer. Open roles span silicon engineering (ASIC, RTL, verification), software (CUDA, drivers, libraries), ML / applied AI, robotics, autonomous driving, and the full breadth of corporate functions.
What's the comp band at Nvidia?
Nvidia's RSU-heavy comp packages have appreciated significantly with the stock run-up since 2023. Most US engineering postings now disclose salary ranges (California pay-transparency law). Stock vesting is on a 4-year schedule with a 1-year cliff for new hires.
Do I need to know CUDA to work at Nvidia?
Depends entirely on the role. CUDA library, runtime, and driver teams expect deep CUDA knowledge. Higher-level ML, robotics, and corporate engineering roles don't require CUDA expertise — many engineers join without it and pick it up on the job.
How long does the Nvidia interview process take?
The typical timeline is 4–8 weeks, varying by team. That's faster than Google (no hiring committee or team-matching phase) but slower than Stripe or Anthropic. Most loops run recruiter screen → technical phone screen → 4–5-round virtual onsite → manager call → offer.
What's the difference between Nvidia's research and product engineering tracks?
Research (Nvidia Research, NVAITC, Nvidia Robotics) emphasizes publication record, conference acceptance (NeurIPS, ICML, ICCV, CVPR), and PhD credentials. Product engineering (driver, library, runtime, infrastructure) values shipping experience and systems engineering depth more than research output.
Can I work remotely at Nvidia?
Nvidia runs a hybrid model anchored on Santa Clara, with smaller engineering hubs in Austin, Durham, and several international cities. Fully-remote US roles exist but are a minority — most postings specify a hub. Use the remote-OK filter on applinity to surface only the explicitly-remote listings.