Distinguished Engineer, AI Cloud Storage at NVIDIA
Location: Santa Clara, CA, US
Compensation: $320k–$489k USD
Type: Full time
Posted: Yesterday
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
AI Cloud Data Storage
The NVIDIA DGXC Storage org supports some of the fastest AI training and inference workloads in the world. Every GPU cycle depends on a storage platform built to keep tens of thousands of accelerators continuously busy. The platform holds exabytes of data securely and powers the largest AI workloads worldwide across cloud, neocloud, and on-prem deployments. As accelerated computing grows, storage makes the difference between effective GPU use and wasted potential, and between launching a frontier model on time and missing the deadline by months.

We are looking for a Distinguished Engineer to lead NVIDIA's storage strategy for AI Cloud across the Neocloud Provider (NCP) and Cloud Service Provider (CSP) ecosystem. You will direct the architecture of high-performance parallel file systems, object stores, and block storage at exabyte scale. You will stay hands-on, collaborating with engineers, SREs, partners, and storage vendors, and you will use NVIDIA's AI tools to increase your own productivity and that of everyone you work with. This is a rare opportunity to define the storage foundation of the AI era at the company that pioneered accelerated computing.
What you'll be doing:
Lead the multi-year technical plan for AI Cloud Storage expansion across NCPs — determine the reference architecture, capabilities, performance and durability SLOs, qualification methodology, and roadmap for the high-performance file, object, and block storage that each NCP must offer to qualify for NVIDIA GPU allocation.
Serve as the chief storage architect with deep hands-on involvement. Lead key reviews of storage builds and investigate root causes of complex production problems. Develop prototype reference implementations to minimize risks in new initiatives. Make final technical decisions on NCP storage deliveries using measurable SLOs. Apply AI tools heavily to amplify your technical influence throughout the program.
Define the standard for "production-ready" in NCP storage, including durability and availability SLOs measured in 9s (a back-of-the-envelope reading of those nines is sketched after this list), along with sustained efficiency per TiB, observability, blast-radius containment, and low operational toil. Influence GPU delivery gating so that AI Cloud accepts GPU capacity only after the storage-focused ancillary services have been verified.
Develop and guide the architectural direction by working closely with collaborators in training, inference, and accelerated-computing product lines. Coordinate with site-reliability, operations, networking, and security colleagues. Work together with external cloud providers, neocloud operators, and storage vendors to align on a common architecture.
Develop the open-source path forward for AI storage. Establish and guide an open-source strategy that broadens the AI storage ecosystem. Advocate for a GitHub-first, security-first stance. Engage deeply with upstream open-source communities. Formalize the APIs, SDKs, and protocols allowing partners and the industry to build, integrate, and create with NVIDIA at the AI storage level.
Lead an engineering culture centered on AI tools: use modern AI coding and agentic tools in your daily work, show what 10× engineering means at NVIDIA, and share patterns, prompts, and evaluation harnesses across the storage organization.
Partner with peer Distinguished and Principal storage architects across the organization to tackle the most difficult, long-term technical challenges. Make automation the only acceptable solution for infrastructure management tasks like live software upgrades, node and drive replacements, capacity rebalancing, cross-DC data movement, and dataset lifecycle. Establish root-cause analysis and corrective action rigor on every major incident. Design the storage layer for workloads spanning the next several GPU generations, including disaggregated inference with storage-backed KV caching, large-scale write-once-read-many inference patterns, exabyte regional object stores, and cross-DC dataset versioning and copy management.
Mentor and develop senior, principal, and distinguished engineers across the storage organization and nearby business units. Raise the technical bar broadly. Represent NVIDIA externally in standards bodies, open-source communities, customer briefings, and industry forums (FAST, SC, OCP, SNIA, Linux Storage Summit).
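As a rough illustration of what "SLOs measured in 9s" means in practice (see the production-readiness item above), the following Python sketch converts availability and durability targets into allowed downtime and expected annual object loss. The targets shown are hypothetical examples for intuition only, not NVIDIA or NCP commitments.

```python
# Illustrative only: rough arithmetic behind "SLOs measured in 9s".
# The example targets below are hypothetical, not actual NVIDIA/NCP SLOs.

SECONDS_PER_YEAR = 365 * 24 * 3600

def allowed_downtime_seconds(availability: float) -> float:
    """Seconds of downtime per year permitted by an availability SLO."""
    return (1.0 - availability) * SECONDS_PER_YEAR

def expected_annual_loss(durability: float, object_count: int) -> float:
    """Expected number of objects lost per year under an annual durability SLO."""
    return (1.0 - durability) * object_count

if __name__ == "__main__":
    for label, avail in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
        minutes = allowed_downtime_seconds(avail) / 60
        print(f"{label}: ~{minutes:.1f} minutes of downtime allowed per year")
    # e.g. eleven 9s of annual durability across ten billion objects:
    print(f"11 nines durability, 1e10 objects: "
          f"~{expected_annual_loss(1 - 1e-11, 10_000_000_000):.2f} objects lost per year")
```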
What we need to see:
BS, MS, or PhD in Computer Science, Electrical Engineering, or a related field — or equivalent experience.
18+ years of hands-on engineering experience in storage technology, including deep experience with a high-performance parallel file system (Lustre, GPFS / Spectrum Scale, WEKA, VAST, BeeGFS, DAOS, or equivalent) at multi-petabyte scale, plus broad expertise in object storage (S3 / Swift-class) and block storage (NVMe-oF, NVMesh-class, iSCSI).
A track record of crafting and managing storage platforms at exabyte scale for performance-critical workloads — AI training, HPC, video, or hyperscale data lakes — including direct responsibility for durability, availability, and performance SLOs measured in 9s.
Demonstrated ability to set technical strategy across business units and partner organizations. You have driven multi-year storage architectures adopted by multiple teams, vendors, or customers, and you can point to measurable outcomes such as GPU utilization lift, $/PB reduction, incidents eliminated, and time-to-bring-up compressed.
You are 100% hands-on in engineering. You write and review production code yourself. When a bug requires it, you read Lustre, NFS, kernel, NVMe-oF, or SPDK source code. You also run scale tests or recovery drills personally instead of delegating.
Strong proficiency in at least one systems language (C, C++, Rust, or Go) and proficiency in Python; comfortable in the Linux kernel storage and networking stacks (block layer, RDMA / RoCE / InfiniBand, NVMe, page cache, VFS, multipath).
Heavy daily use of modern AI coding and agentic tools, with specific examples of how they accelerated your design, coding, debugging, validation, and operations work, and a point of view on where these tools are headed.
Excellent written and verbal communication. You can write a one-pager that aligns a VP. You can also write a six-pager that aligns an entire org. You can explain a deep technical trade-off to an SRE, a vendor CTO, and an internal customer in the same week.
Comfort operating in a 24/7 production environment where storage incidents directly impact GPU revenue, with a security-first approach baked into every build.
Ways to stand out from the crowd:
Proven background in designing or operating storage for AI training or inference at 10k+ GPU scale, with demonstrated improvements in GPU utilization or reductions in I/O bottlenecks.
Open-source contributions or maintainership in Lustre, NFS, SPDK, NVMe / NVMe-oF, CSI, Ceph, MinIO, RocksDB, or related projects.
Built or led a disaggregated-inference or Inference-Time-Compute storage architecture — KV caching to fast in-cluster or GPU-adjacent storage, WORM at scale, storage-aware scheduling, or database-integrated inference; a minimal sketch of the storage-backed KV-cache pattern follows this list.
Public technical contributions — patents, peer-reviewed papers (FAST, SOSP, NSDI, OSDI, ATC), keynote talks, or RFCs — that demonstrate expertise and leadership in storage for AI infrastructure.
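For the KV-caching pattern mentioned above (and in the disaggregated-inference responsibility earlier), here is a minimal, illustrative Python sketch of a storage-backed, write-once KV-cache store keyed by model and prompt prefix. All class and method names, and the file-based backend, are hypothetical stand-ins for a GPU-adjacent or in-cluster storage tier; this is not an NVIDIA API.

```python
# Minimal sketch of a storage-backed KV cache for disaggregated inference.
# Hypothetical names and a local-filesystem backend stand in for a fast
# in-cluster or GPU-adjacent tier; this illustrates the pattern only.
import hashlib
from pathlib import Path

class KVCacheStore:
    """Write-once-read-many store for serialized KV-cache blocks."""

    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

    @staticmethod
    def key(model_id: str, token_prefix: list[int]) -> str:
        # Key a block by model and prompt prefix so identical prefixes
        # across requests reuse the same cached attention state.
        return hashlib.sha256(f"{model_id}:{token_prefix}".encode()).hexdigest()

    def put(self, key: str, kv_bytes: bytes) -> None:
        path = self.root / key
        if not path.exists():           # WORM: first writer wins, never overwrite
            path.write_bytes(kv_bytes)

    def get(self, key: str) -> bytes | None:
        path = self.root / key
        return path.read_bytes() if path.exists() else None

# Usage: an inference node checks the cache before recomputing prefill.
store = KVCacheStore(Path("/tmp/kv-cache"))
k = KVCacheStore.key("llm-70b", [1, 2, 3, 4])
if store.get(k) is None:
    store.put(k, b"...serialized KV tensors...")   # placeholder payload
```

In a real deployment the payload would be serialized attention key/value tensors and the backend a fast in-cluster tier with spill to object storage; the sketch only shows the interface shape.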
NVIDIA led the way in accelerated computing. Today, our AI infrastructure drives intelligence around the globe and is transforming industries worldwide. The AI Cloud Storage group builds the foundation that keeps the world's largest GPU fleet productive. Every model trained, every inference served, and every checkpoint saved passes through systems we design, build, and operate.
Widely considered to be one of the technology world’s most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer you and your family: www.nvidiabenefits.com/
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 320,000 USD - 488,750 USD. You will also be eligible for equity and benefits.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.