Software Engineer, GDC LLM Serving and GPU Performance
at Google
Location
Sunnyvale, CA, USA
Compensation
$207k–$300k USD
Type
Full-time
Posted
2 days ago
Job description
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.
Want to shape the future of how Google serves its most advanced Large Language Models? Join the GDC AI Models and Performance team and work on AI infrastructure.
Imagine re-inventing LLM serving by contributing to our disaggregated serving initiatives – separating compute and memory to unlock new levels of performance and flexibility. You could be optimizing Key-Value (KV) cache transfer mechanisms, designing dynamic resource allocation strategies, or building the next generation of performance analysis tools to dissect and enhance GPU utilization. This is a unique opportunity to go deep, from system-level design down to performance profiling, ensuring Google's LLMs run faster and more cost-effectively than ever before.
Responsibilities
- Design, develop, and implement enhancements to the LLM serving stack, focusing on performance, scalability, and resource efficiency (e.g., on systems like Wiz, Servomatic).
- Contribute to the design and implementation of advanced serving architectures, including disaggregated serving.
- Build and maintain infrastructure and tooling for in-depth performance analysis, profiling, and benchmarking of LLM models on GPU accelerators.
- Identify and address performance bottlenecks across the stack, working closely with teams providing core GPU libraries and kernels.
- Collaborate with research, engineering, and SRE teams to optimize and deploy LLMs in production.
Minimum qualifications:
- Bachelor’s degree or equivalent practical experience.
- 8 years of experience in software development.
- 5 years of experience testing and launching software products, and 3 years of experience with software design and architecture.
- 5 years of experience with one or more of the following: Speech/audio (e.g., technology duplicating and responding to the human voice), reinforcement learning (e.g., sequential decision making), ML infrastructure, or specialization in another ML field.
- 5 years of experience with ML design and ML infrastructure (e.g., model deployment, model evaluation, data processing, debugging, fine tuning).
Preferred qualifications:
- Master’s degree or PhD in Engineering, Computer Science, or a related technical field.
- 8 years of experience with data structures and algorithms.
- 3 years of experience in a technical leadership role leading project teams and setting technical direction.
- 3 years of experience working in a complex, matrixed organization involving cross-functional, or cross-business projects.