Product Policy Lead, Generative AI
at Google
Location
New York, NY, USA; Sunnyvale, CA, USA
Compensation
$165k–$239k USD
Type
Full-time
Posted
2 weeks ago
Job description
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
At Google, we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
Responsibilities
- Analyze issues facing products with Generative AI capabilities and make policy recommendations on how to address them.
- Collaborate with Product Managers, Trust and Safety teams, and Engineers to influence product decisions and prioritization, and improve user experience for multimodal Generative AI.
- Drive research and collaboration in the multimodal Generative AI space both within Google and with key opinion formers through our Government Affairs and Public Policy team to set industry standards.
- Provide clear and timely updates to executives and executive stakeholders, both within Trust and Safety and across Google, on issues related to multimodal Generative AI and base model policies. Partner closely with DeepMind to draft model-level policies.
- Work with sensitive content or situations; may be exposed to graphic, controversial, or upsetting topics or content.
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 7 years of experience in a policy, legal, trust and safety, or technology environment.
- Experience working on AI-related policy issues.
Preferred qualifications:
- JD, MBA, or Master’s degree.
- Experience in development, implementation, and maintenance of policy.
- Experience working on content issues and potentially harmful or upsetting content, including expertise in the technology sector and key policy issues impacting AI safety and content moderation online.
- Ability to translate complex issues into simple and clear language, collaborate with cross-functional stakeholders, and navigate organizational boundaries.
- Ability to communicate effectively in person, in public settings, and in writing, and identify/gather insights and communicate complex technology policy issues.
- Excellent problem-solving and critical thinking skills with attention to detail.