
Research Engineer, Frontier Safety Loss of Control, DeepMind at Google

Location: San Francisco, CA, USA

Compensation: $174k–$252k USD

Type: Full-time

Posted: 3 weeks ago


Job description

Our team develops monitoring and control systems for potentially misaligned AI to mitigate risks of extreme harm. Currently, this primarily involves: designing, building, and testing monitors for potentially dangerous behaviours; developing and implementing response policies that preserve AI usefulness while mitigating risks; and foreseeing ways in which our control tools might be bypassed or degraded. We are looking for an engineer who can rapidly iterate to solve never-before-seen problems with creativity and thoroughness.

The Loss of Control team contributes a layer of defense in depth against the risks posed by deploying misaligned AI systems. We take the possibility of very advanced AI seriously. We don’t think control is a suitable alternative to alignment in the limit of advancing intelligence. But while AI remains effectively monitorable, we think that control is an important part of an overall strategy for building safe AI.

We are looking for a Research Engineer for the Frontier Safety Loss of Control team within the AGI Safety and Alignment team, based in either San Francisco or London.

In this role, your core responsibility is to help Google prepare for the internal use of potentially misaligned AI systems. That means building defense in depth against AI that might persistently pursue goals that users and system developers did not intend.

Artificial intelligence will be one of humanity’s most transformative inventions. At Google DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high-quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.

We are pushing the boundaries across multiple domains. Our global teams offer a range of learning opportunities and career pathways for those driven to achieve exceptional results through collective effort.

The US base salary range for this full-time position is $174,000–$252,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Identify potential harms from misaligned agents and develop strategies for detection and prevention.
  • Implement technical controls to monitor agent thoughts and behaviour, and to respond to mitigate potential harms.
  • Integrate various agent behaviour signals from across the organisation to inform response policies.
  • Conduct adversarial testing of controls.
  • Work with internal product teams to ensure that control systems are adopted across all high-risk AI surfaces.

Minimum qualifications:

  • Bachelor's degree in Computer Science, Machine Learning, or a related technical field, or equivalent practical experience.
  • 5 years of engineering experience, including software development in Python and experience with agentic AI assistants.
  • Experience working in a frontier AI research and development environment.
  • Experience working in a professional software engineering or research team environment.
  • Experience working with technical stakeholders.
  • Experience in frontier model risk.

Preferred qualifications:

  • Experience in engineering or product design for AI tools or assistants, especially those focused on ML research and development (R&D).
  • Experience with cybersecurity detection and response.
  • Experience collaborating on or leading an applied ML project.
  • Experience with Large Language Model (LLM) training and inference.
  • Knowledge of AI control; chain-of-thought and other monitoring techniques; faithfulness and monitorability; and related research areas.

Applicants in San Francisco: Qualified applicants with arrest or conviction records will be considered for employment in accordance with the San Francisco Fair Chance Ordinance for Employers and the California Fair Chance Act.