Research Engineer, AI Safety and Alignment

Full Time
San Francisco, CA, USA

Office locations: also open to Mountain View and London.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

Our team is responsible for enabling AI systems to reliably work as intended, including identifying potential risks from current and future AI systems, and conducting technical research to mitigate them. As a Research Engineer, you will design, implement, and empirically validate approaches to alignment and risk mitigation, and integrate successful approaches into our best AI systems.

About Us

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, interpretability, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems.

Research Engineers work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind.

The Role

Key responsibilities:

  • Identify and investigate possible failure modes for foundation models, ranging from sociotechnical harms (e.g. fairness, misinformation) to misuse (e.g. weapons development, criminal activity) to loss of control (e.g. high-stakes failures, rogue AI).
  • Develop and implement technical approaches to mitigate these risks, such as benchmarking and evaluations, dataset design, scalable oversight, interpretability, adversarial robustness, monitoring, and more, in coordination with the team’s broader technical agenda.
  • Build infrastructure that accelerates research velocity by enabling fast experimentation on foundation models, and easy logging and analysis of experimental results.
  • Collaborate with other internal teams to ensure that Google DeepMind AI systems and products (e.g. Gemini) are informed by and adhere to the most advanced safety research and protocols.

About You

  • You have at least a year of experience working with deep learning and/or foundation models (whether from industry, academia, coursework, or personal projects).
  • Your knowledge of mathematics, statistics and machine learning concepts enables you to understand research papers in the field.
  • You are adept at building codebases that support machine learning at scale. You are familiar with ML/scientific libraries (e.g. JAX, TensorFlow, PyTorch, NumPy, Pandas), distributed computation, and large-scale system design.
  • You are keen to address risks from foundation models, and plan for your research to impact production systems on a timescale between “immediately” and “a few years”.
  • You are excited to work with strong contributors to make progress towards a shared ambitious goal.