Research Scientist, AI Safety and Alignment

Full Time
San Francisco, CA, USA

Office locations: Also open to Mountain View and London. 

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

Our team is responsible for enabling AI systems to reliably work as intended, including identifying potential risks from current and future AI systems, and conducting technical research to mitigate them. As a Research Scientist, you will design, implement, and empirically validate approaches to alignment and risk mitigation, and integrate successful approaches into our best AI systems.

About Us

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, interpretability, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems.

Research Scientists work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind.

The Role

Key responsibilities:

  • Identify and investigate possible failure modes for foundation models, ranging from sociotechnical harms (e.g. fairness, misinformation) to misuse (e.g. weapons development, criminal activity) to loss of control (e.g. high-stakes failures, rogue AI).
  • Develop and implement technical approaches to mitigate these risks, such as benchmarking and evaluations, dataset design, scalable oversight, interpretability, adversarial robustness, monitoring, and more, in coordination with the team’s broader technical agenda.
  • Report and present research findings and developments to internal and external collaborators with effective written and verbal communication.
  • Collaborate with other internal teams to ensure that Google DeepMind AI systems and products (e.g. Gemini) are informed by and adhere to the most advanced safety research and protocols.

About You

  • You have extensive research experience with deep learning and/or foundation models (for example, a PhD in machine learning).
  • You are adept at generating ideas, designing experiments, and implementing them in Python with real AI systems.
  • You are keen to address risks from foundation models, and have thought about how to do so. You plan for your research to impact production systems on a timescale between “immediately” and “a few years”.
  • You are excited to work with strong contributors to make progress towards a shared ambitious goal. With strong, clear communication skills, you are confident engaging technical stakeholders to share research insights tailored to their background.