Research Scientist, Frontier Safety & Governance

Full Time
London, UK

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

On the Frontier Safety and Governance Team, we help Google DeepMind and the world prepare for advanced AI. We partner with GDM’s policy, safety and responsibility teams to lead work in three areas. Our Frontier Safety work seeks to understand and evaluate dangerous capabilities and helps build Google’s risk management framework (the Frontier Safety Framework). Our Frontier Governance work blueprints better governance of AI, identifying norms and institutional structures that could improve decision-making around advanced AI. Our Frontier Planning work forecasts significant new capabilities and developments, anticipating the potential role of national (and multilateral) efforts.

We are a collaborative team, with expertise spanning political science, international relations, technology policy, economics, political economy, history, national security, institutional design, ethics, philosophy, technical ML, technical AI safety, hardware, supply chains, and other domains. We are interested in recruiting from a wide range of backgrounds and areas of expertise.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

We’ve built an inclusive environment where collaboration is encouraged and learning is shared freely. We don’t set limits based on what others think is possible or impossible. We encourage each other to push boundaries and achieve ambitious goals!

Our list of benefits is extensive, and we’re happy to discuss this further throughout the interview process.

The Role

Research Scientists join Google DeepMind to work collaboratively within and across a range of research fields. They develop solutions to fundamental questions in machine learning, computational neuroscience, AI, and AI policy and governance. We are looking to hire at early and mid-career levels, though we will also consider more senior candidates. Those coming from academia may join us from PhD, post-doc, or professor positions.

Key responsibilities:
  • Propose, contribute to, and lead research projects related to the governance of advanced AI at Google DeepMind.
  • Build and contribute to internal and external collaborations, through involvement in working groups, presentations, and the writing of memos. 
  • Prepare briefings and recommendations for Google DeepMind leadership.
  • Monitor trends and developments in the AI landscape and implications for AI safety, strategy and governance.
  • Produce insightful, engaging and actionable research papers, memos and risk analyses that are easily digestible by internal decision-makers and external stakeholders.
  • Proactively build relationships across the company to inform your research and find opportunities for how your work can support other teams.
  • Monitor relevant external developments closely, cultivate relationships with external domain experts and partners, and share targeted updates with internal audiences.
  • Work in collaboration with our Policy, Safety and Responsibility teams to ensure our advances in intelligence are developed ethically and provide broad benefits to humanity.

About You

In order to set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:

  • PhD, or MA plus equivalent research or practical experience, in a relevant field.
  • Generalist with a breadth of abilities. Expertise in a relevant field (e.g. political science, international relations, technology policy, economics, history, institutional design).
  • Professional or research experience in AI governance or policy. 
  • Knowledge of the technical AI landscape, policy-making and history of major government decisions relevant to AI, and the global governance of AI. 
  • The ability to come up to speed quickly on sophisticated political, social, and technical topics.
  • Ability to collaborate with a diverse and interdisciplinary range of stakeholders.
  • Ability to synthesise complex material into accessible documents, tailored to different audiences.
  • Professional communication, writing, and presentation skills. 

In addition, the following would be an advantage: 

  • Technical expertise (ML, AGI Safety).
  • Experience incorporating the perspectives and interests of a diverse range of communities, groups and partners. 

A competitive salary applies.

The deadline for applications is 5pm BST on Tuesday 17th September.