Operational Ethics and Safety Manager

Full Time
London, UK

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

We are looking for an Operational Ethics and Safety Manager to join our Responsible Development & Innovation (ReDI) team at Google DeepMind. 

In this individual contributor role, you will be responsible for partnering with research and product teams to consider the downstream impacts of Google DeepMind’s research and its applications. 

You will work with teams across Google DeepMind to ensure that our work is conducted in line with ethics and safety best practices, helping Google DeepMind to progress towards its mission. You will review the safety performance of AI models and provide analysis and advice to various Google DeepMind stakeholders, including our Responsibility and Safety Council.

About us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

As an Operational Ethics & Safety Manager within the ReDI team, you’ll use your expertise on the societal implications of technology to deliver impactful assessment, advisory and review work, both through direct collaboration on groundbreaking research projects and by helping to develop the broader governance ecosystem at Google DeepMind.

Key responsibilities
  • Leading ethics and safety reviews of projects, in close collaboration with project teams, to assess the downstream societal implications of Google DeepMind’s technology.  
  • Closely collaborating with the ReDI evaluations and model policy teams to review the safety performance of AI models.
  • Supporting the management of the Responsibility and Safety Council, regularly presenting projects and communicating assessments to senior stakeholders.
  • Designing engagement models to tackle ethics and safety challenges (e.g. running workshops or engaging with external experts) to help teams consider the direct and indirect implications of their work.
  • Identifying areas relevant to ethics and safety to advance research. 
  • Working with broader Google teams to monitor the outcomes of projects and understand their impact.
  • Developing and documenting best practices for Google DeepMind projects by working with internal Google DeepMind teams and experts and, where appropriate, external organisations.

About you

In order to set you up for success as an Operational Ethics and Safety Manager at Google DeepMind, we look for the following skills and experience: 

  • Experience navigating and assessing complex ethical and societal questions related to technology development, including balancing the benefits and risks of research and applications.
  • A strong understanding of the challenges and issues in the field of AI ethics and safety, through proven AI and society experience (e.g. relevant governance, policy, legal or research work).
  • Strong executive stakeholder management skills, including the ability to communicate effectively under tight turnaround times.
  • Significant experience collaborating with technical stakeholders and highly interdisciplinary teams. Proven ability to communicate complex concepts and ideas simply for a range of collaborators.
  • Excellent technical understanding and communication skills, with the ability to distil sophisticated technical ideas to their essence.

In addition, the following would be an advantage: 

  • Experience working with governance processes within a public or private institution.
  • Experience working within the field of AI ethics and safety.
  • Relevant research experience. 
  • Product management expertise or other similar experience.

Application deadline: 5pm BST Friday 31st May 2024