Ethics Foresight Policy & Frameworks Manager

Full Time
London, UK

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

We are looking for an Ethics Foresight Policy & Frameworks manager to join our Responsible Development & Innovation (ReDI) team at Google DeepMind (GDM).

In this role, you will be responsible for proactively identifying, researching, and addressing emerging AI ethics and safety challenges in the area of generative AI.

You will partner with internal and external experts to develop and iterate on practical guidelines and policies. These guidelines and policies will ensure that GDM develops and deploys its technology in a way that is aligned with the company's AI Principles. When necessary, you will also conduct novel research to help inform these practical, pioneering policies and guidelines.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At GDM, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery. We collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

As an Ethics Foresight Policy & Frameworks Manager, you should expect your outputs to take various forms, depending on the topic or need.

This may include: guidelines for research or governance teams to follow when developing or deploying technology; model content policies that steer model development and evaluations; and artefacts, processes, or coordination mechanisms needed to best support the creation and implementation of those guidelines and policies at GDM and beyond.

Key responsibilities:
  • Build upon and operationalise policies for AI-generated content across a wide range of modalities in collaboration with internal and external experts
  • Support teams across GDM in interpreting the policies and understanding how they apply to their work
  • Work closely with relevant Google teams, including Trust & Safety, to align on model policies and to collaborate on updates
  • Identify and research risks and emerging policy issues associated with AI-generated content
  • Conduct research on identified challenges, gathering information from a variety of sources, including external and internal experts, academic literature, and industry reports
  • Develop practical guidelines and frameworks to help ensure that Google develops and deploys technology in line with its AI Principles
  • Where appropriate, advise on the creation of evaluations for the implementation of new guidelines, or review outputs to determine whether policies have been violated
  • Communicate findings and recommendations to stakeholders, including researchers, engineers, product managers, and executives

About you

In order to set you up for success in this role, we look for the following skills and experience:

  • Master's degree or PhD, or equivalent experience, in a relevant field, such as philosophy, ethics, computer science, social sciences, or public policy
  • Demonstrated experience in AI ethics, AI policy, or a related field
  • Strong research and writing skills
  • Experience working within interdisciplinary teams
  • Ability to communicate complex concepts and ideas simply for a range of collaborators
  • Ability to think critically and creatively about complex ethical issues
  • Strong understanding of the latest developments in AI ethics and safety

We will be reviewing applications on a rolling basis.