Responsibility Product Manager

Full Time
Mountain View, CA, USA

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

We’re looking for someone passionate about AI development, and about ensuring its positive impact, to join our Responsible Development & Innovation (ReDI) team as a Responsibility Product Manager at Google DeepMind (GDM).

In this role you will partner with tech teams to design and execute leading safety and responsibility approaches for our flagship systems. You’ll gather insights from policy and governance experts along with users of our models, synthesize this and other data into a responsibility roadmap, and ensure what we build upholds our AI Principles.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Responsible Development & Innovation (ReDI) team partners with efforts across Google DeepMind to consider the downstream implications of our work, weighing the short- and long-term benefits and risks of projects in all areas of the organisation.

The Role

As a Responsibility Product Manager within the ReDI team, you’ll be responsible for shaping and driving how Google DeepMind’s models are developed safely and responsibly, contributing to the responsibility ecosystem within the organisation.

Key responsibilities:

  • Partner with project leadership teams to develop responsibility strategies that ensure a holistic approach to risk mitigation and alignment with GDM model policy requirements.
  • Embed into model development teams to design and deliver end-to-end responsibility roadmaps for GDM models.
  • Work with tech leads and other senior stakeholders to identify opportunities to innovate in the domains of responsibility and safety.
  • Understand the safety requirements of users and policymakers for the models being developed.
  • Collaborate with broader Responsibility & Safety teams across GDM and Google to identify and monitor ethics and safety benefits and risks for projects.
  • Work with partner expert teams to ensure projects uphold best practices in ethics & safety evaluations and comply with regulatory safety requirements or internally defined launch criteria.
  • Create and monitor feedback loops for safety and responsibility data for models in production.

About You

To set you up for success as a Responsibility Product Manager at Google DeepMind, we look for the following skills and experience:

  • Strong understanding of machine learning systems, able to discuss technical topics in depth with engineers and researchers.
  • Strong understanding of the safety & responsibility implications of generative models, able to discuss technical mitigation approaches in depth.
  • Comfortable with ambiguity; can define a compelling vision, rally a team around it, and break the big picture down into incremental pieces that deliver value along the way.
  • Curious, collaborative and kind; someone who can bring teams with different opinions together and define a common and optimal path forward.
  • Hands-on experience as a technical product manager (or similar role) taking an ML model from concept to design to launch.

In addition, the following would be an advantage:

  • Specific experience developing safety solutions for an ML system.
  • Experience working with researchers and PAs across Google or a similar large technology organisation.

The US base salary range for this full-time position is between $139,000 and $213,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Application Deadline: 12pm GMT, Thursday 7th March 2024