AGI Safety Manager
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Snapshot
We are looking for an AGI Safety Manager to join our Responsible Development & Innovation (ReDI) team at Google DeepMind.
In this role you will be responsible for partnering with research, product and policy teams focused on AGI. You will help anticipate the risks and challenges from AGI and assess AGI-related efforts and technologies. Among your core responsibilities, you will manage the operation of our AGI Safety Council. You will contribute to our efforts to ensure that our AGI work is conducted in line with our responsibility and safety best practices, helping Google DeepMind progress towards its mission to build AI responsibly to benefit humanity.
About Us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and responsibility are the highest priority.
We constantly iterate on our workplace experience with the goal of ensuring it encourages a balanced life. From excellent office facilities through to extensive manager support, we strive to support our people and their needs as effectively as possible.
The Role
As an AGI Safety Manager within the ReDI team, you’ll use your expertise to deliver impactful work through direct collaboration on groundbreaking research projects and to help develop the broader governance ecosystem at Google DeepMind.
In your role, you will support the operation of the AGI Safety Council by producing analyses and reports that inform decision making. The AGI Safety Council is concerned with extreme risk from AGI, whether from misalignment, misuse, or structural risks.
Your role will be broad and cross-cutting, requiring a variety of skills. As the first operational hire supporting the AGI Safety Council, you will help define the role and the mode of operation of the committee. Synthesising and producing research ideas, prioritising effectively, and building trusted relationships are critical skills for this role.
The responsibilities include:
- Conducting generalist research and analysis to answer pressing questions in AGI safety, including identifying novel considerations and critical insights at the intersection of different areas of expertise.
- Creating insightful and compelling reports, research papers and case studies for internal and external stakeholders.
- Leading AGI safety reviews of relevant projects, in close collaboration with project teams, to assess the downstream societal implications of Google DeepMind’s technology.
- Embedding deeply in our research efforts, particularly the teams working on Scalable Alignment, Frontier Safety and Governance, and Gemini Safety, and liaising between the committee and project teams to provide guidance to technical stakeholders on operationalising AGI safety.
- Providing progress updates to key stakeholders and ensuring alignment on priorities and expectations.
- Identifying opportunities for enhanced knowledge sharing and collaboration across the organisation.
- Collaborating with external partners to test and validate potential risks and benefits of Google DeepMind’s work.
- Developing and documenting best practices for Google DeepMind projects by working with internal teams and experts and, where appropriate, external organisations.
In order to set you up for success as an AGI Safety Manager at Google DeepMind, we look for the following skills and experience:
- You are familiar with AGI safety and governance work, alongside safety and policy issues related to technology and society more broadly.
- Comfortable with ambiguity, you enjoy providing “good enough” solutions to open-ended problems, switching between problems to prioritise the most impactful opportunities, and considering how your work fits within a broad and constantly adapting space.
- You excel at problem solving, knowing which questions to ask, how to go about answering those questions, and providing solutions. Skilled in thinking logically about the big picture, you identify areas of improvement and drive change.
- You are keen to strengthen your knowledge of programme management and portfolio management through collaboration with technical and non-technical individuals. Possessing a curious attitude, you are committed to continuous learning about groundbreaking research and technology.
- Strong stakeholder management skills across a variety of fields and levels.
- Significant experience supporting teams to deliver projects in fast-paced and constantly evolving environments.
- Experience navigating, defining and innovating on complex processes.
- Proven ability to communicate complex concepts and ideas simply to a range of collaborators.
- Excellent technical understanding and communication skills, with the ability to distil sophisticated technical ideas to their essence.
In addition, the following would be an advantage:
- Experience working with governance processes within a public or private institution.
- Experience working within the field of AI Safety and Governance.
- Experience managing and running complex projects with multidisciplinary stakeholders.
- Research experience combined with a proven track record in roles requiring operational and analytical skills.
Application deadline: Monday 6th May at 6pm BST.