Research Engineer, Assurance Evaluation

Full Time
London, UK

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

We’re looking for engineers who are interested in creating and executing the evaluations that we use to make release decisions for our cutting-edge AI systems.

You will apply your engineering skills to develop and maintain infrastructure and methods for reliable, repeatable evaluations, including datasets, systems to run automated and human evaluations, and analysis tools.

You will develop and maintain an understanding of the trends in AI development, governance, and sociotechnical research. Using this understanding, you will help design new evaluations, and communicate the results clearly to advise and inform decision-makers on the safety of our AI systems.

Throughout this work, you will collaborate closely with other engineers and research scientists, both those focused on developing AI systems and experts in AI ethics and policy.

Key responsibilities:
  • Design and develop evaluations to test the safety of cutting-edge AI models.
  • Develop and maintain infrastructure for these evaluations.
  • Run these evaluations prior to the release of new AI models.
  • Clearly communicate results to decision-makers.
  • Collaborate with experts in various fields of AI ethics, policy and safety.

About You

To set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:

  • Bachelor's degree in a technical subject (e.g. machine learning, AI, computer science, mathematics, physics, statistics), or equivalent experience.
  • Strong knowledge of, and experience with, Python.
  • Knowledge of mathematics, statistics and machine learning concepts useful for understanding research papers in the field.
  • Ability to present technical results clearly.
  • A deep interest in the ethics and safety of AI systems, and in AI policy.

In addition, the following would be an advantage: 

  • Experience with crowd computing (e.g. designing experiments, working with human raters).
  • Experience with web application development and user experience design.
  • Experience with data analysis tools & libraries.
  • Skill and interest in working on projects with many stakeholders.

Applications close on Tuesday 12th November 2024.