Research Engineer - Assurance Evaluation

Full-time
London, UK

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create outstanding impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

This role is for an engineer focusing on assurance evaluations at Google DeepMind. These are the evaluations which allow decision-makers to ensure that our model releases are safe and responsible. The role involves developing and maintaining these evaluations and the infrastructure that supports them.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

We’re looking for engineers who are interested in creating and running the evaluations that we use to make release decisions for our groundbreaking AI systems.

You will apply your engineering skills to develop and maintain infrastructure and methods for reliable, repeatable evaluations, including datasets, systems to run automated and human evaluations, and analysis tools.

You will develop and maintain an understanding of the trends in AI development, governance, and sociotechnical research. Using this understanding, you will help craft new evaluations, and communicate the results clearly to advise and inform decision-makers on the safety of our AI systems.

In all this work, you will collaborate closely with other engineers and research scientists, both those focused on developing AI systems and experts in AI ethics and policy.

Key responsibilities:
  • Design, develop and execute evaluations to test the safety of groundbreaking AI models.
  • Develop and maintain infrastructure for these evaluations.
  • Clearly communicate results to decision-makers.
  • Collaborate with experts in various fields of AI ethics, policy and safety.

About you

To set you up for success in this role, we look for the following skills and experience:

  • Bachelor's degree in a technical subject (e.g. machine learning, AI, computer science, mathematics, physics, statistics), or equivalent experience.
  • Strong knowledge of and experience with Python.
  • Knowledge of mathematics, statistics and machine learning concepts useful for understanding research papers in the field.
  • Ability to present technical results clearly.

In addition, some of the following would be an advantage:

  • A deep interest in the ethics and safety of AI systems, and in AI policy.
  • Experience with crowd computing (e.g. designing experiments, working with human raters).
  • Experience with web application development and user experience design.
  • Experience with data analysis tools and libraries.
  • Skill and interest in working on projects with many collaborators.

Applications close at 6pm (UK Time) on Wednesday 24th January 2024.