OpenAI, peers warn in Red Team report that AI could support “misuse” of biology

Report comes as nation states fret over risk; UK’s PM says AI companies shouldn’t “mark their own homework”

The Frontier Model Forum, a group set up by OpenAI, has warned that future generations of LLMs without appropriate mitigations “could accelerate a bad actor’s efforts to misuse biology” within 36 months.

Anthropic, Google DeepMind, Microsoft and OpenAI set up the group this summer and have backed it with $10 million. They launched it amid growing pressure from nation states, including the UK, over the safety of “frontier” AI.

Prime Minister Rishi Sunak warned late last week that “right now, the only people testing the safety of AI are the very organisations developing it.”
