Butterfly wing. Photographed with an OM System TG-7 in microscope mode.

Biden's AI Executive Order to create standards for identifying "AI-generated content and authenticating official content"

Order also demands "strong new standards for biological synthesis screening"

A new Executive Order from US President Joe Biden will demand “strong new standards for biological synthesis screening” in AI development and require standards body NIST to set “rigorous standards for extensive red-team testing to ensure safety before public release” of new AI systems. 

The order comes as the Frontier Model Forum, a group set up by OpenAI, warned that future generations of LLMs without appropriate mitigations “could accelerate a bad actor’s efforts to misuse biology” within 36 months.

It also demands the establishment of “standards and best practices for detecting AI-generated content and authenticating official content”, with the US Department of Commerce to develop “guidance for content authentication and watermarking to clearly label AI-generated content.”

(OpenAI last week said that it is “developing a technical approach to provenance in order to assist in identifying audiovisual content created by our models. Once this approach is developed, we will be deploying it broadly across our new frontier systems. We are assessing a range of provenance techniques, each with distinct pros and cons, that broadly fall into three buckets: watermarking, classifiers, metadata-based approaches.”)
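To make the three “buckets” OpenAI mentions more concrete, the sketch below illustrates the simplest of them, a metadata-based approach: the generator signs a hash of the content and ships the signature alongside it as metadata, so a verifier holding the key can later confirm both origin and integrity. This is purely illustrative, not OpenAI’s implementation; the key, field names, and model label are all hypothetical.

```python
# Toy metadata-based provenance check (illustrative only; the key and
# field names are hypothetical, not any vendor's actual scheme).
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical key held by the generator


def attach_provenance(content: bytes, model: str) -> dict:
    """Return a metadata record binding content to its claimed origin."""
    digest = hashlib.sha256(content).hexdigest()
    record = {"model": model, "sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the content hash and signature against the metadata record."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (claimed.get("sha256") == hashlib.sha256(content).hexdigest()
            and hmac.compare_digest(expected, record.get("signature", "")))


image = b"...model output bytes..."
meta = attach_provenance(image, "frontier-model-x")
print(verify_provenance(image, meta))         # True: intact content verifies
print(verify_provenance(image + b"!", meta))  # False: tampered content fails
```

The obvious limitation, and one reason the other two buckets exist, is that metadata is trivially stripped: delete the record and the content carries no provenance at all, which is why in-content watermarks and classifiers are studied as complements.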

The AI Executive Order comes days after AI companies aimed to get ahead of some common criticisms by publishing a short set of examples of model red teaming under the rubric of their newly created “Frontier Model Forum” – with members Anthropic, Google DeepMind, Microsoft and OpenAI vowing to create a responsible disclosure process by which AI labs can share “information related to the discovery of… potentially dangerous capabilities within frontier AI models — and their associated mitigations.”

See also: OpenAI, peers warn in Red Team report that AI could support “misuse” of biology

Intriguingly, a Biden AI Executive Order fact sheet also says the administration will “establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. 

“Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure,” it adds.

The Executive Order was welcomed by industry including CrowdStrike. The cybersecurity company’s Drew Bagley, VP & Counsel, Privacy and Cyber Policy, said he was “encouraged” to see the order, noting that “adversaries continue expressing interest in leveraging LLMs to move more quickly and scale their operations… [but the] natural language interface of today’s LLMs has the potential to make cybersecurity roles and responsibilities more broadly accessible, helping to close the cybersecurity skills gap and improve response time so defenders can stay ahead of adversaries – boosting proactive security across organizations.”

The Executive Order demands production of “a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.”
