US watchdog warns on "security, equity, civil rights" risks of open LLMs - launches consultation
Open models hold potential for “substantial harms, such as risks to security, equity, civil rights, or other harms due to, for instance, affirmative misuse, failures of effective oversight, or lack of clear accountability mechanisms”
The US government has asked for “comment” on how it should regulate “dual-use” open foundation models, raising the prospect that it could try to rein in their use and leave the field dominated by big tech’s closed AI technology.
While closed models such as ChatGPT and Google’s Gemini have dominated the debate around AI for over a year, advocates of “open” AI have argued that big tech should not have a stranglehold on the technology by default.
The National Telecommunications and Information Administration has kicked off a consultation “on the potential risks, benefits, other implications, and appropriate policy and regulatory approaches to dual-use foundation models for which the model weights are widely available.”
The call for comment notes that while “prominent models” such as ChatGPT offer “limited or no public access to their inner workings”, the debut of large, publicly available models, such as those from Google, Meta, Stability AI, Mistral, the Allen Institute for AI, and EleutherAI, “has fostered an ecosystem of increasingly ‘open’ advanced AI models”.
The NTIA accepts that such models can help broaden access to AI benefits, particularly for small businesses and academic institutions as well as “underfunded entrepreneurs, and even legacy businesses”. They could also allow for more transparency and access and promote competition in downstream markets.
But it also notes they hold out the potential of “substantial harms, such as risks to security, equity, civil rights, or other harms due to, for instance, affirmative misuse, failures of effective oversight, or lack of clear accountability mechanisms”.
They could also be used to develop attacks against proprietary models, “due to similarities in the data sets used to train them”. Meanwhile, the “shrinking amount of compute” needed to tune open models could boost malicious actors’ ability to use them for harm.
The NTIA’s request for comment gives some insight into how it could attempt to regulate open models. It asks for advice on how to define such models, their potential benefits, and how quickly they might be deployed and distributed. And it wants views on the “risks” they pose compared to closed models.
When it comes to risks, it cites their impact on “equity” in systems such as healthcare or justice. But it also wants to know about “novel ways that state or non-state actors” could use them to exacerbate security risks, “including but not limited to threats to infrastructure, public health, human and civil rights, democracy, defense, and the economy?”
It also wants suggestions as to how these risks might be mitigated, whether through safeguards, licensing and distribution models, and red-teaming, or through “regain[ing] control” over models that have already become available.
And, it asks, “Are there particular individuals/entities who should or should not have access to open-weight foundation models? If so, why and under what circumstances?”
This line of questioning will be familiar to anyone who has watched the US grapple with the implications of dual-use technology over the years. Cold War restrictions on exports of supercomputers – useful for designing nuclear warheads and ICBMs, among other things – could theoretically have covered most business-class computers for much of the 90s.
Likewise, the US and other western governments spent the 90s attempting to restrict cryptography exports and/or mandate government backdoors. Arguably, they are still trying to stuff that genie back into the bottle.
Will it be different this time? That would be a first.