Hugging Face left with split lip after someone punches their way into Spaces secrets
"We pledge to use this as an opportunity to strengthen the security of our entire infrastructure"
Hugging Face has pledged to strengthen security across its entire infrastructure – after revealing that someone had gained unauthorised access to secrets on its Spaces AI model sharing platform.
Hugging Face started life as a chatbot platform aimed at teenagers. However, it subsequently open-sourced its model and reinvented itself as a machine learning platform, providing tools and services for AI and ML developers, as well as a platform for sharing models, including open source ones.
The French-American company revealed on Friday that earlier in the week, “our team detected unauthorized access to our Spaces platform, specifically related to Spaces secrets.”
This sparked suspicions that “a subset of Spaces’ secrets” could have been accessed.
Hugging Face has revoked “a number of tokens” related to those particular secrets. Affected users will have already received an email.
More broadly, it said, “We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default.”
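For Space owners acting on that advice, a minimal sketch of what rotating credentials might look like with the huggingface_hub Python client is below. The repo ID, secret name and token value are placeholders, and a fresh fine-grained token still has to be generated in the Hugging Face account settings first; this is an illustration, not Hugging Face's prescribed procedure.

```python
# Minimal sketch: switch to a freshly generated fine-grained token and
# overwrite a Space secret so it no longer relies on a potentially exposed value.
# "your-org/your-space", "MY_API_KEY" and the token string are placeholders.
from huggingface_hub import HfApi, login

NEW_TOKEN = "hf_xxx"  # freshly generated fine-grained token (placeholder)

# Authenticate locally with the new token, replacing any cached credential.
login(token=NEW_TOKEN)

api = HfApi(token=NEW_TOKEN)
print(api.whoami()["name"])  # sanity check that the new token works

# Replace a secret stored on a Space with a newly issued value.
api.add_space_secret(
    repo_id="your-org/your-space",
    key="MY_API_KEY",
    value="new-secret-value",
)
```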
It said it is working with “cyber security forensic specialists” and has been in touch with law enforcement.
Meanwhile, it is scrambling to review its security setup. It claimed to have made other “significant improvements”, including “completely” removing org tokens to boost traceability.
It has also implemented key management for Spaces secrets and improved its ability to identify “leaked tokens and proactively invalidate them”. More generally, it will improve security “across the board”, including “completely deprecating ‘classic’ read and write tokens in the near future, as soon as fine-grained access tokens reach feature parity.”
Hugging Face was described as the GitHub of machine learning by CNCF executive director Priyanka Sharma at KubeCon Europe back in March. With over 640,000 models available, depending on who you listen to, it has grown incredibly quickly as individuals and companies attempt to keep up in the AI race.
See also: Deepfake CEO scams overblown - for now at least
It said last week that “We pledge to use this as an opportunity to strengthen the security of our entire infrastructure.” However, most enterprise execs will argue that it should have taken that opportunity a long time ago.
Earlier this year, security firm Wiz revealed it was possible to upload a malicious model and use “container escape techniques” to access other models on Hugging Face’s infrastructure. The companies said they were working on mitigations.
Meanwhile, JFrog in February identified malicious models on the platform that could be used to backdoor victims’ machines.