Ollama: Easy-to-exploit vulnerability discovered, patched
Files can be overwritten and corrupted, say security researchers.
Security researchers at Wiz found a critical vulnerability in Ollama, an incredibly popular open-source project used to run AI models.
Ollama is widely used due to its compatibility with mainstream LLMs such as Meta's Llama, Microsoft's Phi, Google's Gemma, and the Mistral AI models. It has over 70,000 stars on GitHub and records a high rate of monthly pulls.
The easy-to-exploit remote code execution vulnerability in Ollama, CVE-2024-37032, was disclosed to Ollama's maintainers and has since been patched. Users have been encouraged to upgrade their installations to version 0.1.34 or later.
The vulnerability was initially reported to Ollama on May 5th, and its maintainers released a patched version on May 8th. According to Wiz, Ollama committed a fix within four hours of being notified of the vulnerability.
According to Wiz, because Ollama ships without built-in authentication support, vulnerable versions of the project could still be targeted by malicious actors. To prevent this, Wiz recommended that users deploy Ollama behind a reverse proxy that enforces authentication.
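Wiz did not prescribe a particular proxy, so how authentication is enforced is up to the operator. As a minimal sketch, assuming Ollama is listening on its default port 11434, a small Go reverse proxy can require HTTP Basic Auth before forwarding any request; the credentials and listen port below are placeholders, not values from the Wiz report.

```go
// Minimal sketch of the mitigation: a reverse proxy that demands
// HTTP Basic Auth before forwarding traffic to a local Ollama
// instance on its default port, 11434.
package main

import (
	"crypto/subtle"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Ollama listens on localhost:11434 by default.
	upstream, err := url.Parse("http://127.0.0.1:11434")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	handler := func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		// Constant-time comparison avoids timing side channels.
		if !ok ||
			subtle.ConstantTimeCompare([]byte(user), []byte("admin")) != 1 ||
			subtle.ConstantTimeCompare([]byte(pass), []byte("change-me")) != 1 {
			w.Header().Set("WWW-Authenticate", `Basic realm="ollama"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r)
	}

	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
}
```

In practice the same effect is more commonly achieved with an off-the-shelf proxy such as nginx or Caddy, ideally combined with TLS.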
The researchers found that the vulnerability allowed files on the server to be overwritten. Dubbing it "Probllama", they noted that it was a classic path traversal vulnerability.
The arbitrary file overwrite could be used to corrupt files on the server, and in Docker installations, where the server runs with root privileges, it was "quite straightforward" to escalate to remote code execution.
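Wiz's write-up does not publish the exploit itself, so the snippet below is only a generic sketch of the path traversal class, not Ollama's actual code: a server builds a file path from attacker-controlled input, and ".." segments let the path escape the intended directory. The directory and file names are illustrative.

```go
// Generic illustration of a path traversal bug and its standard fix.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// unsafeJoin mirrors the vulnerable pattern: user input is joined
// onto the base directory without validating the result, so ".."
// segments can climb out of baseDir.
func unsafeJoin(baseDir, name string) string {
	return filepath.Join(baseDir, name)
}

// safeJoin applies the standard fix: join and clean the path, then
// verify it still lives under the base directory.
func safeJoin(baseDir, name string) (string, error) {
	p := filepath.Join(baseDir, name) // Join also cleans the path
	if !strings.HasPrefix(p, filepath.Clean(baseDir)+string(filepath.Separator)) {
		return "", fmt.Errorf("path escapes base directory: %q", name)
	}
	return p, nil
}

func main() {
	base := "/var/lib/models" // hypothetical model store
	evil := "../../../etc/ld.so.preload"

	// Resolves to /etc/ld.so.preload -- an arbitrary file overwrite.
	fmt.Println(unsafeJoin(base, evil))

	if _, err := safeJoin(base, evil); err != nil {
		fmt.Println(err) // the traversal attempt is rejected
	}
}
```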
Despite the near-immediate patch, the vulnerability remains a problem. Wiz noted that as of June 10th, a large number of internet-facing Ollama instances were still running a vulnerable version.
Users are advised to update Ollama to the latest version and to avoid exposing it to the internet unless it is protected by an authentication mechanism.