CVE Alert: CVE-2025-25183
Vulnerability Summary: CVE-2025-25183
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python’s built-in hash() function. As of Python 3.12, the behavior of hash(None) has changed to be a predictable constant value, which makes it more feasible for someone to try to exploit hash collisions. The impact of a collision would be the reuse of a cache entry that was generated from different content. Given knowledge of the prompts in use and this predictable hashing behavior, someone could intentionally populate the cache using a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.
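The mechanics are easy to demonstrate. The sketch below is a minimal illustration, not vLLM's actual code: it shows that on Python 3.12+ hash(None) is a predictable constant, and contrasts a built-in-hash cache key with a cryptographic one (the general direction taken by the fix in vllm-project/vllm#12621). The function names and key shapes are hypothetical.

```python
import hashlib
import pickle
import sys

# On Python >= 3.12, hash(None) is a predictable constant rather than
# depending on the interpreter's memory layout. Small ints also hash to
# themselves, so a tuple of token IDs and None placeholders hashes
# identically on every run and every machine.
print(sys.version_info)
print(hash(None))  # constant on 3.12+

# Hypothetical, simplified sketch of a prefix-cache block key; the
# names and shapes here are illustrative, not vLLM's internals.
def block_key_builtin(parent_hash: int, token_ids: tuple) -> int:
    # Vulnerable pattern: Python's built-in hash() is not collision
    # resistant, so an attacker who knows a target prompt could search
    # for a different token sequence that yields the same key.
    return hash((parent_hash, token_ids))

def block_key_sha256(parent_hash: int, token_ids: tuple) -> str:
    # Hardened pattern: a cryptographic hash over a stable
    # serialization of the same inputs makes engineered collisions
    # computationally infeasible.
    return hashlib.sha256(pickle.dumps((parent_hash, token_ids))).hexdigest()

print(block_key_builtin(0, (1, 2, None)))
print(block_key_sha256(0, (1, 2, None)))
```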
Affected Endpoints:
No affected endpoints listed.
Published Date:
2/7/2025, 8:15:34 PM
CVSS Score:
No CVSS score listed.
Exploit Status:
Not Exploited
References:
- https://github.com/python/cpython/commit/432117cd1f59c76d97da2eaff55a7d758301dbc7
- https://github.com/vllm-project/vllm/pull/12621
- https://github.com/vllm-project/vllm/security/advisories/GHSA-rm76-4mrf-v9r8
Recommended Action:
Upgrade to vLLM version 0.7.2 or later. No workarounds are known; refer to the vendor advisory above for details.