CVE Alert: CVE-2025-29783

Vulnerability Summary: CVE-2025-29783
vLLM is a high-throughput, memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Mooncake, unsafe deserialization is exposed directly over ZMQ/TCP on all network interfaces, allowing attackers to execute remote code on distributed hosts. This is a remote code execution vulnerability affecting any deployment that uses Mooncake to distribute the KV cache across distributed hosts. The issue is fixed in vLLM 0.8.0.
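The unsafe-deserialization mechanism described above can be sketched in a few lines. This is an illustrative Python example of why unpickling attacker-controlled bytes is remote code execution, not vLLM or Mooncake code; the class name is hypothetical.

```python
import pickle

class Malicious:
    """Illustrative payload class: __reduce__ tells pickle what callable
    to invoke at load time, so deserialization itself runs attacker code."""
    def __reduce__(self):
        # On unpickling, pickle calls eval("1 + 1"). A real attacker would
        # substitute os.system or a reverse-shell callable here.
        return (eval, ("1 + 1",))

# Bytes an attacker could deliver over an exposed ZMQ/TCP socket.
payload = pickle.dumps(Malicious())

# A receiver that calls pickle.loads() on untrusted socket data executes
# the attacker-chosen callable during deserialization.
result = pickle.loads(payload)
print(result)  # the attacker's expression ran inside the server process
```

This is why deserialization endpoints reachable over the network must either authenticate peers or use a data-only format; a patched service cannot safely unpickle bytes from arbitrary hosts.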
Affected Endpoints:
No affected endpoints listed.
Published Date:
3/19/2025, 4:15:32 PM
CVSS Score:
No CVSS score listed.
Exploit Status:
Not Exploited
References:
- https://github.com/vllm-project/vllm/commit/288ca110f68d23909728627d3100e5a8db820aa2
- https://github.com/vllm-project/vllm/pull/14228
- https://github.com/vllm-project/vllm/security/advisories/GHSA-x3m8-f7g5-qhm7
Recommended Action:
Upgrade to vLLM 0.8.0 or later, where this vulnerability is fixed. Until the upgrade is in place, avoid exposing Mooncake's ZMQ/TCP endpoints on untrusted networks.
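The "all network interfaces" exposure described in the summary comes down to which address a listening socket binds. A minimal sketch with Python's standard-library TCP sockets (plain sockets rather than ZMQ, purely to illustrate the bind-address distinction; port and addresses are illustrative):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Binding to "0.0.0.0" would listen on every interface, so any host that
# can reach this machine could deliver bytes to the deserializer -- the
# vulnerable exposure pattern.
#   srv.bind(("0.0.0.0", 5555))

# Binding to loopback (or a private, firewalled interface) limits who can
# reach the endpoint. Port 0 asks the OS for any free port.
srv.bind(("127.0.0.1", 0))
host, port = srv.getsockname()
srv.close()
print(host)  # confirms the listener is reachable only via loopback
```

Restricting the bind address is a stopgap for trust boundaries, not a substitute for the 0.8.0 upgrade: any host inside the permitted network segment can still reach the deserializer.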