vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.1.0 up to but not including 0.10.1.1, a denial-of-service (DoS) vulnerability can be triggered by sending a single HTTP GET request with an extremely large header to an HTTP endpoint. This exhausts server memory, potentially leading to a crash or unresponsiveness. The attack does not require authentication, making it exploitable by any remote user. The vulnerability is fixed in version 0.10.1.1.
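As a general hardening measure while upgrading, deployments can cap how many header bytes the HTTP layer will buffer before rejecting a request. The sketch below assumes a uvicorn-served ASGI deployment (vLLM's OpenAI-compatible server is one such setup); the app stub, port, and 16 KiB cap are illustrative, and the h11_max_incomplete_event_size option is uvicorn's own setting (available in uvicorn 0.26+), not necessarily the exact mechanism used in the 0.10.1.1 fix.

```python
# Mitigation sketch: bound the memory the HTTP layer spends on request
# headers before the application ever sees them. Values are illustrative.
import uvicorn


async def app(scope, receive, send):
    # Minimal ASGI app standing in for the real API server.
    if scope["type"] != "http":
        return
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"ok"})


if __name__ == "__main__":
    uvicorn.run(
        app,
        host="0.0.0.0",
        port=8000,
        # Reject requests whose headers exceed 16 KiB instead of buffering
        # them without limit (exposed by uvicorn 0.26+; verify the option
        # name against your uvicorn version).
        h11_max_incomplete_event_size=16 * 1024,
    )
```

Placing a reverse proxy with its own header-size limits in front of the endpoint achieves the same effect at the network edge.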
Configurations
No configuration.
History
22 Aug 2025, 18:09 : Summary modified
21 Aug 2025, 15:15 : New CVE
Information
Published : 2025-08-21 15:15
Updated : 2025-08-22 18:09
NVD link : CVE-2025-48956
Mitre link : CVE-2025-48956
CVE.ORG link : CVE-2025-48956
Products Affected
No product.
CWE
CWE-400
Uncontrolled Resource Consumption
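CWE-400 covers code that buffers attacker-controlled input without an upper bound. A minimal, hypothetical illustration of the vulnerable pattern and its capped counterpart follows; the function names and the 16 KiB limit are invented for this sketch and are not taken from vLLM's code.

```python
# Illustrative sketch of CWE-400 as it applies to oversized request headers.
import io

MAX_HEADER_BYTES = 16 * 1024  # illustrative cap, not vLLM's default


def read_headers_unbounded(stream: io.BufferedReader) -> bytes:
    """Vulnerable pattern: memory use grows with whatever the client sends."""
    buf = bytearray()
    while not buf.endswith(b"\r\n\r\n"):
        chunk = stream.read(4096)
        if not chunk:
            break
        buf.extend(chunk)  # keeps growing for an extremely large header
    return bytes(buf)


def read_headers_bounded(stream: io.BufferedReader) -> bytes:
    """Mitigated pattern: refuse to buffer more than MAX_HEADER_BYTES."""
    buf = bytearray()
    while not buf.endswith(b"\r\n\r\n"):
        if len(buf) > MAX_HEADER_BYTES:
            raise ValueError("request headers too large")
        chunk = stream.read(4096)
        if not chunk:
            break
        buf.extend(chunk)
    return bytes(buf)
```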