vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.1.0 up to but not including 0.19.0, a denial-of-service vulnerability exists in the vLLM OpenAI-compatible API server. Because there is no upper-bound validation on the `n` parameter in the `ChatCompletionRequest` and `CompletionRequest` Pydantic models, an unauthenticated attacker can send a single HTTP request with an arbitrarily large `n` value. Allocating millions of request-object copies on the heap blocks the Python asyncio event loop and causes an immediate out-of-memory crash before the request ever reaches the scheduling queue. This vulnerability is fixed in 0.19.0.
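The failure mode maps directly to CWE-770 (listed below): a client-supplied count fans out into per-item allocations with no server-side ceiling. The sketch that follows is illustrative only; the field layout and the `copy.deepcopy` fan-out are assumptions meant to show the shape of the bug, not vLLM's actual request-handling code.

```python
import copy
from pydantic import BaseModel

# Illustrative stand-in for the affected request models; the real
# ChatCompletionRequest has many more fields. The key detail is that
# `n` is typed but carries no upper bound, so validation accepts any int.
class ChatCompletionRequest(BaseModel):
    model: str
    prompt: str
    n: int = 1  # number of completions to generate -- no ceiling

def expand_request(req: ChatCompletionRequest) -> list[ChatCompletionRequest]:
    # Hypothetical fan-out step: one request copy per requested completion.
    # With n on the order of 10**8, this synchronous loop both exhausts
    # heap memory and starves the asyncio event loop before the request
    # ever reaches the scheduler.
    return [copy.deepcopy(req) for _ in range(req.n)]
```

Under these assumptions, a single unauthenticated call with a body like `{"model": "m", "prompt": "hi", "n": 100000000}` is enough to take the server down.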
References
| Link | Resource |
|---|---|
| https://github.com/vllm-project/vllm/commit/b111f8a61f100fdca08706f41f29ef3548de7380 | Patch |
| https://github.com/vllm-project/vllm/pull/37952 | Issue Tracking, Patch |
| https://github.com/vllm-project/vllm/security/advisories/GHSA-3mwp-wvh9-7528 | Patch, Vendor Advisory |
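The patch referenced above adds the missing upper-bound validation. A minimal sketch of that style of fix, assuming Pydantic's standard `Field` constraints (the specific ceiling of 128 is an assumption here, not necessarily the value chosen upstream):

```python
from pydantic import BaseModel, Field

class ChatCompletionRequest(BaseModel):
    model: str
    prompt: str
    # ge/le make Pydantic reject out-of-range values at parse time,
    # so an oversized `n` fails fast with a validation error instead
    # of ever allocating request copies.
    n: int = Field(default=1, ge=1, le=128)
```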
Configurations
History
20 Apr 2026, 18:30
| Type | Values Removed | Values Added |
|---|---|---|
| References | | https://github.com/vllm-project/vllm/commit/b111f8a61f100fdca08706f41f29ef3548de7380 - Patch |
| References | | https://github.com/vllm-project/vllm/pull/37952 - Issue Tracking, Patch |
| References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-3mwp-wvh9-7528 - Patch, Vendor Advisory |
| First Time | | Vllm, Vllm vllm |
| CPE | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
06 Apr 2026, 16:16
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2026-04-06 16:16
Updated : 2026-04-20 18:30
NVD link : CVE-2026-34756
Mitre link : CVE-2026-34756
CVE.ORG link : CVE-2026-34756
Products Affected
vllm
- vllm
CWE
CWE-770
Allocation of Resources Without Limits or Throttling
