vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.7.0 up to but not including 0.19.0, the VideoMediaIO.load_base64() method in vllm/multimodal/media/video.py splits video/jpeg data URLs on commas to extract individual JPEG frames but does not enforce a frame-count limit. The num_frames parameter (default: 32), which is enforced by the load_bytes() code path, is completely bypassed in the video/jpeg base64 path. An attacker can send a single API request containing thousands of comma-separated base64-encoded JPEG frames, causing the server to decode all of them into memory and crash with an out-of-memory (OOM) error. This vulnerability is fixed in 0.19.0.
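The vulnerable pattern and its fix can be illustrated with a minimal sketch. This is not vLLM's actual code; the function names `load_base64_unbounded` and `load_base64_bounded` are hypothetical, and only the split-on-comma behavior and the `num_frames` cap (default 32) come from the advisory:

```python
import base64

# Hypothetical sketch of the vulnerable pattern: every comma-separated
# chunk of a video/jpeg base64 payload is treated as one JPEG frame and
# decoded, with no cap on how many frames the request may contain.
def load_base64_unbounded(data: str) -> list[bytes]:
    return [base64.b64decode(chunk) for chunk in data.split(",")]

# A bounded variant that enforces a frame-count limit analogous to the
# num_frames cap (default 32) applied on the load_bytes() path, rejecting
# oversized payloads before any frame is decoded into memory.
def load_base64_bounded(data: str, num_frames: int = 32) -> list[bytes]:
    chunks = data.split(",")
    if len(chunks) > num_frames:
        raise ValueError(
            f"too many video frames: {len(chunks)} exceeds limit {num_frames}"
        )
    return [base64.b64decode(chunk) for chunk in chunks]
```

With the unbounded variant, a single request carrying thousands of frames forces the server to allocate memory for every decoded frame; the bounded variant fails fast with a cheap length check before decoding.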
References
| Link | Resource |
|---|---|
| https://github.com/vllm-project/vllm/security/advisories/GHSA-pq5c-rjhq-qp7p | Patch Vendor Advisory |
History
20 Apr 2026, 18:31
| Type | Values Removed | Values Added |
|---|---|---|
| References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-pq5c-rjhq-qp7p - Patch, Vendor Advisory |
| CPE | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
| First Time | | Vllm<br>Vllm vllm |
06 Apr 2026, 16:16
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2026-04-06 16:16
Updated : 2026-04-20 18:31
NVD link : CVE-2026-34755
Mitre link : CVE-2026-34755
CVE.ORG link : CVE-2026-34755
Products Affected
vllm
- vllm
CWE
CWE-770
Allocation of Resources Without Limits or Throttling
