vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.16.0 up to, but not including, 0.19.0, a server-side request forgery (SSRF) vulnerability in `download_bytes_from_url` allows any actor who can control batch input JSON to make the vLLM batch runner issue arbitrary HTTP/HTTPS requests from the server, with no URL validation or domain restrictions.
This can be used to reach internal services accessible from the vLLM host (e.g. cloud metadata endpoints or internal HTTP APIs). This vulnerability is fixed in version 0.19.0.
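The advisory does not reproduce the patch. As a rough sketch of the class of validation such a fix implies (the function name `is_url_allowed` is illustrative, not vLLM's actual API), a fetcher could reject URLs whose hosts resolve to internal address ranges before downloading anything:

```python
import ipaddress
import socket
from urllib.parse import urlparse


def is_url_allowed(url: str) -> bool:
    """Return False for non-HTTP(S) schemes and for hosts that resolve to
    private, loopback, link-local, or reserved addresses (this is a sketch,
    not vLLM's actual fix)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        # An IP literal resolves to itself; hostnames go through DNS.
        infos = socket.getaddrinfo(host, parsed.port or 80,
                                   proto=socket.IPPROTO_TCP)
    except (socket.gaierror, ValueError):
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

Note that a check-then-fetch pattern like this is still exposed to DNS rebinding (the host can resolve differently at fetch time); robust mitigations pin the validated IP for the actual request or enforce an allowlist of permitted domains.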
References
| Link | Resource |
|---|---|
| https://github.com/vllm-project/vllm/security/advisories/GHSA-pf3h-qjgv-vcpr | Patch, Vendor Advisory |
Configurations
History
20 Apr 2026, 18:31
| Type | Values Removed | Values Added |
|---|---|---|
| First Time | | Vllm<br>Vllm vllm |
| CPE | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
| References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-pf3h-qjgv-vcpr - Patch, Vendor Advisory |
06 Apr 2026, 16:16
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2026-04-06 16:16
Updated : 2026-04-20 18:31
NVD link : CVE-2026-34753
Mitre link : CVE-2026-34753
CVE.ORG link : CVE-2026-34753
Products Affected
vllm
- vllm
CWE
CWE-918
Server-Side Request Forgery (SSRF)
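CWE-918 describes flaws where a server fetches a URL supplied by an untrusted party. To make the attack surface concrete (the keys below are hypothetical and do not reproduce vLLM's actual batch schema), the danger is an attacker-chosen URL in a batch entry, which the pre-0.19.0 batch runner would fetch unchecked:

```python
# Illustrative batch-input entry; key names are invented for this sketch.
# The attacker-controlled URL points at an endpoint only reachable from
# inside the host's network, such as a cloud metadata service.
malicious_entry = {
    "custom_id": "req-1",
    "url": "http://169.254.169.254/latest/meta-data/",
}
```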
