vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to but not including 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error, and vLLM returns the error message to the client, leaking a heap address. With this leak, the ASLR search space is reduced from roughly 4 billion guesses to about 8. This vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
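To illustrate the leak mechanism described above: CPython's default object repr embeds the object's heap address, so an exception message that interpolates a repr and is returned verbatim to the client discloses that address. The sketch below shows the leak and a redaction pattern; `ImageDecoder`, `raw_error_message`, and `sanitize` are hypothetical names for illustration, not vLLM's actual code or patch.

```python
import re

class ImageDecoder:
    """Stand-in object whose default repr leaks its heap address."""
    pass

def raw_error_message() -> str:
    # CPython's default repr looks like
    # "<__main__.ImageDecoder object at 0x7f9c2c1d0e80>",
    # so interpolating it into an error message embeds a heap address.
    decoder = ImageDecoder()
    try:
        raise ValueError(f"cannot decode image with {decoder!r}")
    except ValueError as exc:
        return str(exc)  # returning this verbatim leaks the address

# Mitigation pattern (an assumption for illustration, not the actual
# vLLM fix): redact pointer-like tokens before the message leaves
# the server.
ADDR_RE = re.compile(r"0x[0-9a-fA-F]+")

def sanitize(message: str) -> str:
    return ADDR_RE.sub("0x<redacted>", message)
```

An attacker who learns one heap address can infer the randomized base of the allocation region, which is what collapses the ASLR guess count cited in the description.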
References
| Link | Resource |
|---|---|
| https://github.com/vllm-project/vllm/pull/31987 | Issue Tracking, Patch |
| https://github.com/vllm-project/vllm/pull/32319 | Issue Tracking, Patch |
| https://github.com/vllm-project/vllm/releases/tag/v0.14.1 | Release Notes |
| https://github.com/vllm-project/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv | Vendor Advisory |
History
23 Feb 2026, 18:19
| Type | Values Removed | Values Added |
|---|---|---|
| References | | https://github.com/vllm-project/vllm/pull/31987 - Issue Tracking, Patch |
| References | | https://github.com/vllm-project/vllm/pull/32319 - Issue Tracking, Patch |
| References | | https://github.com/vllm-project/vllm/releases/tag/v0.14.1 - Release Notes |
| References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv - Vendor Advisory |
| First Time | | Vllm, Vllm vllm |
| CPE | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
| Summary | | |
02 Feb 2026, 23:16
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2026-02-02 23:16
Updated : 2026-02-23 18:19
NVD link : CVE-2026-22778
Mitre link : CVE-2026-22778
CVE.ORG link : CVE-2026-22778
Products Affected
vllm
- vllm
CWE
CWE-532
Insertion of Sensitive Information into Log File
