vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
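The flaw class is simple to illustrate: an input tensor can have the expected number of dimensions (ndim) while its last dimension disagrees with the model's hidden size, so an ndim-only check accepts it and the mismatch only surfaces later as an unhandled exception inside the engine. The sketch below is not vLLM's actual code; the hidden size, function names, and the NumPy stand-in for torch tensors are all illustrative assumptions.

```python
import numpy as np

# Assumed model hidden dimension; in vLLM this comes from the model config.
HIDDEN_SIZE = 4096

def naive_accept(embeds: np.ndarray) -> bool:
    """Pre-patch-style check: only the number of dimensions is validated."""
    return embeds.ndim == 2  # expected layout: (num_tokens, hidden_size)

def validated_accept(embeds: np.ndarray) -> bool:
    """Post-patch-style check: the hidden dimension must also match."""
    return embeds.ndim == 2 and embeds.shape[-1] == HIDDEN_SIZE

# Attacker-controlled embedding: correct ndim, wrong hidden dimension.
bad = np.zeros((16, 1024), dtype=np.float32)

assert naive_accept(bad)          # slips past an ndim-only check ...
assert not validated_accept(bad)  # ... but is rejected once shape is validated

# Downstream, the malformed tensor fails only when combined with tensors of
# the real hidden size, raising an exception the serving loop did not expect:
text = np.zeros((16, HIDDEN_SIZE), dtype=np.float32)
try:
    _ = text + bad  # shapes (16, 4096) and (16, 1024) cannot broadcast
except ValueError:
    pass  # in the vulnerable engine this surfaced as a crash, not a 4xx reply
```

The fix in version 0.11.1 amounts to rejecting such inputs at request-validation time, before they reach model execution.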
References
| Link | Resource |
|---|---|
| https://github.com/vllm-project/vllm/commit/58fab50d82838d5014f4a14d991fdb9352c9c84b | Patch |
| https://github.com/vllm-project/vllm/pull/27204 | Issue Tracking, Patch, Vendor Advisory |
| https://github.com/vllm-project/vllm/pull/6613 | Issue Tracking |
| https://github.com/vllm-project/vllm/security/advisories/GHSA-pmqf-x6x8-p7qw | Mitigation, Vendor Advisory |
Configurations
Configuration 1
History
04 Dec 2025, 17:40
| Type | Values Removed | Values Added |
|---|---|---|
| References | | https://github.com/vllm-project/vllm/commit/58fab50d82838d5014f4a14d991fdb9352c9c84b - Patch |
| References | | https://github.com/vllm-project/vllm/pull/27204 - Issue Tracking, Patch, Vendor Advisory |
| References | | https://github.com/vllm-project/vllm/pull/6613 - Issue Tracking |
| References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-pmqf-x6x8-p7qw - Mitigation, Vendor Advisory |
| CVSS | v2 : v3 : | v2 : unknown v3 : 6.5 |
| First Time | | Vllm, Vllm vllm |
| CPE | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*, cpe:2.3:a:vllm:vllm:0.11.1:rc1:*:*:*:*:*:*, cpe:2.3:a:vllm:vllm:0.11.1:rc0:*:*:*:*:*:* |
21 Nov 2025, 02:15
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2025-11-21 02:15
Updated : 2025-12-04 17:40
NVD link : CVE-2025-62372
Mitre link : CVE-2025-62372
CVE.ORG link : CVE-2025-62372
Products Affected
vllm
- vllm
CWE
CWE-129
Improper Validation of Array Index
