vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.5.5 up to, but not including, 0.11.1, the /v1/chat/completions and /tokenize endpoints accept a chat_template_kwargs request parameter whose contents are used before being validated against the chat template. With crafted chat_template_kwargs values, it is possible to block the API server's processing for long periods, delaying all other requests. This issue has been patched in version 0.11.1.
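The weakness is easiest to see at the validation boundary. Below is a minimal sketch of the general defensive pattern: checking user-supplied template kwargs against the variables a chat template actually declares before anything is rendered. The function name, the `enable_thinking` key, and the use of `jinja2.meta` to derive the allow-list are illustrative assumptions, not the exact code in vLLM's patch.

```python
# Minimal sketch, assuming the fix is to validate user-supplied
# chat_template_kwargs against the chat template before rendering.
# Function name and the jinja2.meta-based allow-list are illustrative
# assumptions, not the exact vLLM implementation.
import jinja2
from jinja2 import meta


def validate_chat_template_kwargs(chat_template: str, kwargs: dict) -> dict:
    """Reject kwargs whose keys the template never references."""
    env = jinja2.Environment()
    # Parse the template and collect every variable it reads but does not
    # define itself; these are the only names a caller may supply.
    allowed = meta.find_undeclared_variables(env.parse(chat_template))
    unexpected = set(kwargs) - allowed
    if unexpected:
        raise ValueError(
            f"chat_template_kwargs not accepted by template: {sorted(unexpected)}"
        )
    return kwargs


# A request body like {"chat_template_kwargs": {"enable_thinking": true}}
# passes if the template reads enable_thinking; unrelated keys are rejected
# before they can influence rendering.
template = "{% if enable_thinking %}...{% endif %}"
validate_chat_template_kwargs(template, {"enable_thinking": True})  # ok
try:
    validate_chat_template_kwargs(template, {"chat_template": "{{ 1 }}"})
except ValueError as exc:
    print(exc)  # rejected: the template never reads 'chat_template'
```

The design point is that validation happens against the template itself, so the server never forwards attacker-chosen names into the rendering step.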
References
- https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610 (Product)
- https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814 (Product)
- https://github.com/vllm-project/vllm/commit/3ada34f9cb4d1af763fdfa3b481862a93eb6bd2b (Patch)
- https://github.com/vllm-project/vllm/pull/27205 (Issue Tracking)
- https://github.com/vllm-project/vllm/security/advisories/GHSA-69j4-grxj-j64p (Vendor Advisory)
Configurations
Configuration 1
- cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*
- cpe:2.3:a:vllm:vllm:0.11.1:rc0:*:*:*:*:*:*
- cpe:2.3:a:vllm:vllm:0.11.1:rc1:*:*:*:*:*:*
History
04 Dec 2025, 17:42

| Type | Values Removed | Values Added |
|---|---|---|
| First Time | | Vllm<br>Vllm vllm |
| References | | https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610 - Product |
| References | | https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814 - Product |
| References | | https://github.com/vllm-project/vllm/commit/3ada34f9cb4d1af763fdfa3b481862a93eb6bd2b - Patch |
| References | | https://github.com/vllm-project/vllm/pull/27205 - Issue Tracking |
| References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-69j4-grxj-j64p - Vendor Advisory |
| CPE | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*<br>cpe:2.3:a:vllm:vllm:0.11.1:rc0:*:*:*:*:*:*<br>cpe:2.3:a:vllm:vllm:0.11.1:rc1:*:*:*:*:*:* |
21 Nov 2025, 02:15
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2025-11-21 02:15
Updated : 2025-12-04 17:42
NVD link : CVE-2025-62426
Mitre link : CVE-2025-62426
CVE.ORG link : CVE-2025-62426
Products Affected
vllm
- vllm
CWE
CWE-770: Allocation of Resources Without Limits or Throttling
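As a generic illustration of this weakness class, the sketch below caps how long a request path will wait on any single unit of work. The helper name and the five-second budget are assumptions for illustration, not vLLM behavior.

```python
# Hedged sketch of a CWE-770 style control: bound the time the request
# path spends waiting on one unit of work. Names and the timeout value
# are illustrative, not taken from vLLM.
import concurrent.futures


def run_with_time_budget(fn, timeout_s: float = 5.0):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # A Python worker thread cannot be forcibly killed, so a real
        # deployment also needs input-size limits; this only stops other
        # callers from stalling behind runaway work.
        raise RuntimeError("request exceeded its time budget")
    finally:
        # Do not wait for the (possibly still running) worker to finish.
        pool.shutdown(wait=False)
```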
