vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python code from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend's code on the victim host. This vulnerability is fixed in 0.11.1.
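The flawed control flow described above can be sketched as follows. This is an illustrative simulation, not vLLM's actual code: the function name `resolve_config_class`, the `patched` flag, and the attacker repo names are hypothetical. The `"org/repo--module.Class"` form of the auto_map value mirrors the Transformers convention for pointing a dynamic class at code hosted in a different repository.

```python
# Illustrative sketch (not vLLM's real implementation) of how a config
# class that forwards an ``auto_map`` entry to dynamic-module loading can
# bypass an explicit ``trust_remote_code=False``.

def resolve_config_class(config_dict, trust_remote_code, patched):
    """Return where the config class would be loaded from.

    ``auto_map`` entries of the form ``"org/repo--module.Class"`` point at
    Python code in another repository; importing that module executes it.
    """
    auto_map = config_dict.get("auto_map", {}).get("AutoConfig")
    if auto_map is None:
        return "local"  # plain config, no dynamic code involved
    if patched and not trust_remote_code:
        # Fixed behavior (>= 0.11.1): refuse to fetch remote code.
        raise ValueError("auto_map requires trust_remote_code=True")
    # Vulnerable behavior: the remote module is fetched and executed
    # regardless of what the caller passed for trust_remote_code.
    return f"remote code from {auto_map.split('--')[0]}"


# A benign-looking frontend config whose auto_map points at a separate
# (here hypothetical) attacker-controlled backend repo.
frontend_config = {
    "model_type": "Nemotron_Nano_VL",
    "auto_map": {"AutoConfig": "attacker/backend--configuration_evil.EvilConfig"},
}

print(resolve_config_class(frontend_config, trust_remote_code=False, patched=False))
# → remote code from attacker/backend  (runs despite trust_remote_code=False)
```

The key point the sketch captures is that the trust decision is made (or skipped) at config-resolution time, before any user-visible model code runs, which is why setting `trust_remote_code=False` offered no protection in affected versions.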
References
| Link | Resource |
|---|---|
| https://github.com/vllm-project/vllm/commit/ffb08379d8870a1a81ba82b72797f196838d0c86 | Patch |
| https://github.com/vllm-project/vllm/pull/28126 | Issue Tracking |
| https://github.com/vllm-project/vllm/security/advisories/GHSA-8fr4-5q9j-m8gm | Vendor Advisory |
History
03 Dec 2025, 17:52
| Type | Values Removed | Values Added |
|---|---|---|
| First Time | | Vllm, Vllm vllm |
| CPE | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
| References | | https://github.com/vllm-project/vllm/commit/ffb08379d8870a1a81ba82b72797f196838d0c86 - Patch |
| References | | https://github.com/vllm-project/vllm/pull/28126 - Issue Tracking |
| References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-8fr4-5q9j-m8gm - Vendor Advisory |
01 Dec 2025, 23:15
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2025-12-01 23:15
Updated : 2025-12-03 17:52
NVD link : CVE-2025-66448
Mitre link : CVE-2025-66448
CVE.ORG link : CVE-2025-66448
Products Affected
vllm
- vllm
CWE
CWE-94
Improper Control of Generation of Code ('Code Injection')
