CVE-2026-22773

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
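The likely failure mode (inferred from the description, not confirmed detail) is that a 1x1 input is too small for the vision preprocessor to form even a single image patch, producing the tensor dimension mismatch. As a minimal defense-in-depth sketch, a front end could reject degenerate images before they reach the engine. This is not vLLM's actual patch; the names MIN_DIM and validate_image and the threshold value are hypothetical:

```python
# Hypothetical pre-validation guard (not vLLM code): reject degenerate
# images before they reach the multimodal engine.
import io

from PIL import Image

MIN_DIM = 28  # assumed lower bound; real vision patch sizes vary by model


def validate_image(data: bytes) -> Image.Image:
    """Decode an uploaded image and reject sizes too small to patch."""
    try:
        probe = Image.open(io.BytesIO(data))
        probe.verify()  # integrity check only; invalidates the object
    except Exception as exc:
        raise ValueError(f"unreadable image: {exc}") from exc
    img = Image.open(io.BytesIO(data))  # reopen: verify() consumed the probe
    width, height = img.size
    if width < MIN_DIM or height < MIN_DIM:
        raise ValueError(
            f"image {width}x{height} is below the {MIN_DIM}x{MIN_DIM} minimum"
        )
    return img
```

The real fix is upgrading to vLLM 0.12.0 or later; a guard like this only reduces exposure for deployments that cannot upgrade immediately.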
References

https://github.com/vllm-project/vllm/security/advisories/GHSA-grg2-63fw-f2qr (Exploit, Vendor Advisory)
Configurations

Configuration 1

cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*

History

27 Jan 2026, 21:03

CPE added: cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*
First time seen: Vllm; Vllm vllm
References updated: https://github.com/vllm-project/vllm/security/advisories/GHSA-grg2-63fw-f2qr now tagged Exploit, Vendor Advisory

10 Jan 2026, 07:16

New CVE

Information

Published : 2026-01-10 07:16

Updated : 2026-01-27 21:03


NVD link : CVE-2026-22773

Mitre link : CVE-2026-22773

CVE.ORG link : CVE-2026-22773



Products Affected

vllm
  • vllm
CWE

CWE-770 : Allocation of Resources Without Limits or Throttling
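CWE-770 covers accepting work without bounding the resources it consumes. A generic illustration of the mitigation pattern, not vLLM code and with all names hypothetical, is to cap concurrent inference requests so a flood of expensive or malformed inputs degrades service gracefully instead of taking the server down:

```python
# Generic CWE-770 mitigation sketch (hypothetical names; not vLLM code):
# bound concurrent inference so unbounded requests cannot exhaust the server.
import asyncio

MAX_CONCURRENT = 8  # assumed capacity; tune to available hardware

_slots = asyncio.Semaphore(MAX_CONCURRENT)


async def handle_request(run_inference, payload):
    """Fail fast when all slots are busy instead of queueing without bound."""
    if _slots.locked():  # heuristic pre-check; races are tolerable here
        raise RuntimeError("server busy: concurrency limit reached")
    async with _slots:
        return await run_inference(payload)
```

This addresses the weakness class broadly; the specific crash in this CVE still requires input validation or the upstream 0.12.0 patch.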