llama.cpp is a C/C++ library for LLM inference. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this yields a full ASLR bypass and remote code execution. No authentication is required, only TCP access to the RPC server port. This issue has been patched in version b8492.
References
| Link | Resource |
|---|---|
| https://github.com/ggml-org/llama.cpp/commit/39bf0d3c6a95803e0f41aaba069ffbee26721042 | Patch |
| https://github.com/ggml-org/llama.cpp/pull/20908 | Issue Tracking, Patch |
| https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-j8rj-fmpv-wcxw | Exploit, Vendor Advisory |
Configurations
History
30 Apr 2026, 19:18
| Type | Values Removed | Values Added |
|---|---|---|
| First Time | | Ggml; Ggml llama.cpp |
| CPE | | cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:* |
| References | | https://github.com/ggml-org/llama.cpp/commit/39bf0d3c6a95803e0f41aaba069ffbee26721042 - Patch |
| References | | https://github.com/ggml-org/llama.cpp/pull/20908 - Issue Tracking, Patch |
| References | | https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-j8rj-fmpv-wcxw - Exploit, Vendor Advisory |
01 Apr 2026, 18:16
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2026-04-01 18:16
Updated : 2026-04-30 19:18
NVD link : CVE-2026-34159
Mitre link : CVE-2026-34159
CVE.ORG link : CVE-2026-34159
Products Affected
ggml
- llama.cpp
CWE
CWE-119
Improper Restriction of Operations within the Bounds of a Memory Buffer
