Total: 3 CVEs
CVE | Vendor | Product | Updated | CVSS v2 | CVSS v3
---|---|---|---|---|---
CVE-2024-41130 | Ggml | Llama.cpp | 2025-08-27 | N/A | 5.4 MEDIUM
CVE-2025-52566 | Ggml | Llama.cpp | 2025-08-27 | N/A | 8.6 HIGH
CVE-2025-49847 | Ggml | Llama.cpp | 2025-08-27 | N/A | 8.8 HIGH

CVE-2024-41130: llama.cpp provides LLM inference in C/C++. Prior to b3427, llama.cpp contains a null pointer dereference in gguf_init_from_file. This vulnerability is fixed in b3427.
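The fix for this CVE is inside gguf_init_from_file itself, but the function also returns NULL on any parse failure, so code loading untrusted GGUF files should always check its result. Below is a minimal caller-side sketch against the public ggml GGUF API; the header location is an assumption (older ggml releases declare these functions in ggml.h rather than gguf.h):

```cpp
#include <cstdio>
#include "gguf.h"  // GGUF API header; older ggml versions declare this API in ggml.h

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    struct gguf_init_params params = {
        /*.no_alloc =*/ true,  // parse metadata only, skip tensor data allocation
        /*.ctx      =*/ NULL,  // no ggml context needed for a metadata check
    };

    // Returns NULL on any parse failure; with b3427 or later, a malformed
    // file fails here instead of crashing the process.
    struct gguf_context * ctx = gguf_init_from_file(argv[1], params);
    if (ctx == NULL) {
        fprintf(stderr, "failed to parse GGUF file: %s\n", argv[1]);
        return 1;
    }

    printf("parsed %lld key/value pairs\n", (long long) gguf_get_n_kv(ctx));
    gguf_free(ctx);
    return 0;
}
```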
CVE-2025-52566: llama.cpp provides inference for several LLM models in C/C++. Prior to version b5721, llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) contained a signed vs. unsigned integer overflow in the size comparison that guards token copying, allowing carefully crafted text input to heap-overflow the inference engine during tokenization. This issue has been patched in version b5721.
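The bug class is easiest to see in isolation. The following is a minimal sketch, not the actual llama.cpp code: copy_tokens_buggy and copy_tokens_fixed are hypothetical helpers that mirror the reported comparison pattern, in which a size_t result count narrowed to int32_t can wrap negative and defeat the capacity guard.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper mirroring the pre-b5721 pattern: the capacity guard
// compares in the signed domain after narrowing a size_t.
int32_t copy_tokens_buggy(const std::vector<int32_t> & res,
                          int32_t * out, int32_t n_out_max) {
    // BUG: if res.size() > INT32_MAX, (int32_t) res.size() wraps negative,
    // "n_out_max < negative" is false, and the rejection is bypassed.
    if (n_out_max < (int32_t) res.size()) {
        return -((int32_t) res.size());
    }
    for (size_t i = 0; i < res.size(); i++) {
        out[i] = res[i];  // writes past 'out' once the guard is defeated
    }
    return (int32_t) res.size();
}

// Fixed pattern: compare in the unsigned domain before any narrowing cast.
int32_t copy_tokens_fixed(const std::vector<int32_t> & res,
                          int32_t * out, int32_t n_out_max) {
    if (n_out_max < 0 || res.size() > (size_t) n_out_max) {
        return -1;  // buffer too small; caller must retry with more space
    }
    for (size_t i = 0; i < res.size(); i++) {
        out[i] = res[i];
    }
    return (int32_t) res.size();
}
```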
CVE-2025-49847: llama.cpp provides inference for several LLM models in C/C++. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy in llama.cpp/src/vocab.cpp (llama_vocab::impl::token_to_piece()) casts a very large size_t token length to int32_t, so the length check if (length < (int32_t)size) is bypassed; memcpy is then called with the oversized size, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potential code execution. This issue has been patched in version b5662.
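Again as a minimal sketch rather than the actual llama.cpp code: try_copy_buggy and try_copy_fixed below are hypothetical helpers mirroring the reported _try_copy pattern, showing how the int32_t narrowing defeats the length check while memcpy still receives the full size_t value.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical helper mirroring the pre-b5662 pattern reported for
// llama_vocab::impl::token_to_piece(): 'size' is the token text length
// taken from the (attacker-supplied) model vocabulary, 'length' is the
// capacity of the caller's buffer.
int32_t try_copy_buggy(const char * token, size_t size,
                       char * buf, int32_t length) {
    // BUG: for size > INT32_MAX the cast yields a negative value, so the
    // "buffer too small" rejection below is bypassed...
    if (length < (int32_t) size) {
        return -1;
    }
    // ...and memcpy still copies the full size_t count, corrupting the heap.
    memcpy(buf, token, size);
    return (int32_t) size;
}

// Fixed pattern: compare in the size_t domain, never narrowing first.
int32_t try_copy_fixed(const char * token, size_t size,
                       char * buf, int32_t length) {
    if (length < 0 || (size_t) length < size) {
        return -1;  // buffer too small (or size not representable)
    }
    memcpy(buf, token, size);
    return (int32_t) size;
}
```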