Georgi Gerganov

5 exploits · Active since Jul 2024
CVE-2024-41130 MEDIUM
Ggml Llama.cpp < b3427 - NULL Pointer Dereference
llama.cpp provides LLM inference in C/C++. Prior to b3427, llama.cpp contains a null pointer dereference in gguf_init_from_file. This vulnerability is fixed in b3427.
CVSS 5.4
CVE-2024-42477 MEDIUM
llama.cpp - Buffer Overflow
llama.cpp provides LLM inference in C/C++. The unsafe `type` member in the `rpc_tensor` structure can cause a global buffer overflow (`global-buffer-overflow` in AddressSanitizer terms), which may leak memory contents. The vulnerability is fixed in b3561.
CVSS 5.3
CVE-2024-42478 MEDIUM
llama.cpp - Memory Corruption
llama.cpp provides LLM inference in C/C++. The unsafe `data` pointer member in the `rpc_tensor` structure allows reads from arbitrary addresses. This vulnerability is fixed in b3561.
CVSS 5.3
CVE-2024-42479 CRITICAL
llama.cpp - Buffer Overflow
llama.cpp provides LLM inference in C/C++. The unsafe `data` pointer member in the `rpc_tensor` structure allows writes to arbitrary addresses. This vulnerability is fixed in b3561.
CVSS 10.0
CVE-2025-49847 HIGH
Ggml Llama.cpp < b5662 - Buffer Overflow
llama.cpp provides LLM inference in C/C++. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper `_try_copy` in `llama_vocab::impl::token_to_piece()` (llama.cpp/src/vocab.cpp) casts a very large `size_t` token length to `int32_t`, so the length check `if (length < (int32_t) size)` is bypassed. `memcpy` is then called with the original oversized length, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potential code execution. The issue is patched in version b5662.
CVSS 8.8