Description
llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
Scores
CVSS v4
8.9
EPSS
0.0011
EPSS Percentile
28.3%
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:P
CISA SSVC
Vulnrichment
Exploitation
poc
Automatable
no
Technical Impact
partial
Details
CWE
CWE-122 (Heap-based Buffer Overflow)
CWE-680 (Integer Overflow to Buffer Overflow)
Status
published
Products (1)
ggml-org/llama.cpp
< 26a48ad699d50b6268900062661bd22f3e792579
Published
Jul 10, 2025
Tracked Since
Feb 18, 2026