Description
llama.cpp provides LLM inference in C/C++. The `type` member of the `rpc_tensor` structure is used without validation when tensors are deserialized on the RPC server; an out-of-range value can cause a global buffer overflow (out-of-bounds read) and may leak memory contents. The vulnerability is fixed in release b3561.
References
Vendor Advisory: https://github.com/ggerganov/llama.cpp/security/advisories/GHSA-mqp6-7pv6-fqjf
Scores
CVSS v3.1: 5.3 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N)
EPSS: 0.0027 (50.4th percentile)
Attack Vector: NETWORK
CISA SSVC (Vulnrichment)
Exploitation: poc
Automatable: yes
Technical Impact: partial
Details
CWE: CWE-125 (Out-of-bounds Read), CWE-401 (Missing Release of Memory after Effective Lifetime)
Status: published
Products (2)
ggerganov/llama.cpp < b3561
ggml/llama.cpp < b3561
Published: Aug 12, 2024
Tracked Since: Feb 18, 2026