CVE-2025-53630
LLM models - Memory Corruption
Title source: llmDescription
llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. The vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
Scores
EPSS
0.0006 (0.06%)
EPSS Percentile
19.0%
Classification
CWE
CWE-122
CWE-680
Status
draft
Timeline
Published
Jul 10, 2025
Tracked Since
Feb 18, 2026