
model: fix llama arch implementation #17665

Merged
ngxson merged 1 commit into ggml-org:master from giladgd:fixLlamaArch on Dec 1, 2025

Conversation

giladgd (Contributor) commented Dec 1, 2025

No description provided.

giladgd requested a review from CISC as a code owner on December 1, 2025 20:14
giladgd (Contributor, author) commented Dec 1, 2025

@ngxson The llama arch seems to have been broken since #17644; I noticed it because tests I run on some llama models started failing.
I think you meant to copy the `ml.get_key(LLM_KV_ATTENTION_LAYERNORM_RMS_EPS, hparams.f_norm_rms_eps);` line instead of moving it.
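
For context, a minimal sketch of where that call sits: llama.cpp reads per-architecture hyperparameters in a switch over the model arch, and the RMS-norm epsilon key has to stay in the llama branch. The function shape, case label, and surrounding comments below are illustrative assumptions; only the `ml.get_key(...)` call itself is the line quoted above.

```cpp
// Illustrative sketch (not the exact llama.cpp source): per-arch hparams
// loading is assumed to look roughly like this switch.
void llama_model::load_hparams(llama_model_loader & ml) {
    // ... keys shared by all architectures are read before this switch ...
    switch (arch) {
        case LLM_ARCH_LLAMA:
            {
                // The fix keeps (copies) this call in the llama branch instead
                // of moving it elsewhere; without it, hparams.f_norm_rms_eps is
                // never read from the GGUF metadata for llama models.
                ml.get_key(LLM_KV_ATTENTION_LAYERNORM_RMS_EPS, hparams.f_norm_rms_eps);
                // ... remaining llama-specific keys ...
            } break;
        // ... other architectures read their own copies of the keys they need ...
        default:
            break;
    }
}
```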

ngxson (Collaborator) left a comment:

hmm yeah that was a mistake, thanks for the fix

ngxson merged commit 00c361f into ggml-org:master on Dec 1, 2025
60 of 65 checks passed
giladgd deleted the fixLlamaArch branch on December 1, 2025 20:26
Anico2 added a commit to Anico2/llama.cpp that referenced this pull request Jan 15, 2026
blime4 pushed a commit to blime4/llama.cpp that referenced this pull request Feb 5, 2026