fix(model): convert bool mask_cache to float additive mask for softcapping #2235
Open

nuthalapativarun wants to merge 3 commits into

Conversation
…pping

When KV cache is active, `build_mask_cache()` returns a `torch.bool` tensor (`True` = keep). In `scaled_dot_product_attention` the bool mask was added directly to scores, contributing 0 or 1 instead of 0 or -inf, which breaks causal masking for models that use `attention_logit_softcapping` (e.g. Gemma 2).

Add an `elif` branch that converts the boolean mask to an additive float mask (`True → 0.0`, `False → -inf`) before the scores addition. The fix is applied to both `CausalSelfAttention` and `MultiheadLatentAttention`.

Fixes Lightning-AI#1672
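The effect described in the commit message can be sketched with a toy example (this is a hypothetical minimal reproduction, not the litgpt code itself): adding a bool keep-mask to float scores promotes `True`/`False` to `1`/`0`, so future positions are merely *not boosted* rather than driven to `-inf`.

```python
import torch

# Toy scores for one query attending over 4 key positions (last 2 are "future").
scores = torch.zeros(4)
keep = torch.tensor([True, True, False, False])  # True = attend (lower triangle)

# Buggy behavior: the bool mask is promoted to 0/1 and added, so masked
# positions contribute 0 instead of -inf and survive the softmax.
buggy = torch.softmax(scores + keep, dim=-1)

# Fixed behavior: convert to an additive float mask first (True→0.0, False→-inf).
additive = torch.zeros_like(scores).masked_fill(~keep, float("-inf"))
fixed = torch.softmax(scores + additive, dim=-1)

print(buggy)  # future positions still receive non-zero attention weight
print(fixed)  # future positions receive exactly zero weight
```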
Azure Pipelines: 4 pipeline(s) require an authorized user to comment /azp run to run. |
Author

Hi! Just checking in — CI appears to be waiting on an authorized user to comment /azp run.
What does this PR do?

When the KV cache is active, `build_mask_cache()` returns a `torch.bool` tensor where `True` indicates a position that should be attended to (lower triangle). In `scaled_dot_product_attention`, for models that use `attention_logit_softcapping` (e.g. Gemma 2), this boolean mask was added directly to the softcapped scores.

This breaks causal masking: the mask contributed 0 or +1 instead of -inf or 0, so softmax assigned non-zero attention weight to future tokens that should be completely masked out.

Fix
Add an `elif` branch that converts the incoming boolean mask to a float additive mask before the addition (`True → 0.0`, `False → -inf`). The same fix is applied to both `CausalSelfAttention` and `MultiheadLatentAttention`.

Testing
Added `test_attention_mask_bool_to_float_with_softcapping`, which verifies that `mask_cache` is indeed `torch.bool` (pre-condition of the bug).

Fixes #1672
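A minimal sketch of the conversion the `elif` branch performs, inside a simplified softcapped attention (function name and structure are assumptions for illustration, not the exact litgpt implementation):

```python
import torch

def sdpa_with_softcap(q, k, v, mask=None, softcap=50.0):
    # Simplified attention with Gemma-2-style logit softcapping (sketch only).
    scale = q.size(-1) ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale
    scores = softcap * torch.tanh(scores / softcap)  # logit softcapping
    if mask is not None:
        if mask.dtype == torch.bool:
            # The fix: convert the boolean keep-mask (True = attend) into an
            # additive float mask (True → 0.0, False → -inf) before adding.
            mask = torch.zeros_like(mask, dtype=scores.dtype).masked_fill(
                ~mask, float("-inf")
            )
        scores = scores + mask
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Usage: a causal bool mask now masks future positions correctly.
torch.manual_seed(0)
q, k, v = (torch.randn(1, 4, 8) for _ in range(3))
causal = torch.tril(torch.ones(4, 4, dtype=torch.bool))
out = sdpa_with_softcap(q, k, v, mask=causal)
```

With the additive mask, the first query can only attend to the first key, so its output is exactly the first value row; with the buggy bool addition it would be a mixture of all four.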