[Qwen3.5 MoE] Add hybrid decoder model (GatedDeltaNet + full attention + MoE) #2545

Open

gali-leilei wants to merge 2 commits into pytorch:main from gali-leilei:upstream-qwen3-5-moe

Conversation

@gali-leilei

Summary

Upstreams Qwen3.5 MoE support from the internal fork. Qwen3.5 MoE is a hybrid decoder: GatedDeltaNet (linear attention) for most layers + full attention every N layers, with MoE + gated shared expert FFN. Validated on real 35B-A3B checkpoints (HF↔DCP roundtrip: 19/19 tensors exact) and L20X GPUs (debugmodel + 35B-A3B, 10 steps, no errors).
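
A minimal sketch of what the hybrid schedule described above looks like, i.e. which layer indices get full attention vs. GatedDeltaNet (the config field names and the interval of 4 are illustrative assumptions, not the PR's actual model args):

```python
# Illustrative only: field names and the interval are assumptions, not the PR's API.
from dataclasses import dataclass


@dataclass
class HybridScheduleArgs:
    n_layers: int = 48
    full_attention_interval: int = 4  # every 4th layer uses full attention


def layer_types(args: HybridScheduleArgs) -> list[str]:
    """Assign a block type to each layer index."""
    return [
        "full_attention"
        if (i + 1) % args.full_attention_interval == 0
        else "gated_deltanet"
        for i in range(args.n_layers)
    ]


print(layer_types(HybridScheduleArgs(n_layers=8)))
# ['gated_deltanet', 'gated_deltanet', 'gated_deltanet', 'full_attention',
#  'gated_deltanet', 'gated_deltanet', 'gated_deltanet', 'full_attention']
```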

New model: torchtitan/models/qwen3_5_moe/

  • model.py: OffsetRMSNorm, RMSNormGated, GatedDeltaNet (FLA optional, torch_naive fallback; see the recurrence sketch after this list), Attention (output gating + partial RoPE), TransformerBlock, Model
  • parallelize.py: TP/FSDP/CP/EP support. DTensor-safe wrappers for depthwise conv1d (_DTensorSafeConv1d) and FLA kernel dispatch (_install_dtensor_safe_dispatch). Registers softplus as DTensor pointwise op.
  • state_dict_adapter.py: HF↔TorchTitan conversion. Handles the expert gate_up_proj [E,2I,D] ↔ w1+w3 split/concat (see the split/concat sketch after this list).
  • config_registry.py: debugmodel, 35b-a3b, 35b-a3b-sdpa, 35b-a3b-varlen training configs
  • __init__.py: model configs (debugmodel, 35b-a3b, 122b-a10b, 397b-a17b, 397B_A19B), build(), model_registry()
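
The torch_naive fallback mentioned for model.py is a pure-PyTorch path used when the FLA (flash-linear-attention) kernels are unavailable. Below is a minimal recurrent sketch of the gated delta rule such a fallback has to compute, following the standard Gated DeltaNet formulation; the function name, tensor shapes, and gate conventions are assumptions, not the PR's actual code:

```python
import torch


def gated_delta_rule_naive(q, k, v, g, beta):
    """Recurrent reference for the gated delta rule (assumed shapes below).

    q, k: [B, H, T, Dk]   v: [B, H, T, Dv]
    g:    [B, H, T]  log decay (<= 0), so exp(g) is the per-step gate
    beta: [B, H, T]  write strength in [0, 1]
    Returns o: [B, H, T, Dv].
    """
    B, H, T, Dk = k.shape
    Dv = v.shape[-1]
    S = k.new_zeros(B, H, Dv, Dk)  # per-head state mapping keys -> values
    outs = []
    for t in range(T):
        k_t, v_t, q_t = k[:, :, t], v[:, :, t], q[:, :, t]
        a_t = g[:, :, t].exp()[..., None, None]   # decay gate
        b_t = beta[:, :, t][..., None, None]      # write strength
        S = a_t * S                               # decay old associations
        v_pred = torch.einsum("bhvd,bhd->bhv", S, k_t)
        # delta rule: write only the prediction error for key k_t
        S = S + b_t * (v_t - v_pred).unsqueeze(-1) * k_t.unsqueeze(-2)
        outs.append(torch.einsum("bhvd,bhd->bhv", S, q_t))
    return torch.stack(outs, dim=2)
```

This per-token loop is only a correctness reference; a fused kernel such as FLA's evaluates the same recurrence chunk-wise in parallel.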
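
And a toy sketch of the gate_up_proj conversion handled by state_dict_adapter.py: HF packs each expert's gate and up projections into one [E,2I,D] tensor, while TorchTitan keeps separate w1/w3 weights. The ordering (gate first, then up) is an assumption here; the adapter defines the actual key mapping:

```python
import torch

E, I, D = 4, 16, 8  # num experts, intermediate dim, hidden dim (toy sizes)

# HF layout: gate and up stacked along dim 1 -> [E, 2I, D]
gate_up_proj = torch.randn(E, 2 * I, D)

# HF -> TorchTitan: split into w1 (gate) and w3 (up), each [E, I, D]
w1, w3 = gate_up_proj.split(I, dim=1)

# TorchTitan -> HF: concatenate back in the same order
roundtrip = torch.cat([w1, w3], dim=1)
assert torch.equal(roundtrip, gate_up_proj)
```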

Related changes

  • torchtitan/models/__init__.py: add "qwen3_5_moe" to _supported_models
  • tests/integration_tests/models.py: qwen3_5_moe FSDP+TP+EP integration test (4 GPUs)

Test plan

  • pre-commit run --all-files — flake8, ufmt, pydoclint, codespell all pass
  • Integration test on 4 GPUs: --module qwen3_5_moe --config qwen3_5_moe_debugmodel with FSDP+TP+EP

🤖 Generated with Claude Code

gali-leilei and others added 2 commits March 11, 2026 14:08
…n + MoE)

Upstreams Qwen3.5 MoE support from the internal fork. This is a hybrid
decoder: GatedDeltaNet (linear attention) for most layers + full attention
every N layers, with MoE + gated shared expert FFN.

## New model: torchtitan/models/qwen3_5_moe/
- model.py: OffsetRMSNorm, RMSNormGated, GatedDeltaNet (FLA optional,
  torch_naive fallback), Attention (output gating + partial RoPE),
  TransformerBlock, Model. Uses nn.init.trunc_normal_ (no common/utils.py dep).
- parallelize.py: TP/FSDP/CP/EP support. DTensor-safe wrappers for
  depthwise conv1d (_DTensorSafeConv1d) and FLA kernel dispatch
  (_install_dtensor_safe_dispatch). Registers softplus as DTensor pointwise op.
- state_dict_adapter.py: HF<->TorchTitan conversion. Handles expert
  gate_up_proj [E,2I,D] <-> w1+w3 split/concat.
- config_registry.py: debugmodel, 35b-a3b, 35b-a3b-sdpa, 35b-a3b-varlen configs.
- __init__.py: model configs (debugmodel, 35b-a3b, 122b-a10b, 397b-a17b,
  397B_A19B), build(), model_registry().

## Related changes
- torchtitan/models/__init__.py: add "qwen3_5_moe" to _supported_models
- torchtitan/models/qwen3/config_registry.py: add qwen3_moe_30b_a3b() config
- torchtitan/tools/utils.py: add L20/L20X GPU peak flops (989 TFLOPS, same as H200)
- tests/unit_tests/test_qwen3_5_moe.py: construction + forward + state dict tests
- tests/integration_tests/models.py: qwen3_5_moe FSDP+TP+EP integration test

Validated on real 35B-A3B checkpoints (HF<->DCP roundtrip: 19/19 tensors
exact) and L20X GPUs (debugmodel + 35B-A3B, 10 steps, no errors).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…py L20 flops

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@meta-cla

meta-cla bot commented Mar 11, 2026

Hi @gali-leilei!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

meta-cla bot added the CLA Signed label Mar 11, 2026
@meta-cla

meta-cla bot commented Mar 11, 2026

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
