
[Dev] fix(moe): Support HybridEP and reduce memory overhead for 1F1B A2A overlap #2201

Merged

yanring merged 47 commits into NVIDIA:dev from lhb8125:hongbinl/1f1b_hybridep on Dec 10, 2025

Conversation

@lhb8125
Contributor

lhb8125 commented Nov 11, 2025

What does this PR do?

PR for main

  • Replace enable_deepep with use_flex_dispatcher so that DeepEP and HybridEP are handled the same way in the 1F1B A2A overlap path.
  • Add destructors in the 1F1B A2A overlap path to release references to tensors, which reduces memory overhead (see the sketch after this list).
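
A minimal, hedged PyTorch sketch of the two changes above (this is not the PR's actual code): use_flex_dispatcher is the config attribute named in the first bullet, while uses_flex_dispatcher, OverlapBuffer, and the surrounding structure are hypothetical and exist only to illustrate the pattern.

import torch


def uses_flex_dispatcher(config) -> bool:
    # First bullet: check a single `use_flex_dispatcher` flag instead of a
    # DeepEP-specific `enable_deepep` flag, so DeepEP and HybridEP follow the
    # same code path in the 1F1B A2A overlap schedule. `config` here is a
    # stand-in object, not Megatron's real config class.
    return bool(getattr(config, "use_flex_dispatcher", False))


class OverlapBuffer:
    """Hypothetical holder for tensors kept alive during 1F1B A2A overlap."""

    def __init__(self, tensors):
        self.tensors = list(tensors)

    def release(self):
        # Second bullet: drop tensor references (and free their CUDA storage)
        # as soon as they are no longer needed, instead of keeping them alive
        # until the whole schedule finishes.
        for t in self.tensors:
            if t is not None and t.is_cuda:
                # Returns the memory to the caching allocator immediately;
                # the tensor object survives with a zero-sized storage.
                t.untyped_storage().resize_(0)
        self.tensors.clear()

    def __del__(self):
        # Destructor as a safety net: buffers are also released when the
        # holder itself goes out of scope.
        self.release()

The explicit untyped_storage().resize_(0) call is the same release mechanism that appears in the reviewed diff hunk later in this conversation.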

⚠️ For major changes (either in lines of code or in impact), please make sure to first share and discuss a design doc with the team.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

lhb8125 and others added 16 commits September 3, 2025 03:30
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@lhb8125 lhb8125 requested review from a team as code owners November 11, 2025 02:13
@copy-pr-bot

copy-pr-bot bot commented Nov 11, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@yanring
Contributor

yanring commented Nov 11, 2025

Thanks for the PR. Please mark the title with [Dev] fix(moe): xxx and label this PR with module:moe and dev.

Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@Victarry Victarry added the dev branch label Nov 11, 2025
@lhb8125 lhb8125 added the module: moe and Expert Review labels Nov 12, 2025
@lhb8125 lhb8125 changed the title Support HybridEP and reduce memory overhead for 1F1B A2A overlap [Dev] fix(moe): Support HybridEP and reduce memory overhead for 1F1B A2A overlap Nov 12, 2025
@lhb8125 lhb8125 added this to the Core 0.16 milestone Nov 12, 2025
@lhb8125
Contributor Author

lhb8125 commented Nov 12, 2025

/ok to test 32fc988

lhb8125 and others added 2 commits December 1, 2025 21:53
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@lhb8125
Contributor Author

lhb8125 commented Dec 2, 2025

/ok to test 776d224

Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@lhb8125
Contributor Author

lhb8125 commented Dec 2, 2025

/ok to test 487eea9

@lhb8125
Contributor Author

lhb8125 commented Dec 2, 2025

@yanring @Victarry Could you give this PR a final review? We made some modifications after the previous changes.

Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@lhb8125
Contributor Author

lhb8125 commented Dec 2, 2025

/ok to test c568c37

@yanring
Contributor

yanring commented Dec 5, 2025

@lhb8125
Contributor Author

lhb8125 commented Dec 5, 2025

/ok to test 36648e3

if g is not None:
    # Tell the CUDA caching allocator that `g` is still in use on the side
    # stream, so its memory is not reclaimed and reused before the work queued
    # on that stream has finished.
    g.record_stream(self.stream)
    if not self.delay_grads_release:
        # Free the underlying storage right away to cut peak memory; the
        # tensor object survives but holds a zero-sized storage.
        g.untyped_storage().resize_(0)
Contributor

Could you add some explanation here?

Contributor

Fixed in lhb8125#50, @lhb8125 can you help take a look~

Contributor Author

Merged, thanks!

"""Delay the weight gradient computation to improve batch-level communication overlapping"""

ep_overlap_early_attn_memory_release: bool = False
"""Release the memory of the attention module early in EP overlap. Note this flag has
Contributor

This description is a bit vague—when exactly should users enable or disable this feature? Also, the connection to overlap_moe_expert_parallel_comm isn't clear here, which will likely confuse users.

Contributor

Fixed in lhb8125#50, @lhb8125 can you help take a look~

Contributor Author

Merged, thanks!

@lhb8125
Contributor Author

lhb8125 commented Dec 8, 2025

/ok to test 0708cc1

Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@lhb8125
Contributor Author

lhb8125 commented Dec 8, 2025

/ok to test 2cfaec1

lhb8125 and others added 2 commits December 8, 2025 22:46
Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@lhb8125
Contributor Author

lhb8125 commented Dec 8, 2025

/ok to test 0f8663b

@lhb8125
Contributor Author

lhb8125 commented Dec 8, 2025

/ok to test 97de523

Signed-off-by: Hongbin Liu <hongbinl@nvidia.com>
@lhb8125
Contributor Author

lhb8125 commented Dec 8, 2025

/ok to test 12a2a22


Labels

dev branch, Expert Review, module: moe


4 participants