
Add option for LoRA with Transformer Engine op fuser#14411

Merged
gautham-kollu merged 34 commits into NVIDIA-NeMo:main from timmoon10:fused-lora
Sep 10, 2025
Conversation

Collaborator

@timmoon10 timmoon10 commented Aug 6, 2025

What does this PR do ?

This PR adds a variant of the LoRALinear module using Transformer Engine's operation fuser. It also adds logic in the existing LoRALinear module so that it can call the fused variant. This is an experimental feature.

Collection: NLP

Changelog

  • Add TEFusedLoRALinear module that implements LoRA with TE op fuser.
  • Add logic in LoRALinear module to internally call TEFusedLoRALinear.

Usage

Set use_transformer_engine_op_fuser=True in the model config (see #13776).

from nemo.collections.llm import Llama2Config70B
from nemo.collections.llm.api import _setup
from nemo.collections.llm.gpt.model import LlamaModel
from nemo.collections.llm.peft.lora import LoRA

config = Llama2Config70B(..., use_transformer_engine_op_fuser=True)
model = LlamaModel(config, ...)
peft = LoRA(...)
_setup(model=model, model_transform=peft, ...)
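For context, the computation that LoRA adds around a linear layer can be sketched in plain NumPy. This is an illustrative sketch only, not the NeMo or Transformer Engine API; all names and shapes here are hypothetical, and dropout is omitted. The op fuser's role is to fuse the extra GEMMs and the final add with the base layer's operations.

```python
import numpy as np

# LoRA forward pass (sketch): y = x @ W.T + (alpha / r) * x @ A.T @ B.T
# (dropout on the adapter input omitted for brevity)
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 32, 4, 8

x = rng.normal(size=(2, d_in))      # input activations
W = rng.normal(size=(d_out, d_in))  # frozen base weight
A = rng.normal(size=(r, d_in))      # trainable low-rank "in" projection
B = np.zeros((d_out, r))            # trainable "out" projection, zero-initialized

y = x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# With B zero-initialized, LoRA starts as an exact no-op on the base layer.
assert np.allclose(y, x @ W.T)
```

The final add of the adapter output onto the base output is one of the operations a fuser can fold into an adjacent GEMM, which is the fusion the performance discussion below refers to.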

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

timmoon10 and others added 8 commits August 4, 2025 17:31
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Missing all-gather op

Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: timmoon10 <timmoon10@users.noreply.github.com>
@timmoon10 timmoon10 added the CI label Aug 6, 2025
@timmoon10 timmoon10 marked this pull request as draft August 8, 2025 02:34
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <tmoon@nvidia.com>
@github-actions github-actions bot removed the CI label Aug 12, 2025
@timmoon10 timmoon10 marked this pull request as ready for review August 12, 2025 05:08
@timmoon10 timmoon10 added the CI label Aug 12, 2025
@gautham-kollu gautham-kollu requested a review from cuichenx August 12, 2025 18:48
Collaborator

@cuichenx cuichenx left a comment


Thanks for the PR! Do you have any data on the performance benefits of fused LoRA?

@github-actions github-actions bot removed the CI label Aug 14, 2025
Signed-off-by: timmoon10 <timmoon10@users.noreply.github.com>
@timmoon10
Copy link
Collaborator Author

Currently I don't see a significant difference between the fused and unfused implementations (174 ms/step with a small Llama model on B200). We skip one add (fused into the dgrad GEMM), but the dropout implementation is quite unoptimized. We should see speedups as more fusions are supported in TE.

ko3n1g
ko3n1g previously approved these changes Sep 3, 2025
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <tmoon@nvidia.com>
timmoon10 and others added 2 commits September 8, 2025 02:19
Signed-off-by: timmoon10 <timmoon10@users.noreply.github.com>
gautham-kollu
gautham-kollu previously approved these changes Sep 8, 2025
Collaborator

@gautham-kollu gautham-kollu left a comment


Need to revert the manifest.json before merging

Signed-off-by: gautham-kollu <gkollu@nvidia.com>