feat: Add TensorRT Edge-LLM AttentionPlugin backend support #4013
Open
Conversation
Collaborator
@zewenli98 please review
narendasan reviewed on Jan 14, 2026
This example uses a custom TensorRT plugin shared library (``libNvInfer_edgellm_plugin.so``)
that replaces standard transformer attention operations and RoPE computations with optimized
CUDA kernels. The plugin source code is available at (internal access only):
Collaborator
@chohk88 can you change this to external links?
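For context on the snippet above: a TensorRT plugin shared library like this is typically made visible to the builder by loading it into the process and initializing the plugin registry before compilation. The following is only a minimal sketch of that common pattern; the exact mechanism used by this example is not shown in this excerpt.

```python
import ctypes

import tensorrt as trt

# Load the Edge-LLM plugin library so its plugin creators are linked into the
# process; RTLD_GLOBAL makes the symbols visible to TensorRT.
ctypes.CDLL("libNvInfer_edgellm_plugin.so", mode=ctypes.RTLD_GLOBAL)

# Register all available plugin creators with TensorRT's global plugin registry.
trt.init_libnvinfer_plugins(None, "")
```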
narendasan reviewed on Jan 14, 2026
- kv_cache_start_idx: [B] starting index in KV cache (required for release version)
"""

@torch.library.custom_op("xqa::attn", mutates_args=())
Collaborator
Let's call the op tensorrt_edge_llm::xqa_attn
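For reference, the requested rename would look roughly like the sketch below. The signature is simplified (a single packed ``qkv`` input and a single output tensor) and is an assumption for illustration; only the ``tensorrt_edge_llm::xqa_attn`` name reflects the request above.

```python
import torch


@torch.library.custom_op("tensorrt_edge_llm::xqa_attn", mutates_args=())
def xqa_attn(qkv: torch.Tensor, nq: int, nkv: int, d: int) -> torch.Tensor:
    # The real kernel lives in the Edge-LLM plugin; the eager path can raise
    # or fall back to a reference implementation (see the later review thread).
    raise NotImplementedError("xqa_attn is lowered to the Edge-LLM plugin")


@torch.library.register_fake("tensorrt_edge_llm::xqa_attn")
def _(qkv: torch.Tensor, nq: int, nkv: int, d: int) -> torch.Tensor:
    # Shape-only implementation for tracing/export, assuming qkv is [B, S, *]
    # and the output keeps nq query heads of width d.
    batch_size, seq_len = qkv.shape[0], qkv.shape[1]
    return qkv.new_empty(batch_size, seq_len, nq * d)
```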
narendasan reviewed on Jan 14, 2026
- kv_cache_start_idx: [B] starting index in KV cache (required for release version)
"""

@torch.library.custom_op("xqa::attn", mutates_args=())
Collaborator
Same thing here: tensorrt_edge_llm::xqa_attn
narendasan reviewed on Jan 14, 2026
    nkv: int,
    d: int,
) -> Tuple[torch.Tensor, torch.Tensor]:
    batch_size = qkv.shape[0]
Collaborator
Is it possible to provide a valid implementation here easily? Could we lift the kernel from the .so?
Collaborator
This would be a P1/P2 sort of thing, but I think it would be good for the sake of completeness.
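A plain PyTorch reference implementation would be one way to address this without lifting the kernel from the .so. The sketch below assumes a packed ``qkv`` layout of ``[B, S, (nq + 2 * nkv) * d]``, causal attention, and grouped-query attention; those layout details are assumptions, not taken from the plugin.

```python
import torch
import torch.nn.functional as F


def xqa_attn_reference(
    qkv: torch.Tensor, nq: int, nkv: int, d: int
) -> torch.Tensor:
    # Assumed packing: query heads first, then key heads, then value heads.
    b, s, _ = qkv.shape
    q, k, v = torch.split(qkv, [nq * d, nkv * d, nkv * d], dim=-1)
    q = q.view(b, s, nq, d).transpose(1, 2)   # [B, nq, S, d]
    k = k.view(b, s, nkv, d).transpose(1, 2)  # [B, nkv, S, d]
    v = v.view(b, s, nkv, d).transpose(1, 2)

    # Grouped-query attention: repeat KV heads to match the query head count.
    if nkv != nq:
        k = k.repeat_interleave(nq // nkv, dim=1)
        v = v.repeat_interleave(nq // nkv, dim=1)

    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2).reshape(b, s, nq * d)
```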
Description
This PR adds TensorRT Edge-LLM AttentionPlugin backend support as an alternative to the default SDPA lowering, providing a 1.7x–3.3x performance improvement for LLM inference.
Supported Models: Llama 3.x (3.1 and 3.2), Qwen 2.5, Qwen 3, Qwen3.1
This is a temporary solution for the initial implementation. The fork contains Torch-TRT compatibility Python runtime support that is not yet available in the official NVIDIA TensorRT-Edge-LLM repository.
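The description does not show how the Edge-LLM attention lowering is selected at compile time, so the following is only a rough usage sketch: load the plugin library, then compile through the standard Dynamo path. The model and inputs below are placeholders, and any option that actually switches the SDPA lowering to the plugin is PR-specific and not reproduced here.

```python
import ctypes

import torch
import torch_tensorrt

# Make the Edge-LLM attention plugin visible to TensorRT before compilation.
ctypes.CDLL("libNvInfer_edgellm_plugin.so", mode=ctypes.RTLD_GLOBAL)

# Placeholders: in practice `model` would be a supported Llama/Qwen checkpoint
# in half precision and `example_inputs` its token-id inputs.
model = torch.nn.Linear(64, 64).half().cuda().eval()
example_inputs = [torch.randn(1, 64, dtype=torch.half, device="cuda")]

# Standard Dynamo-path compilation.
trt_model = torch_tensorrt.compile(
    model,
    ir="dynamo",
    inputs=example_inputs,
    enabled_precisions={torch.float16},
)
```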