[PTX] ldmatrix builtin to accelerate copying data from shared memory to warp memory #10855
Merged
junrushao merged 3 commits into apache:main on Apr 3, 2022
Conversation
vinx13 approved these changes on Apr 1, 2022
Hzfengsy approved these changes on Apr 2, 2022
yzh119 (Member, Author) commented on Apr 2, 2022
It turns out that SplitHostDevice splits the host and device functions at the position of launch_thread, but launch_thread was always placed below the block-allocated buffers, so those buffers were not recognized as device buffers.
I created a boundary block as a workaround.
pfk-beta pushed a commit to pfk-beta/tvm that referenced this pull request on Apr 11, 2022
[PTX] ldmatrix builtin to accelerate copying data from shared memory to warp memory (apache#10855)
mehrdadh pushed a commit to mehrdadh/tvm that referenced this pull request on Apr 11, 2022
[PTX] ldmatrix builtin to accelerate copying data from shared memory to warp memory (apache#10855)
We already have PTX mma and mma.sp builtin support from #9909 and #10339. However, we have not yet supported the corresponding data movement builtins for these mma instructions, so data movement is not as fast as with wmma.
This PR brings the `ldmatrix` builtin, a native PTX warp-level instruction (https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-ldmatrix) that loads several (1, 2, or 4) 8x8 matrices from shared memory to warp memory.

@vinx13 @Hzfengsy
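For reference, here is a hand-written CUDA sketch of the kind of inline PTX this builtin lowers to; it is an illustration of the `ldmatrix.x4` form (the function name `ldmatrix_x4_b16` is made up for the example), not TVM's actual generated code. Each of the 32 threads in the warp supplies the shared-memory address of one 8-element `.b16` row, and the instruction distributes the elements of four 8x8 matrices across the warp's registers:

```cuda
// Illustrative only (requires sm_75 or later); the x1/x2 variants take
// correspondingly fewer destination registers.
__device__ void ldmatrix_x4_b16(unsigned regs[4], const void *smem_ptr) {
  // ldmatrix needs a 32-bit address in the shared-memory window.
  unsigned addr = static_cast<unsigned>(__cvta_generic_to_shared(smem_ptr));
  // For .x4, threads 0-7 address the rows of matrix 0, threads 8-15
  // matrix 1, and so on; each output register receives 32 bits (two
  // consecutive b16 elements) of the fragment owned by this thread.
  asm volatile(
      "ldmatrix.sync.aligned.m8n8.x4.shared.b16 {%0, %1, %2, %3}, [%4];\n"
      : "=r"(regs[0]), "=r"(regs[1]), "=r"(regs[2]), "=r"(regs[3])
      : "r"(addr));
}
```

The resulting per-thread fragment layout matches what the mma builtins above expect as operands, and the instruction also has a `.trans` variant that loads the matrices transposed.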