[BYOC] Enable bfloat16 in DNNL BYOC #11111
Merged
masahi merged 18 commits into apache:main on May 26, 2022
Conversation
Force-pushed from be5f228 to e2fcfd1
Force-pushed from 41e9720 to 4caede4
Contributor (Author)
Changes relative to the original PR:
All the CI checks have passed now; what a long journey.
masahi reviewed May 26, 2022
| ids=["compile", "run"], | ||
| ) | ||
|
|
||
| bf16_supported = "avx512" in open("/proc/cpuinfo", "r").read() |
Member
Probably need more precise detection, but ok.
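A more precise check could look for the dedicated bfloat16 CPU flag rather than any AVX-512 feature. A minimal sketch, assuming a Linux host whose /proc/cpuinfo reports the avx512_bf16 flag (the helper name is illustrative, not part of this PR):

```python
def cpu_supports_bf16() -> bool:
    """Best-effort bfloat16 capability check on x86 Linux hosts."""
    try:
        with open("/proc/cpuinfo", "r") as f:
            flags = f.read()
    except OSError:
        # Non-Linux or unreadable /proc: conservatively report no support.
        return False
    # avx512_bf16 is the specific flag for AVX-512 BF16 instructions,
    # so plain AVX-512 machines without bf16 units are not misdetected.
    return "avx512_bf16" in flags
```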
masahi approved these changes May 26, 2022
Member
cc @AndrewZhaoLuo this is cool (the first e2e run of AMP + bf16!!)
Contributor (Author)
Thanks a lot.
liaopeiyuan added a commit to zk-ml/tachikoma that referenced this pull request on Sep 14, 2022:
for simplicity in DNNL run-time; we need to remove TR, and maybe move to apache#11111
Enable bfloat16 in DNNL BYOC following the path:
Main work includes:
With these improvements, a float32 graph can be converted to bfloat16 through AMP and then either lowered by the native codegen or consumed by oneDNN, so inference now runs end to end in bfloat16.
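For context, the end-to-end flow described above can be sketched roughly as below; this assumes TVM's Python API (ToMixedPrecision and partition_for_dnnl), and the pass ordering, helper name, and target string are illustrative rather than taken verbatim from this PR's tests:

```python
import tvm
from tvm import relay
from tvm.relay.op.contrib import dnnl


def build_bf16_with_dnnl(mod, params, target="llvm -mcpu=cascadelake"):
    """Convert a float32 Relay module to bfloat16 via AMP and offload to DNNL.

    Assumption: the target CPU actually supports bf16 (see the cpuinfo
    check discussed in the review above).
    """
    # 1. AMP: rewrite the float32 graph into bfloat16.
    mod = relay.transform.InferType()(mod)
    mod = relay.transform.ToMixedPrecision(mixed_precision_type="bfloat16")(mod)
    # 2. Partition supported subgraphs for the DNNL (oneDNN) BYOC backend.
    mod = dnnl.partition_for_dnnl(mod, params, alter_layout=True)
    # 3. Anything not offloaded falls back to the native LLVM codegen.
    with tvm.transform.PassContext(opt_level=3):
        return relay.build(mod, target=target, params=params)
```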