Update is_floating_point to handle bfloat16 #7133
Merged: masahi merged 5 commits into apache:main on Dec 19, 2020
Conversation
masahi approved these changes on Dec 19, 2020.
TusharKanekiDey pushed a commit to TusharKanekiDey/tvm that referenced this pull request on Jan 20, 2021:
* Add div_ and is_floating_point operators
* Add handling of exprs to op, update tests
* Properly handle bfloat16 in is_floating_point
* Revert test changes
* Revert whitespace changes
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request on Jan 21, 2021 (same commit messages as above).
electriclilies pushed a commit to electriclilies/tvm that referenced this pull request on Feb 18, 2021 (same commit messages as above).
The current implementation of is_floating_point() is based on the PyTorch documentation, but it turns out that the documentation does not accurately describe the function's behavior. This PR enables is_floating_point() to properly handle bfloat16 once support is added for the bfloat16 dtype. I am unable to test this functionality because the PyTorch frontend does not currently support bfloat16, giving the error message
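The fix amounts to counting bfloat16 among the floating-point dtypes when converting the PyTorch op. A minimal sketch of that check (the function name and the exact dtype strings here are illustrative, not TVM's actual frontend code):

```python
# Hypothetical sketch of the dtype check behind is_floating_point(),
# updated to treat bfloat16 as a floating-point type.
FLOAT_DTYPES = ("float16", "float32", "float64", "bfloat16")

def is_floating_point(dtype: str) -> bool:
    # Return True for any floating-point dtype string, including bfloat16.
    return dtype in FLOAT_DTYPES

print(is_floating_point("bfloat16"))  # True
print(is_floating_point("int32"))     # False
```

Before this change, a check based only on the standard float dtypes would incorrectly return False for bfloat16 inputs.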