[Relay][Frontend][Onnx] Compare against onnxruntime more consistently during testing#7300
Merged
tqchen merged 5 commits into apache:main on Jan 19, 2021
Conversation
Contributor
Author
@masahi @mbrookhart can you guys let me know what you think of this PR?
jwfromm
commented
Jan 17, 2021
-    input_data = np.random.uniform(size=input_size).astype("int32")
-    verify_with_ort_with_inputs(onnx_model, [input_data])
+    input_data = np.random.uniform(size=input_size).astype("float32")
+    verify_with_ort_with_inputs(onnx_model, [input_data], apply_softmax=True)
Contributor
Author
This is a fun one that I wanted to point out. Previously we were casting inputs to int32; however, because they were generated with np.random.uniform (which samples floats from [0, 1)), they were all being truncated to 0. Using non-zero inputs caused some minor mismatch in outputs due to numerical instability, but applying softmax (which torchvision models don't apply by default) reduces the numerical difference to well below our test threshold.
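The casting pitfall described above can be reproduced in a few lines. This is a minimal sketch (the shape and the randint-based fix are illustrative assumptions, not taken from the PR):

```python
import numpy as np

# np.random.uniform draws floats from [0, 1) by default, so every sample
# is strictly below 1; casting to int32 truncates them all to 0. The old
# test was therefore running the model on an all-zero input tensor.
input_size = (2, 3)  # illustrative shape, not from the PR
input_data = np.random.uniform(size=input_size).astype("int32")
print((input_data == 0).all())  # True

# One way to get genuinely non-zero integer inputs is to sample
# integers directly instead of casting uniform floats:
nonzero_data = np.random.randint(1, 10, size=input_size).astype("int32")
```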
masahi
reviewed
Jan 17, 2021
masahi
reviewed
Jan 18, 2021
masahi
approved these changes
Jan 19, 2021
Member
masahi
left a comment
thanks for the heroic effort 👍
mbrookhart
reviewed
Jan 19, 2021
TusharKanekiDey
pushed a commit
to TusharKanekiDey/tvm
that referenced
this pull request
Jan 20, 2021
… during testing (apache#7300) Co-authored-by: Josh Fromm <jwfromm@uw.edu>
trevor-m
pushed a commit
to neo-ai/tvm
that referenced
this pull request
Jan 21, 2021
… during testing (apache#7300) Co-authored-by: Josh Fromm <jwfromm@uw.edu>
electriclilies
pushed a commit
to electriclilies/tvm
that referenced
this pull request
Feb 18, 2021
… during testing (apache#7300) Co-authored-by: Josh Fromm <jwfromm@uw.edu>
I noticed that many of our onnx frontend tests compare against results produced by numpy rather than onnx itself. This is somewhat counterproductive because it makes assumptions about what onnx should do rather than checking what it actually does. I decided to go through all the tests in test_forward.py and make them use the new verify_with_ort helper function. This makes our testing suite more consistent and aligns it more closely with its intention. In the process of making this conversion, I discovered many bugs in the importer that are also fixed in this PR. Although this PR might be a little painful to review due to its scope, I think the result is an overall much cleaner and easier-to-maintain test suite.