Merged
Conversation
added 16 commits
March 22, 2021 10:11
Contributor
electriclilies
left a comment
Overall looks good to me, just a few nitpicks
```diff
-channels = infer_channels(inputs[1], True)
+out_type = infer_type(inputs[1])
+out_shapes = [get_const_tuple(out_type.checked_type.shape)]
+channels = out_shapes[0][1]
```
Contributor
Does this need to work for layouts other than NCHW? It looks like the ONNX op doesn't specify layout in the ConvTranspose operator
Contributor
Author
ONNX always assumes NCHW
Contributor
Cool, just wanted to make sure we didn't have to worry about it!
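The exchange above boils down to a simple invariant: ONNX fixes the ConvTranspose layout to NCHW, so the channel count is always dimension 1 of the inferred shape. A minimal sketch of that logic, with an illustrative helper name (not the actual TVM API):

```python
# Hypothetical sketch of the reviewed change: since ONNX always assumes
# NCHW layout for ConvTranspose, the output channel count can be read
# directly from dimension 1 of the weight tensor's inferred shape.
def infer_conv_transpose_channels(weight_shape):
    """Return the channel count from an NCHW-style shape tuple."""
    if len(weight_shape) < 2:
        raise ValueError("expected a shape of rank >= 2")
    # Dimension order is (N, C, H, W), so channels live at index 1.
    return weight_shape[1]
```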
electriclilies
approved these changes
Mar 22, 2021
jroesch
approved these changes
Mar 24, 2021
Member
jroesch
left a comment
In the next PR, can you paste your rationale above the test section so that newcomers understand what's going on?
trevor-m
pushed a commit
to trevor-m/tvm
that referenced
this pull request
May 6, 2021
* WIP
* some fixes
* more fixes
* fix some conv_transpose tests
* fix out of bounds slice
* fix flatten import
* fix logsoftmax and softmax tests
* fix Error in Upsample
* fix onehot
* normalize errors
* fix gather with negative indices
* parameterize test
* skip unsupported tests
* clean up
* fix rebase
* fix lint
* add an error message when we find an un-identified tensor
trevor-m
pushed a commit
to neo-ai/tvm
that referenced
this pull request
May 11, 2021
* WIP
* some fixes
* more fixes
* fix some conv_transpose tests
* fix out of bounds slice
* fix flatten import
* fix logsoftmax and softmax tests
* fix Error in Upsample
* fix onehot
* normalize errors
* fix gather with negative indices
* parameterize test
* skip unsupported tests
* clean up
* fix rebase
* fix lint
* add an error message when we find an un-identified tensor
We've been hitting a lot of errors running models with ONNX, which has led to a lot of piecewise fixes. This is an attempt to fix the importer more broadly by running the tests ONNX ships with pip: https://github.com/onnx/onnx/tree/master/onnx/backend/test/data/node
These files contain an ONNX graph, input arrays, and expected outputs, so we can test directly against the canonical ONNX tests. This PR provides a method to import these tests as parameterized unit tests, execute them, and skip any we know currently fail. I also fixed a lot of low-hanging fruit to reduce the number of skipped unit tests.
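The collection-and-skip mechanism described above can be sketched as follows. The directory layout matches the onnx backend test data (one directory per node test), but the helper name and the skip list entries are illustrative, not the actual implementation in this PR:

```python
import glob
import os

# Hypothetical skip list: names of node tests known to fail, which the
# parameterized runner would mark as skipped rather than executing.
KNOWN_FAILING = {"test_adagrad", "test_bitshift_left_uint16"}

def collect_node_tests(test_data_dir):
    """Yield (name, path) pairs for every ONNX node test not known to fail.

    Each subdirectory of ``test_data_dir`` named ``test_*`` holds one ONNX
    graph plus its input and expected-output arrays, so each pair can back
    one parameterized unit test.
    """
    for path in sorted(glob.glob(os.path.join(test_data_dir, "test_*"))):
        name = os.path.basename(path)
        if name not in KNOWN_FAILING:
            yield name, path
```

In a pytest setup, the resulting pairs would typically feed a `@pytest.mark.parametrize` decorator so each node test reports pass/skip individually.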
Future PRs will work to fix the currently skipped tests, and then extend this to GPU.
For reference, this is the pytest result on my system, testing against ONNX 1.6, which is what we have in CI:
434 passed, 123 skipped, 83 deselected, 1185 warnings in 32.40s
This adds a lot of tests, but they are all small, so the runtime is actually pretty minuscule, and it improves our ONNX import coverage dramatically.
cc @jwfromm @masahi @jroesch @electriclilies @adelbertc