[Relay][Frontend][ONNX] Add ConvInteger support. #8456
mbrookhart merged 5 commits into apache:main
Conversation
@mbrookhart @AndrewZhaoLuo can you guys take a look at this PR?

I'm having trouble getting the docker images in CI updated due to a TF update.

@mbrookhart that makes sense; I've removed all changes to ConvTranspose from this PR.
out_channels = kernel_shape[0]
dilation = attr.get("dilations", [1] * (ndim - 2))
strides = attr.get("strides", [1] * (ndim - 2))
padding = attr["pads"] if "pads" in attr else 0
Should we throw if both pads and auto_pad are present?
Good question. This is the same logic used in all the other convolution converters, though (they overwrite pads with auto_pad when both are present). I personally think that defaulting to auto_pad when it's present is reasonable behavior, but I'm open to other takes. @mbrookhart what do you think?
Thought about it a bit more. The ONNX spec disallows setting both. I think it's safe to assume the converter will only see well-formed ONNX models, so the current behavior is fine.
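For readers following along, here is a minimal, self-contained sketch of the resolution order discussed above: explicit `pads` is the default, and `auto_pad` overwrites it when present. This is not the PR's exact code; `resolve_padding` and `same_upper_pad_1d` are hypothetical names, and the result is a list of (before, after) pairs per spatial axis rather than ONNX's begins-then-ends layout.

```python
import math

def same_upper_pad_1d(in_size, kernel, stride, dilation):
    """ONNX SAME_UPPER padding for one spatial axis: pad so that
    out_size == ceil(in_size / stride), with the extra pad at the end."""
    effective_kernel = (kernel - 1) * dilation + 1
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + effective_kernel - in_size, 0)
    return total // 2, total - total // 2

def resolve_padding(attr, in_shape, kernel_shape, strides, dilations):
    # Explicit pads if given, otherwise zero padding on every axis.
    padding = attr.get("pads", [(0, 0)] * len(in_shape))
    auto_pad = attr.get("auto_pad", "NOTSET")
    if auto_pad in ("SAME_UPPER", "SAME_LOWER"):
        # A well-formed model sets only one of pads/auto_pad, so
        # overwriting here matches the converter's behavior.
        padding = [
            same_upper_pad_1d(i, k, s, d)
            for i, k, s, d in zip(in_shape, kernel_shape, strides, dilations)
        ]
        if auto_pad == "SAME_LOWER":
            # SAME_LOWER puts the extra pad at the start instead.
            padding = [(after, before) for before, after in padding]
    elif auto_pad == "VALID":
        padding = [(0, 0)] * len(in_shape)
    return padding
```

For example, `resolve_padding({"auto_pad": "SAME_UPPER"}, [8, 8], [3, 3], [1, 1], [1, 1])` yields `[(1, 1), (1, 1)]`, matching a 3x3 kernel at stride 1.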
Thanks @jwfromm @AndrewZhaoLuo
* Add ConvInteger support and fix some ConvTranspose padding bugs.
* Simplify pads check.
* Fix style.
* Remove changes to conv_transpose.
While playing with onnxruntime's quantization tools I found that ConvInteger nodes were heavily used and pretty easy to map to qnn.conv2d. This PR adds support for that op.
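To illustrate the mapping, here is a minimal sketch, assuming NCHW layout, a uint8 model, and zero zero-points, of how a ConvInteger node can be expressed with relay.qnn.op.conv2d. The variable names and fixed shapes are illustrative, not the PR's converter code.

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 3, 8, 8), dtype="uint8")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="uint8")

# ConvInteger carries zero points but no scales; qnn.conv2d also takes
# scales, so unit scales make it a pure integer convolution with an
# int32 output, matching ConvInteger's semantics.
out = relay.qnn.op.conv2d(
    data,
    weight,
    input_zero_point=relay.const(0, "int32"),
    kernel_zero_point=relay.const(0, "int32"),
    input_scale=relay.const(1.0, "float32"),
    kernel_scale=relay.const(1.0, "float32"),
    kernel_size=(3, 3),
    channels=16,
    strides=(1, 1),
    padding=(1, 1),
    dilation=(1, 1),
)
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
```

When the ONNX node supplies x_zero_point or w_zero_point inputs, those would feed the corresponding zero-point arguments instead of the zero constants used here.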