[Hexagon] vrmpy tensorization for e2e compilation of int8 models #12911
Merged
tmoreau89 merged 14 commits into apache:main on Oct 3, 2022
Conversation
masahi
commented
Sep 27, 2022
Unlike the nn.dense case (see dense_alter_op.py), we do not convert (uint8, int8) to
(uint8, uint8). That would introduce another convolution by a constant (128 or 1) filter
to compensate for the dtype legalization. In the nn.dense case, such a compensation factor is
just a sum over the K axis.
Member
Author
cc @ibsidorenko @tkonolige @nverke on this. We can convert a u8 * s8 convolution to u8 * u8 as below:

W'_u8 = W_s8 + 128
X_u8 * W_s8 = X_u8 * (W'_u8 - 128)
            = X_u8 * W'_u8 - X_u8 * 128

Here, X_u8 * 128 is a convolution of X_u8 by a constant filter. We can factor out 128 to end up with a filter where all elements are 1. So what we need is a windowed sum, or "sum pooling" op - without it, I think we need to do a full-blown convolution. This is why I don't use legalization for conv2d. Let me know if you have a better idea.
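For a single dot product (one output element of the convolution), the identity above can be checked numerically. This is a minimal sketch with made-up shapes; the compensation term `128 * sum(X)` is the "sum pooling" value for that output window:

```python
import numpy as np

rng = np.random.default_rng(0)
# One flattened convolution window: u8 activations, s8 weights.
# Widen to int32 up front so the arithmetic below cannot overflow.
X = rng.integers(0, 256, size=16, dtype=np.uint8).astype(np.int32)
W = rng.integers(-128, 128, size=16, dtype=np.int8).astype(np.int32)

# Original u8 * s8 dot product.
ref = np.dot(X, W)

# Legalized form: shift weights into the u8 range [0, 255], then subtract
# the compensation term 128 * (windowed sum of X).
W_u8 = W + 128
out = np.dot(X, W_u8) - 128 * X.sum()

assert out == ref
```

The same rewrite applied per output window is what would require a sum-pooling op in Relay, since `X.sum()` here ranges over exactly the elements under the filter.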
masahi
commented
Sep 27, 2022
_, inner = s[x].split(fused, factor=128 // np.dtype(x.dtype).itemsize)
outer, inner = s[x].split(fused, factor=128 // np.dtype(x.dtype).itemsize)
s[x].vectorize(inner)
s[x].parallel(outer)
Member
Author
cc @kparzysz-quic @nverke, we are enabling multithreading of elementwise ops here. Multithreading on e2e models has been stable since #12807.
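The split factor in the schedule above, `128 // np.dtype(x.dtype).itemsize`, is the number of elements of the tensor's dtype that fill one 128-byte HVX vector, so the vectorized inner loop covers exactly one vector register. A small sketch of that computation (the constant name is mine, not from the PR):

```python
import numpy as np

HVX_VECTOR_BYTES = 128  # assumed HVX vector register width, per the 128 in the schedule


def vector_lanes(dtype):
    """Number of elements of `dtype` that fill one 128-byte HVX vector."""
    return HVX_VECTOR_BYTES // np.dtype(dtype).itemsize


# int8 tensors get 128 lanes per vector, float32 tensors get 32.
print(vector_lanes("int8"), vector_lanes("float32"))  # → 128 32
```

Splitting the fused axis by this factor, vectorizing the inner loop, and parallelizing the outer loop gives one full-width vector op per inner iteration and distributes the outer iterations across threads.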
Contributor
LGTM!
kparzysz-quic
approved these changes
Sep 30, 2022
Contributor
Thanks @masahi @ibsidorenko @kparzysz-quic, the PR has been merged.
xinetzone pushed a commit to daobook/tvm that referenced this pull request on Nov 25, 2022
…che#12911)

* [Hexagon] Support vrmpy tensorization for conv2d and dense schedules
* update
* clean up
* migrate tests to test_launcher.py
* remove vrmpy test files
* use generic int8 conv2d schedule
* clean up
* doc update
* pylint fix
* parametrize dtype in test
* doc update
* add missing parallelization for dense
* more pylint
* fixed for fp32 dense
This PR adds TE compute and schedule definitions for int8 conv2d and dense using vrmpy tensorization, plus Relay alter-layout / legalize passes to enable them in e2e settings. Since vrmpy is very similar to the x86 VNNI or ARM sdot/udot instructions, a lot of code is shared with the existing x86 / ARM backend implementations.

This lets us run int8 resnet50 in 146 msec on SD888. All convolutions and the final dense op are tensorized. The current bottleneck is requantize-related operations. The test script and model files to run int8 resnet50 are attached below.

test_qresnet50.zip
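Since the PR compares vrmpy to VNNI and sdot/udot, a reference sketch of that style of instruction may help: each 32-bit accumulator lane receives the dot product of 4 consecutive u8 bytes with 4 consecutive s8 bytes. This is my reading based on the VNNI/sdot analogy in the description, not the exact HVX lane layout, which may group bytes differently:

```python
import numpy as np


def vrmpy_ref(x_u8, w_s8, acc_i32):
    """Sketch of a vrmpy-style 4-way u8 x s8 dot product with i32 accumulate.

    Each output int32 lane accumulates the dot product of one group of
    4 consecutive bytes (assumed lane grouping, following the VNNI analogy).
    """
    x = x_u8.astype(np.int32).reshape(-1, 4)
    w = w_s8.astype(np.int32).reshape(-1, 4)
    return acc_i32 + (x * w).sum(axis=1)


# One 128-byte vector of activations and weights -> 32 int32 partial sums.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=128, dtype=np.uint8)
w = rng.integers(-128, 128, size=128, dtype=np.int8)
acc = np.zeros(32, dtype=np.int32)
out = vrmpy_ref(x, w, acc)
print(out.shape)  # → (32,)
```

Tensorizing int8 conv2d/dense amounts to tiling the reduction axis in groups of 4 bytes so the innermost reduction maps onto this primitive.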
@kparzysz-quic @tkonolige @nverke @ibsidorenko