[TOPI] Fix mali conv2d performance regression#3131
Conversation
merrymercy
commented
May 2, 2019
- Fix the performance regression on mali [PERF] Performance Regression on Mali GPU #3088
- Fix tophub for mali after modifying the argument of dense ([Relay, Quantization, TOPI] int8 dense on CUDA & Dense op quantization #2877)
```python
+ tvm.const(0, out_dtype) * M[alpha-1][alpha-1][CO-1][P_round-1],
# the following hack term is used to make the padding in batch gemm ("M")
# effective, otherwise the padding will be eliminated by bound inference
+ tvm.expr.Mul(tvm.const(0, out_dtype),
```
I suggest leaving a comment pointing to issue #3088 so people understand why Mul is used instead of *.
I'm still confused about why we need this multiplication.
@icemelon9 During batch gemm, we introduce some padding to avoid partial tiles, so we can safely vectorize the innermost loop. However, we don't use all of the output of batch gemm (the padded part is ignored in the final results). The InferBound pass in tvm analyzes the computation region from output to input and keeps only the necessary part. If we don't add this term, the padding added in batch gemm will be eliminated, regardless of how we tweak the shape argument in tvm.compute.
This term accesses the last element of the padded buffer, so it makes all of the padding effective.
@yzliu tvm.expr.Mul won't do constant folding, while * is equivalent to tvm.expr.Mul plus constant folding.
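To illustrate the distinction, here is a minimal pure-Python analogy (not TVM's actual implementation; the `Const`/`Load`/`Mul` classes are hypothetical stand-ins). The overloaded `*` folds `0 * x` to a constant, dropping the buffer access, while constructing the `Mul` node directly keeps both operands so bound inference would still see the access:

```python
# Minimal analogy (NOT TVM's real code) for why the overloaded `*`
# differs from constructing a Mul node directly.

class Expr:
    pass

class Const(Expr):
    def __init__(self, value):
        self.value = value
    def __mul__(self, other):
        # Overloaded `*`: constant folding happens here.
        if self.value == 0:
            return Const(0)      # 0 * x -> 0: the buffer access is gone
        return Mul(self, other)

class Load(Expr):
    """Stands in for a buffer access like M[alpha-1][...][P_round-1]."""
    def __init__(self, buffer):
        self.buffer = buffer

class Mul(Expr):
    # Raw node constructor: no folding, both operands are kept,
    # so a bound-inference pass would still see the Load operand.
    def __init__(self, a, b):
        self.a, self.b = a, b

folded = Const(0) * Load("M")    # folds away: the access to M is lost
kept = Mul(Const(0), Load("M"))  # the access to M survives in the AST

print(type(folded).__name__)     # Const
print(type(kept).__name__)       # Mul
print(type(kept.b).__name__)     # Load
```

This is why the fix replaces `tvm.const(0, out_dtype) * M[...]` with an explicit `tvm.expr.Mul(...)`: the dummy access to the last padded element must survive into the AST for InferBound to keep the padding.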
Could you elaborate in the comment by what you replied to @icemelon9 ?
It's too long to be put in a comment.
Thanks @merrymercy @tqchen @eqy @icemelon9 for fixing and reviewing.
* [TOPI] fix mali conv
* fix typo
* address comments