[TOPI] Add dilation argument to conv2d and depthwise_conv2d #1970
tqchen merged 32 commits into apache:master
Conversation
Currently, the log format of autotvm depends on the op arguments, so every time we change the arguments we invalidate all old log records. Should we use another design? @tqchen Since this PR will break old logs, you should provide a log converter for other users.
I think in this case adding dilation makes sense, because it makes things consistent with the high-level op. We do need to provide a quick upgrade path, though; please try to make the API consistent with the high-level op: https://docs.tvm.ai/langref/relay_op.html#level-2-definitions
@merrymercy Are there cases where dilation is used in tophub logs? To convert old logs to the new workload, I will add a dilation field to all old logs. Since we didn't have dilation previously, if dilation was used the workload would contain the dilated kernel shape.
No dilation is used in tophub. |
Here is the converter for old logs.
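A minimal sketch of what such a converter does, assuming the workload is stored as a tuple; the field layout below is illustrative, not the exact autotvm log format:

```python
# Sketch: upgrade an old conv2d workload by inserting a default
# dilation of (1, 1) after the padding field. The tuple layout is an
# assumption for illustration, not the exact autotvm log schema.
def upgrade_workload(old_workload):
    # old: (name, data_shape, kernel_shape, strides, padding, layout, dtype)
    # new: (name, data_shape, kernel_shape, strides, padding, dilation, layout, dtype)
    head, tail = old_workload[:5], old_workload[5:]
    return head + ((1, 1),) + tail

old = ('conv2d', (1, 3, 224, 224), (64, 3, 7, 7), (2, 2), (3, 3), 'NCHW', 'float32')
print(upgrade_workload(old))
# ('conv2d', (1, 3, 224, 224), (64, 3, 7, 7), (2, 2), (3, 3), (1, 1), 'NCHW', 'float32')
```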
@vinx13 please check the CI errors, then ask for review again.
The test of dilated conv2d NHWC is incorrect, see https://github.com/dmlc/tvm/blob/42dc24a310170577f929f648f477ca2567c8bc9a/topi/tests/python/test_topi_conv2d_nhwc.py#L16. @tqchen How can I update pickle-memoized reference data?
Currently the x86 backend is under refactor in #1993. To update pickle-memoized data, a trick is to rename the key string.
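For context, the reference data in these tests is cached on disk keyed by a string, so renaming the key forces regeneration. A minimal sketch, assuming the test uses tvm.contrib.pickle_memoize (the key and shapes below are illustrative):

```python
# Sketch of the renaming trick: pickle_memoize caches the function's
# result under the given key string, so changing the string causes a
# cache miss and the reference data is regenerated.
import numpy as np
from tvm.contrib.pickle_memoize import memoize

# old key was e.g. "topi.tests.test_topi_conv2d_nhwc.verify_nhwc"
@memoize("topi.tests.test_topi_conv2d_nhwc.verify_nhwc.v2")  # renamed
def get_ref_data():
    a_np = np.random.uniform(size=(1, 56, 56, 64)).astype('float32')
    w_np = np.random.uniform(size=(3, 3, 64, 64)).astype('float32')
    return a_np, w_np

a_np, w_np = get_ref_data()
```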
```python
# placeholder
Input = tvm.placeholder((batch, in_height, in_width, in_channel), name='Input')
Filter = tvm.placeholder((filter_height, filter_width, filter_channel, channel_multiplier), name='Filter')
DilatedFilter = topi.nn.dilate(Filter, (1, 1, dilation, dilation), name='DilatedFilter')
```
@merrymercy The previous dilation was incorrect; I have updated it to [dilation, dilation, 1, 1], but the current fallback schedule generates invalid PTX on CUDA.
The filter is incorrect. The dilation is correct.
NCHW and NHWC have different filter layouts; the filter here is consistent with nn.depthwise_conv2d_nhwc:
https://github.com/dmlc/tvm/blob/b840e9602e357c50124d0c7fb131c52321062570/topi/python/topi/nn/depthwise_conv2d.py#L80
Sorry.
The NHWC layout doesn't use an autotvm template. I think we can just disable this test, since no one uses this layout, or you can try to fix the manual schedule: https://github.com/dmlc/tvm/blob/b840e9602e357c50124d0c7fb131c52321062570/topi/python/topi/cuda/depthwise_conv2d.py#L119
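To make the layout point concrete, a small sketch (hypothetical shapes) of how the dilation factors must track the spatial axes of each filter layout:

```python
# Sketch: topi.nn.dilate dilates every axis by the given factor, so
# the factors > 1 must land on the spatial (height/width) axes of the
# filter layout in use. Shapes here are hypothetical.
import tvm
import topi

d = 2  # dilation factor

# depthwise NCHW filter layout: (in_channel, multiplier, height, width)
f_nchw = tvm.placeholder((64, 1, 3, 3), name='FilterNCHW')
dilated_nchw = topi.nn.dilate(f_nchw, (1, 1, d, d))

# depthwise NHWC filter layout: (height, width, in_channel, multiplier)
f_nhwc = tvm.placeholder((3, 3, 64, 1), name='FilterNHWC')
dilated_nhwc = topi.nn.dilate(f_nhwc, (d, d, 1, 1))
```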
@merrymercy I have looked into the error in http://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/PR-1970/21/pipeline/232#step-255-log-1230. There is one failing case of NCHW conv2d on nvptx because the tophub logs are invalidated in this PR. The fallback schedule generated from the reference log fails in this case.
There are some errors with the nvptx backend; we found some cases before. The solution can be trying a newer LLVM version or avoiding the problematic fallback schedule.
I tried llvm-6.0 locally but the error still occurs. This error can be fixed by updating the tophub logs with the dilation arg, or by disabling the fallback schedule generated from the reference log.
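One way to side-step a stale reference schedule is to apply a tuned log explicitly, so the fallback is never consulted for workloads found in the log. A sketch, with a hypothetical log file name:

```python
# Sketch: apply an updated log so matching conv2d workloads use the
# tuned schedules instead of the (stale) tophub reference fallback.
# 'updated_conv2d.log' is a hypothetical file name.
from tvm import autotvm

with autotvm.apply_history_best('updated_conv2d.log'):
    # build the model here; any workload present in the log picks up
    # its tuned schedule rather than the fallback.
    pass
```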
OK. We will first merge #2034 and then merge this PR. In the meantime, you can update tophub.
Can you also help update the style of CUDA winograd and CUDA int8 according to #2034?
Thanks @merrymercy @vinx13, this is now merged.
Currently `dilate` is a separate operation and one has to call `dilate` + `conv2d` to compose a dilated convolution. The problems: the extra `dilate` stage has to be handled in every schedule, e.g. inlined with `compute_inline`.

This PR adds `dilation` as a new argument to `conv2d` / `depthwise_conv2d`. If dilation is used, it will call `nn.dilate` internally in conv2d (like `nn.pad`). The workload is also updated. This will invalidate AutoTVM logs generated previously.

cc @merrymercy @tqchen @masahi @nishi-t @anijain2305 @Huyuwei
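For illustration, a sketch of the resulting call, assuming the post-PR argument order described above (shapes and values are hypothetical):

```python
# Sketch of the new API: dilation is passed straight to conv2d, which
# calls nn.dilate internally (like nn.pad), so there is no separate
# dilate stage left for the schedule to inline.
import tvm
import topi

data = tvm.placeholder((1, 64, 56, 56), name='data')     # NCHW
kernel = tvm.placeholder((64, 64, 3, 3), name='kernel')  # OIHW

out = topi.nn.conv2d(data, kernel, strides=(1, 1), padding=(1, 1),
                     dilation=(2, 2), layout='NCHW')
```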