[Metal] Reduce number of threads for reduction layers #8206
Merged: masahi merged 1 commit into apache:main on Jun 10, 2021
Conversation
Reduced the default number of threads in reduction kernels for Metal. The default code generation produced a thread block of size 32x32x1, so the number of threads per threadgroup was 1024 (32 * 32 * 1). Sometimes the device does not have enough resources, and in that case we get an exception because the block size exceeds the value of maxTotalThreadsPerThreadgroup. To prevent this situation, we decrease the default number of threads. With this fix, every model should work with the default codegen, and auto-tuning or auto-scheduling will select the optimal number of threads.
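The failure mode described above can be sketched in a few lines. This is an illustrative model, not TVM code: the limit value and the reduced block size below are hypothetical (the PR does not state the new default), and real Metal devices report maxTotalThreadsPerThreadgroup at runtime.

```python
# Sketch of why a 32x32x1 thread block can exceed a Metal device's
# maxTotalThreadsPerThreadgroup limit, and why a smaller default avoids it.

def threads_per_threadgroup(block_dims):
    """Total threads in a threadgroup is the product of the block dimensions."""
    x, y, z = block_dims
    return x * y * z

def fits(block_dims, limit):
    """A dispatch succeeds only if the threadgroup stays within the device limit."""
    return threads_per_threadgroup(block_dims) <= limit

# Old default: 32x32x1 -> 1024 threads per threadgroup.
old_default = (32, 32, 1)
assert threads_per_threadgroup(old_default) == 1024

# Hypothetical device limit below 1024 (illustrative value, not from the PR);
# on such a device the old default is rejected with an exception at dispatch.
max_total_threads = 512
print(fits(old_default, max_total_threads))  # False -> exception on device

# A reduced default, e.g. a hypothetical 16x16x1 = 256 threads, fits.
new_default = (16, 16, 1)
print(fits(new_default, max_total_threads))  # True
```

Auto-tuning or auto-scheduling can then raise the block size again on devices whose reported limit allows it; only the untuned fallback needs the conservative default.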
masahi approved these changes on Jun 10, 2021
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request on Jun 17, 2021
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request on Jun 17, 2021