Added sigmoid like activation functions #9011

AIexanderDicke wants to merge 5 commits into TheAlgorithms:master from
Conversation
rohan472000
left a comment
Use keyword arguments in these calls to ensure clarity and ease of parameter interpretation across the functions.
    >>> np.linalg.norm(np.array([0.5, 0.66666667, 0.83333333]) - result) < 10**(-5)
    True
    """
    return _base_activation(vector, 0, 1)

Suggested change:
-    return _base_activation(vector, 0, 1)
+    return _base_activation(vector, alpha=0, beta=1)
    >>> np.linalg.norm(np.array([0, 0.66666667, 1.6]) - result) < 10**(-5)
    True
    """
    return _base_activation(vector, 1, beta)

Suggested change:
-    return _base_activation(vector, 1, beta)
+    return _base_activation(vector, alpha=1, beta=beta)
    >>> np.linalg.norm(np.array([0, 0.7310585, 0.462098]) - result) < 10**(-5)
    True
    """
    return swish(vector, 1)

Suggested change:
-    return swish(vector, 1)
+    return swish(vector, beta=1)
    import numpy as np


    def _base_activation(vector: np.ndarray, alpha: float, beta: float) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/activation_functions/sigmoid_like.py, please provide doctest for the function _base_activation
Make the examples that you have kept under comments into doctests: remove the "Example" word and merge both statements (i.e. result and the np.linalg.norm check), like so:

    >>> np.linalg.norm(np.array([0.5, 0.66666667, 0.83333333]) - _base_activation(np.array([0, np.log(2), np.log(5)]), 0, 1)) < 10**(-5)
    True
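The helper's body isn't visible in the diff hunks here; as a reference point, this is a minimal sketch consistent with the three call sites in this PR (assuming alpha selects the power of x and beta scales the exponent — both assumptions inferred from the wrappers, not the author's actual code):

```python
import numpy as np


def _base_activation(vector: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """
    Compute x**alpha * sigmoid(beta * x): sigmoid is (alpha=0, beta=1),
    swish is (alpha=1, beta=beta), and the SiLU is (alpha=1, beta=1).

    >>> result = _base_activation(np.array([0, np.log(2), np.log(5)]), 0, 1)
    >>> np.linalg.norm(np.array([0.5, 0.66666667, 0.83333333]) - result) < 10**(-5)
    True
    """
    # x**alpha / (1 + e^(-beta * x)); the formula is inferred from the call sites
    return vector**alpha / (1 + np.exp(-beta * vector))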
tianyizheng02
left a comment
Personally, I think it's better to implement the three activation functions without using a shared helper function. While it might not be as elegant, I think it's better from an educational standpoint for users to be able to see the explicit formula for each of the functions.
Also, we already have the sigmoid and SiLU in the maths/ directory. However, I'd rather we have these functions in neural_network/activation_functions like you did, so we should delete these two existing files in favor of yours.
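For illustration, a minimal sketch of the three functions written out explicitly, without the shared helper (function names and formulas taken from the diff hunks below; one possible shape, not the author's final code):

```python
import numpy as np


def sigmoid(vector: np.ndarray) -> np.ndarray:
    # Logistic function: 1 / (1 + e^(-x))
    return 1 / (1 + np.exp(-vector))


def swish(vector: np.ndarray, beta: float) -> np.ndarray:
    # Swish: x * sigmoid(beta * x)
    return vector / (1 + np.exp(-beta * vector))


def sigmoid_linear_unit(vector: np.ndarray) -> np.ndarray:
    # SiLU: x * sigmoid(x), i.e. swish with beta = 1
    return vector / (1 + np.exp(-vector))
```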
tianyizheng02
left a comment
Just small improvements, but other than that LGTM
    >>> np.linalg.norm(np.array([0.5, 0.66666667, 0.83333333]) \
        - sigmoid(vector=np.array([0, np.log(2), np.log(5)]))) < 10**(-5)

Suggested change:
-    >>> np.linalg.norm(np.array([0.5, 0.66666667, 0.83333333]) \
-        - sigmoid(vector=np.array([0, np.log(2), np.log(5)]))) < 10**(-5)
+    >>> np.linalg.norm(np.array([0.5, 0.66666667, 0.83333333])
+    ... - sigmoid(vector=np.array([0, np.log(2), np.log(5)]))) < 10**(-5)
I believe you can use ... to avoid using \
        - sigmoid(vector=np.array([0, np.log(2), np.log(5)]))) < 10**(-5)
    True
    """
    return 1 / (1 + np.exp(-1 * vector))

Suggested change:
-    return 1 / (1 + np.exp(-1 * vector))
+    return 1 / (1 + np.exp(-vector))
Just slightly more concise
    >>> np.linalg.norm(np.array([0.5, 1., 1.5]) \
        - swish(np.array([1, 2, 3]), 0)) < 10**(-5)
    True
    >>> np.linalg.norm(np.array([0, 0.66666667, 1.6]) \
        - swish(np.array([0, 1, 2]), np.log(2))) < 10**(-5)

Suggested change:
-    >>> np.linalg.norm(np.array([0.5, 1., 1.5]) \
-        - swish(np.array([1, 2, 3]), 0)) < 10**(-5)
-    True
-    >>> np.linalg.norm(np.array([0, 0.66666667, 1.6]) \
-        - swish(np.array([0, 1, 2]), np.log(2))) < 10**(-5)
+    >>> np.linalg.norm(np.array([0.5, 1., 1.5])
+    ... - swish(np.array([1, 2, 3]), 0)) < 10**(-5)
+    True
+    >>> np.linalg.norm(np.array([0, 0.66666667, 1.6])
+    ... - swish(np.array([0, 1, 2]), np.log(2))) < 10**(-5)
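As a sanity check on the expected value 1.6 above (assuming swish(x, beta) = x * sigmoid(beta * x), consistent with the rest of this PR):

```python
import numpy as np

# e^(-2 ln 2) = 1/4, so swish(2, ln 2) = 2 / (1 + 1/4) = 1.6
print(2 / (1 + np.exp(-2 * np.log(2))))  # ≈ 1.6
```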
    >>> np.linalg.norm(np.array([0, 0.7310585, 0.462098]) \
        - sigmoid_linear_unit(np.array([0, 1, np.log(2)]))) < 10**(-5)

Suggested change:
-    >>> np.linalg.norm(np.array([0, 0.7310585, 0.462098]) \
-        - sigmoid_linear_unit(np.array([0, 1, np.log(2)]))) < 10**(-5)
+    >>> np.linalg.norm(np.array([0, 0.7310585, 0.462098])
+    ... - sigmoid_linear_unit(np.array([0, 1, np.log(2)]))) < 10**(-5)
        - sigmoid_linear_unit(np.array([0, 1, np.log(2)]))) < 10**(-5)
    True
    """
    return vector / (1 + np.exp(-1 * vector))

Suggested change:
-    return vector / (1 + np.exp(-1 * vector))
+    return vector / (1 + np.exp(-vector))
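Likewise, the 0.462098 in the doctest further up checks out by hand, assuming silu(x) = x * sigmoid(x):

```python
import numpy as np

# sigmoid(ln 2) = 1 / (1 + 1/2) = 2/3, so silu(ln 2) = ln(2) * 2/3 ≈ 0.462098
print(np.log(2) / (1 + np.exp(-np.log(2))))  # ≈ 0.4620981203732969
```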
#9078 is merged, so this build will pass now. Run it again.