General Mathematical Operators

BrainPy Array

- Multi-dimensional array in BrainPy.
- The sharded array, which stores data across multiple devices.
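A minimal usage sketch of the BrainPy array type, assuming the common ``import brainpy.math as bm`` alias and that ``bm.asarray`` is available as the NumPy-style constructor:

```python
import brainpy.math as bm

# Build a 2-D BrainPy Array from a nested Python list.
x = bm.asarray([[1.0, 2.0], [3.0, 4.0]])

print(x.shape)        # (2, 2)
print(x.dtype)        # floating dtype inferred from the input
print((x + 1.0) * 2)  # NumPy-style element-wise arithmetic
```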
Array Interoperability to JAX

- Convert the input to a JAX array.
- Convert the input to a JAX array.
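A short sketch of the JAX-side conversion, assuming the helper is exposed as ``bm.as_jax`` (check the table above for the exact name in your version):

```python
import jax.numpy as jnp
import brainpy.math as bm

x = bm.asarray([1.0, 2.0, 3.0])

# Unwrap the BrainPy Array into a plain JAX array before passing it
# to code that expects jax.Array inputs.
x_jax = bm.as_jax(x)
print(type(x_jax))
print(jnp.sum(x_jax))
```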
Array Interoperability to NumPy

- Convert the input to a NumPy array.
- Convert the input to a NumPy array.
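A sketch of the NumPy-side conversion, assuming the helper is exposed as ``bm.as_numpy``:

```python
import numpy as np
import brainpy.math as bm

x = bm.asarray([1.0, 2.0, 3.0])

# Copy the data back to the host as a regular numpy.ndarray,
# e.g. for plotting or saving to disk.
x_np = bm.as_numpy(x)
print(isinstance(x_np, np.ndarray))  # True
```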
Array Interoperability to BrainPy

- Convert the input to a BrainPy Array.
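Conversely, existing NumPy or JAX data can be wrapped back into a BrainPy Array; a sketch assuming ``bm.asarray`` performs this conversion:

```python
import numpy as np
import jax.numpy as jnp
import brainpy.math as bm

# Both NumPy and JAX inputs can be wrapped into a BrainPy Array.
a = bm.asarray(np.arange(4))
b = bm.asarray(jnp.linspace(0.0, 1.0, num=4))
print(type(a), type(b))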
Activation Functions

- Continuously-differentiable exponential linear unit activation.
- Exponential linear unit activation function.
- Gaussian error linear unit activation function.
- Gated linear unit activation function.
- Applies an element-wise activation function.
- SiLU activation function.
- Scaled exponential linear unit activation.
- Rectified Linear Unit 6 activation function.
- Applies the randomized leaky rectified linear unit function, element-wise, as described in the original paper.
- Hard SiLU activation function.
- Leaky rectified linear unit activation function.
- Hard tanh activation function.
- Hard sigmoid activation function.
- Applies an element-wise activation function.
- Hard SiLU activation function.
- Applies the Hard Shrinkage (Hardshrink) function element-wise.
- Soft-sign activation function.
- Applies the soft shrinkage function element-wise.
- Softmax function.
- Applies the Softmin function to an n-dimensional input tensor, rescaling it so that the elements of the n-dimensional output tensor lie in the range [0, 1] and sum to 1.
- Softplus activation function.
- SiLU activation function.
- Applies the Mish function, element-wise.
- Log-sigmoid activation function.
- Log-Softmax function.
- One-hot encodes the given indices.
- Normalizes an array by subtracting the mean and dividing by sqrt(var).
- Sigmoid activation function.
- Similar to …
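A minimal sketch of calling a few of the activations listed above, assuming they are exposed directly under ``brainpy.math`` (e.g. ``bm.sigmoid``, ``bm.softmax``, ``bm.one_hot``) and follow the usual ``jax.nn``-style signatures; verify the exact names and keywords against the API reference:

```python
import brainpy.math as bm

x = bm.asarray([-2.0, -0.5, 0.0, 0.5, 2.0])

print(bm.sigmoid(x))    # logistic sigmoid, values in (0, 1)
print(bm.softplus(x))   # smooth approximation of the rectifier
print(bm.gelu(x))       # Gaussian error linear unit

# softmax normalizes along an axis so that each row sums to 1.
logits = bm.asarray([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]])
print(bm.softmax(logits, axis=-1))

# one_hot encodes integer indices into one-hot vectors.
print(bm.one_hot(bm.asarray([0, 2]), 3))
```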