brainpy.dnn module#
Non-linear Activations#
Activation | Applies an activation function to the inputs.
Flatten | Flattens a contiguous range of dims into 2D or 1D.
Threshold | Thresholds each element of the input Tensor.
ReLU | Applies the rectified linear unit function element-wise: \(\text{ReLU}(x) = \max(0, x)\).
RReLU | Applies the randomized leaky rectified linear unit function element-wise, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network.
Hardtanh | Applies the HardTanh function element-wise.
ReLU6 | Applies the element-wise function \(\text{ReLU6}(x) = \min(\max(0, x), 6)\).
Sigmoid | Applies the element-wise function \(\text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)}\).
Hardsigmoid | Applies the Hardsigmoid function element-wise.
Tanh | Applies the Hyperbolic Tangent (Tanh) function element-wise.
SiLU | Applies the Sigmoid Linear Unit (SiLU) function, element-wise.
Mish | Applies the Mish function, element-wise.
Hardswish | Applies the Hardswish function, element-wise, as described in the paper: Searching for MobileNetV3.
ELU | Applies the Exponential Linear Unit (ELU) function, element-wise, as described in the paper: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).
CELU | Applies the element-wise function \(\text{CELU}(x) = \max(0, x) + \min(0, \alpha (\exp(x / \alpha) - 1))\).
SELU | Applied element-wise, as \(\text{SELU}(x) = \text{scale} \cdot (\max(0, x) + \min(0, \alpha (\exp(x) - 1)))\).
GLU | Applies the gated linear unit function \(\text{GLU}(a, b) = a \otimes \sigma(b)\), where \(a\) is the first half of the input matrices and \(b\) is the second half.
GELU | Applies the Gaussian Error Linear Units function.
Hardshrink | Applies the Hard Shrinkage (Hardshrink) function element-wise.
LeakyReLU | Applies the element-wise function \(\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} \cdot \min(0, x)\).
LogSigmoid | Applies the element-wise function \(\text{LogSigmoid}(x) = \log\frac{1}{1 + \exp(-x)}\).
Softplus | Applies the Softplus function \(\text{Softplus}(x) = \frac{1}{\beta} \log(1 + \exp(\beta x))\) element-wise.
SoftShrink | Applies the soft shrinkage function element-wise.
PReLU | Applies the element-wise function \(\text{PReLU}(x) = \max(0, x) + a \cdot \min(0, x)\).
Softsign | Applies the element-wise function \(\text{SoftSign}(x) = \frac{x}{1 + |x|}\).
Tanhshrink | Applies the element-wise function \(\text{Tanhshrink}(x) = x - \tanh(x)\).
Softmin | Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.
Softmax | Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.
Softmax2d | Applies SoftMax over features to each spatial location.
LogSoftmax | Applies the \(\log(\text{Softmax}(x))\) function to an n-dimensional input Tensor.
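These activation layers are stateless and can be called directly on an array. A minimal sketch (the `dim` argument of `Softmax` is assumed to follow the PyTorch-style signature these classes mirror):

    import brainpy as bp
    import brainpy.math as bm

    x = bm.random.randn(8, 32)        # a batch of 8 feature vectors

    relu = bp.dnn.ReLU()              # stateless, no constructor arguments
    softmax = bp.dnn.Softmax(dim=-1)  # normalize over the last axis (assumed PyTorch-style arg)

    h = relu(x)                       # negatives clamped to zero
    p = softmax(h)                    # each row lies in [0, 1] and sums to 1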
Convolutional Layers#
Conv1d | One-dimensional convolution.
Conv2d | Two-dimensional convolution.
Conv3d | Three-dimensional convolution.
Conv1D | alias of Conv1d
Conv2D | alias of Conv2d
Conv3D | alias of Conv3d
ConvTranspose1d | One-dimensional transposed convolution (a.k.a. fractionally strided convolution).
ConvTranspose2d | Two-dimensional transposed convolution (a.k.a. fractionally strided convolution).
ConvTranspose3d | Three-dimensional transposed convolution (a.k.a. fractionally strided convolution).
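A minimal usage sketch (the channels-last input layout of BrainPy's JAX backend and the size-preserving default padding are assumptions; verify both for your version):

    import brainpy as bp
    import brainpy.math as bm

    # 16 filters of size 3x3. BrainPy is JAX-based, so inputs are assumed
    # channels-last: (batch, height, width, channels).
    conv = bp.dnn.Conv2d(in_channels=3, out_channels=16, kernel_size=(3, 3))

    img = bm.random.randn(4, 28, 28, 3)
    out = conv(img)   # (4, 28, 28, 16) if the default padding preserves the spatial size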
Dense Connection Layers#
Dense | A linear transformation applied over the last dimension of the input.
Linear | alias of Dense
Identity | A placeholder identity operator that is argument-insensitive.
AllToAll | Synaptic matrix multiplication with all-to-all connections.
OneToOne | Synaptic matrix multiplication with one-to-one connections.
MaskedLinear | Synaptic matrix multiplication with masked dense computation.
CSRLinear | Synaptic matrix multiplication with CSR sparse computation.
EventCSRLinear | Synaptic matrix multiplication with event-driven CSR sparse computation.
JitFPHomoLinear | Synaptic matrix multiplication with just-in-time generated fixed-probability connectivity and a homogeneous weight.
JitFPUniformLinear | Synaptic matrix multiplication with just-in-time generated fixed-probability connectivity and uniformly distributed weights.
JitFPNormalLinear | Synaptic matrix multiplication with just-in-time generated fixed-probability connectivity and normally distributed weights.
EventJitFPHomoLinear | Event-driven synaptic matrix multiplication with just-in-time generated fixed-probability connectivity and a homogeneous weight.
EventJitFPUniformLinear | Event-driven synaptic matrix multiplication with just-in-time generated fixed-probability connectivity and uniformly distributed weights.
EventJitFPNormalLinear | Event-driven synaptic matrix multiplication with just-in-time generated fixed-probability connectivity and normally distributed weights.
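For the ordinary dense layer, the constructor takes the input and output sizes (a minimal sketch):

    import brainpy as bp
    import brainpy.math as bm

    layer = bp.dnn.Dense(128, 10)     # num_in=128, num_out=10
    x = bm.random.randn(32, 128)      # batch of 32 inputs
    y = layer(x)                      # shape (32, 10)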
Normalization Layers#
BatchNorm1d | 1-D batch normalization.
BatchNorm2d | 2-D batch normalization.
BatchNorm3d | 3-D batch normalization.
BatchNorm1D | alias of BatchNorm1d
BatchNorm2D | alias of BatchNorm2d
BatchNorm3D | alias of BatchNorm3d
LayerNorm | Layer normalization (https://arxiv.org/abs/1607.06450).
GroupNorm | Group normalization layer.
InstanceNorm | Instance normalization layer.
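Layer normalization keeps no batch statistics, so it can be called directly (a minimal sketch; the torch-style positional `normalized_shape` argument is an assumption):

    import brainpy as bp
    import brainpy.math as bm

    ln = bp.dnn.LayerNorm(64)         # normalize over the trailing axis of size 64
    x = bm.random.randn(16, 64)
    y = ln(x)                         # per-sample zero mean / unit variance, then affine

The batch normalization layers, by contrast, update their running statistics only in training (fit) mode.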
Pooling Layers#
MaxPool | Pools the input by taking the maximum over a window.
MaxPool1d | Applies 1D max pooling over an input signal composed of several input planes.
MaxPool2d | Applies 2D max pooling over an input signal composed of several input planes.
MaxPool3d | Applies 3D max pooling over an input signal composed of several input planes.
MinPool | Pools the input by taking the minimum over a window.
AvgPool | Pools the input by taking the average over a window.
AvgPool1d | Applies 1D average pooling over an input signal composed of several input planes.
AvgPool2d | Applies 2D average pooling over an input signal composed of several input planes.
AvgPool3d | Applies 3D average pooling over an input signal composed of several input planes.
AdaptiveAvgPool1d | Adaptive one-dimensional average down-sampling.
AdaptiveAvgPool2d | Adaptive two-dimensional average down-sampling.
AdaptiveAvgPool3d | Adaptive three-dimensional average down-sampling.
AdaptiveMaxPool1d | Adaptive one-dimensional maximum down-sampling.
AdaptiveMaxPool2d | Adaptive two-dimensional maximum down-sampling.
AdaptiveMaxPool3d | Adaptive three-dimensional maximum down-sampling.
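A minimal sketch of the torch-style pooling classes (channels-last layout assumed, as for the convolution layers above):

    import brainpy as bp
    import brainpy.math as bm

    pool = bp.dnn.MaxPool2d(kernel_size=2, stride=2)   # halve each spatial dimension
    img = bm.random.randn(4, 28, 28, 8)                # (batch, height, width, channels)
    out = pool(img)                                    # (4, 14, 14, 8)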
Interoperation with Flax#
FromFlax | Transform a Flax module into a BrainPy DynamicalSystem.
ToFlaxRNNCell | Transform a BrainPy DynamicalSystem into a Flax recurrent cell.
ToFlax | alias of ToFlaxRNNCell
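A hedged sketch of the Flax-to-BrainPy direction (the argument pattern, a module instance followed by example inputs used to initialize the Flax parameters, is an assumption based on typical usage):

    import brainpy as bp
    import brainpy.math as bm
    import flax.linen as nn

    # Wrap a Flax Dense layer so it can be used inside a BrainPy network.
    # The example input lets the wrapper initialize the Flax parameters.
    flax_dense = nn.Dense(features=10)
    layer = bp.dnn.FromFlax(flax_dense, bm.ones([1, 128]))

    x = bm.random.randn(32, 128)
    y = layer(x)                      # shape (32, 10)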
Other Layers#
Layer | Base class for a layer of an artificial neural network.
Dropout | A layer that stochastically ignores a subset of inputs each training step.
Activation | Applies an activation function to the inputs.
Flatten | Flattens a contiguous range of dims into 2D or 1D.
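For example, Dropout is only active in training (fit) mode. A minimal sketch (assuming BrainPy's convention that `prob` is the probability of keeping an element, and that the fit flag is passed through the `brainpy.share` context):

    import brainpy as bp
    import brainpy.math as bm

    drop = bp.dnn.Dropout(prob=0.8)   # assumed convention: keep each element with probability 0.8
    x = bm.ones((4, 16))

    bp.share.save(fit=True)           # training mode: a random subset of inputs is zeroed
    y_train = drop(x)                 # surviving entries are rescaled by 1 / prob

    bp.share.save(fit=False)          # evaluation mode: dropout acts as the identity
    y_eval = drop(x)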