brainpy.dnn module#

Non-linear Activations#

Activation

Applies an activation function to the inputs.

Threshold

Thresholds each element of the input Tensor.

ReLU

Applies the rectified linear unit function \(\text{ReLU}(x) = \max(0, x)\) element-wise.

RReLU

Applies the randomized leaky rectified linear unit function element-wise, as described in the paper Empirical Evaluation of Rectified Activations in Convolutional Network.

Hardtanh

Applies the HardTanh function element-wise.

ReLU6

Applies the element-wise function \(\text{ReLU6}(x) = \min(\max(0, x), 6)\).

Sigmoid

Applies the element-wise function \(\text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)}\).

Hardsigmoid

Applies the Hardsigmoid function element-wise.

Tanh

Applies the Hyperbolic Tangent (Tanh) function element-wise.

SiLU

Applies the Sigmoid Linear Unit (SiLU) function, element-wise.

Mish

Applies the Mish function, element-wise.

Hardswish

Applies the Hardswish function, element-wise, as described in the paper: Searching for MobileNetV3.

ELU

Applies the Exponential Linear Unit (ELU) function, element-wise, as described in the paper: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).

CELU

Applies the element-wise function \(\text{CELU}(x) = \max(0, x) + \min(0, \alpha (\exp(x/\alpha) - 1))\).

SELU

Applies the Scaled Exponential Linear Unit (SELU) function element-wise: \(\text{SELU}(x) = \text{scale} \cdot (\max(0, x) + \min(0, \alpha (\exp(x) - 1)))\), with fixed constants \(\alpha \approx 1.6733\) and \(\text{scale} \approx 1.0507\).

GLU

Applies the gated linear unit function \({GLU}(a, b)= a \otimes \sigma(b)\) where \(a\) is the first half of the input matrices and \(b\) is the second half.

GELU

Applies the Gaussian Error Linear Units function \(\text{GELU}(x) = x \, \Phi(x)\), where \(\Phi(x)\) is the cumulative distribution function of the standard normal distribution.

Hardshrink

Applies the Hard Shrinkage (Hardshrink) function element-wise.

LeakyReLU

Applies the element-wise function \(\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} \times \min(0, x)\).

LogSigmoid

Applies the element-wise function \(\text{LogSigmoid}(x) = \log\left(\frac{1}{1 + \exp(-x)}\right)\).

Softplus

Applies the Softplus function \(\text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x))\) element-wise.

Softshrink

Applies the soft shrinkage (Softshrink) function element-wise.

PReLU

Applies the element-wise function \(\text{PReLU}(x) = \max(0, x) + a \cdot \min(0, x)\), where \(a\) is a learnable parameter.

Softsign

Applies the element-wise function \(\text{SoftSign}(x) = \frac{x}{1 + |x|}\).

Tanhshrink

Applies the element-wise function \(\text{Tanhshrink}(x) = x - \tanh(x)\).

Softmin

Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.

Softmax

Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.

Softmax2d

Applies SoftMax over features to each spatial location.

LogSoftmax

Applies the \(\log(\text{Softmax}(x))\) function to an n-dimensional input Tensor.
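
The activation classes above are stateless layers that can be applied directly to arrays. Below is a minimal usage sketch; it assumes the usual `import brainpy as bp` / `import brainpy.math as bm` entry points and default constructor arguments, which may differ slightly between BrainPy versions.

```python
import brainpy as bp
import brainpy.math as bm

# A batch of 8 samples with 4 features each.
x = bm.random.rand(8, 4)

# Activation layers are stateless: instantiate once, then apply to arrays.
relu = bp.dnn.ReLU()
sigmoid = bp.dnn.Sigmoid()

y = relu(x)       # element-wise max(0, x)
z = sigmoid(y)    # element-wise 1 / (1 + exp(-y))

print(y.shape, z.shape)  # (8, 4) (8, 4)
```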

Convolutional Layers#

Conv1d

One-dimensional convolution.

Conv2d

Two-dimensional convolution.

Conv3d

Three-dimensional convolution.

Conv1D

alias of Conv1d

Conv2D

alias of Conv2d

Conv3D

alias of Conv3d

ConvTranspose1d

One-dimensional transposed convolution (a.k.a. deconvolution).

ConvTranspose2d

Two-dimensional transposed convolution (a.k.a. deconvolution).

ConvTranspose3d

Three-dimensional transposed convolution (a.k.a. deconvolution).
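
A minimal sketch of building a convolutional layer is shown below. The argument names `in_channels`, `out_channels`, and `kernel_size`, as well as the channel-last input layout, are assumptions based on common BrainPy/JAX conventions; check the class signatures of your installed version.

```python
import brainpy as bp
import brainpy.math as bm

# A batch of 4 images; a channel-last layout (batch, height, width, channels)
# is assumed here.
images = bm.random.rand(4, 28, 28, 3)

# Assumed argument names: in_channels, out_channels, kernel_size.
conv = bp.dnn.Conv2d(in_channels=3, out_channels=16, kernel_size=(3, 3))
features = conv(images)

print(features.shape)  # (4, 28, 28, 16) if 'SAME'-style padding is the default
```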

Dense Connection Layers#

Dense

A linear transformation applied over the last dimension of the input.

Linear

alias of Dense

Identity

A placeholder identity operator that is argument-insensitive.

AllToAll

Synaptic matrix multiplication with All2All connections.

OneToOne

Synaptic matrix multiplication with One2One connection.

MaskedLinear

Synaptic matrix multiplication with masked dense computation.

CSRLinear

Synaptic matrix multiplication with CSR sparse computation.

EventCSRLinear

Synaptic matrix multiplication with event CSR sparse computation.

JitFPHomoLinear

Synaptic matrix multiplication with the just-in-time connectivity.

JitFPUniformLinear

Synaptic matrix multiplication with the just-in-time connectivity.

JitFPNormalLinear

Synaptic matrix multiplication with the just-in-time connectivity.

EventJitFPHomoLinear

Synaptic matrix multiplication with the just-in-time connectivity.

EventJitFPNormalLinear

Synaptic matrix multiplication with the just-in-time connectivity.

EventJitFPUniformLinear

Synaptic matrix multiplication with the just-in-time connectivity.
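
The sketch below shows the basic fully connected layer from this section; it assumes `Dense(num_in, num_out)` applied to a batched input, with `Linear` documented above as an alias.

```python
import brainpy as bp
import brainpy.math as bm

x = bm.random.rand(32, 100)      # batch of 32 samples, 100 features each

layer = bp.dnn.Dense(100, 10)    # assumed signature: Dense(num_in, num_out)
y = layer(x)                     # linear transform over the last dimension

print(y.shape)                   # (32, 10)
```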

Normalization Layers#

BatchNorm1d

1-D batch normalization [1].

BatchNorm2d

2-D batch normalization [1].

BatchNorm3d

3-D batch normalization [1].

BatchNorm1D

alias of BatchNorm1d

BatchNorm2D

alias of BatchNorm2d

BatchNorm3D

alias of BatchNorm3d

LayerNorm

Layer normalization (https://arxiv.org/abs/1607.06450).

GroupNorm

Group normalization layer.

InstanceNorm

Instance normalization layer.
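
Below is a hedged sketch of the normalization layers. It assumes a channel-last feature layout, that the first constructor argument is the channel (feature) count, and that batch statistics are only collected when the layer runs in training mode (`mode=bm.TrainingMode()`); confirm these details against the class docstrings.

```python
import brainpy as bp
import brainpy.math as bm

# Channel-last feature maps are assumed: (batch, height, width, channels).
x = bm.random.rand(16, 8, 8, 32)

# Batch statistics are gathered in training mode (assumed API).
bn = bp.dnn.BatchNorm2d(32, mode=bm.TrainingMode())
y = bn(x)

# Layer normalization over the trailing feature dimension (assumed default).
ln = bp.dnn.LayerNorm(32)
z = ln(x)

print(y.shape, z.shape)  # (16, 8, 8, 32) (16, 8, 8, 32)
```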

Pooling Layers#

MaxPool

Pools the input by taking the maximum over a window.

MaxPool1d

Applies a 1D max pooling over an input signal composed of several input planes.

MaxPool2d

Applies a 2D max pooling over an input signal composed of several input planes.

MaxPool3d

Applies a 3D max pooling over an input signal composed of several input planes.

MinPool

Pools the input by taking the minimum over a window.

AvgPool

Pools the input by taking the average over a window.

AvgPool1d

Applies a 1D average pooling over an input signal composed of several input planes.

AvgPool2d

Applies a 2D average pooling over an input signal composed of several input planes.

AvgPool3d

Applies a 3D average pooling over an input signal composed of several input planes.

AdaptiveAvgPool1d

Adaptive one-dimensional average down-sampling.

AdaptiveAvgPool2d

Adaptive two-dimensional average down-sampling.

AdaptiveAvgPool3d

Adaptive three-dimensional average down-sampling.

AdaptiveMaxPool1d

Adaptive one-dimensional maximum down-sampling.

AdaptiveMaxPool2d

Adaptive two-dimensional maximum down-sampling.

AdaptiveMaxPool3d

Adaptive three-dimensional maximum down-sampling.
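
A short sketch of the pooling layers follows. The `kernel_size`/`stride` argument names, the channel-last layout, and the adaptive layers taking a target spatial shape are assumptions to verify against the installed version.

```python
import brainpy as bp
import brainpy.math as bm

# Channel-last feature maps are assumed: (batch, height, width, channels).
x = bm.random.rand(4, 32, 32, 8)

pool = bp.dnn.MaxPool2d(kernel_size=2, stride=2)   # assumed argument names
y = pool(x)
print(y.shape)   # expected (4, 16, 16, 8) under the assumed layout

# Adaptive pooling to a fixed spatial output size, regardless of input size.
gap = bp.dnn.AdaptiveAvgPool2d((1, 1))
z = gap(x)
print(z.shape)   # expected (4, 1, 1, 8)
```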

Interoperation with Flax#

FromFlax

Transform a Flax module as a BrainPy DynamicalSystem.

ToFlaxRNNCell

Transform a BrainPy DynamicalSystem into a Flax recurrent module.

ToFlax

alias of ToFlaxRNNCell
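
The sketch below illustrates the intended direction of `FromFlax` (wrapping a Flax module so it can be used inside a BrainPy model). The calling convention shown, passing the Flax module together with example inputs used to initialize its parameters, is an assumption; consult the class docstring for the exact signature.

```python
import brainpy as bp
import brainpy.math as bm
import flax.linen as nn

# A small Flax module to be wrapped.
flax_dense = nn.Dense(features=10)

# Assumed convention: the Flax module plus example inputs for initialization.
wrapped = bp.dnn.FromFlax(flax_dense, bm.ones((1, 4)))

y = wrapped(bm.random.rand(8, 4))
print(y.shape)  # expected (8, 10)
```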

Utility Layers#

Dropout

A layer that stochastically ignores a subset of inputs each training step.

Flatten

Flattens a contiguous range of dims into a tensor.

Unflatten

Unflattens a tensor dim, expanding it to a desired shape.

FunAsLayer

Transforms a Python function into a layer.
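
The sketch below combines `Flatten` and `Dropout`. It assumes `Flatten` keeps the leading batch dimension by default and that `Dropout` is only active in training mode; whether its probability argument is a keep- or drop-probability should be checked in the class docstring.

```python
import brainpy as bp
import brainpy.math as bm

x = bm.random.rand(16, 8, 8, 4)

flatten = bp.dnn.Flatten()   # assumed to keep the batch dimension by default
flat = flatten(x)
print(flat.shape)            # expected (16, 256)

# Dropout only perturbs inputs in training mode (assumed API); verify whether
# the argument is the keep- or drop-probability before relying on it.
dropout = bp.dnn.Dropout(0.9, mode=bm.TrainingMode())
y = dropout(flat)
print(y.shape)               # (16, 256)
```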