LayerNorm

class brainpy.dnn.LayerNorm(normalized_shape, epsilon=1e-05, bias_initializer=ZeroInit, scale_initializer=OneInit(value=1.0), elementwise_affine=True, mode=None, name=None)

Layer normalization (Ba et al., 2016; https://arxiv.org/abs/1607.06450).

\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

This layer normalizes each example independently of the rest of the batch. Specifically, for data of shape (b, d1, d2, …, c), it computes the normalization statistics over the data and channel axes (d1, d2, …, c). Unlike batch normalization, the scale and bias are learned per position (an elementwise operation) rather than per channel. To assign a single scale and bias to a whole example or a whole channel, use GroupNorm or InstanceNorm instead.
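For concreteness, the following sketch (illustrative, not part of the reference API; it assumes the default initializers below, so γ = 1 and β = 0) reproduces the formula by hand and compares it with the module's output:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>> x = bm.random.randn(4, 8, 8, 3)              # shape (b, d1, d2, c)
>>> ln = bp.dnn.LayerNorm([8, 8, 3])
>>> y = ln(x)
>>> # Normalize over the data and channel axes, per example
>>> mean = x.mean(axis=(1, 2, 3), keepdims=True)
>>> var = x.var(axis=(1, 2, 3), keepdims=True)
>>> y_manual = (x - mean) / bm.sqrt(var + 1e-5)  # gamma = 1, beta = 0 at init
>>> bm.allclose(y, y_manual, atol=1e-4)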

Parameters:
  • normalized_shape (int, sequence of int) –

    The input shape from an expected input of size

    \[[* \times \text{normalized\_shape}[0] \times \text{normalized\_shape}[1] \times \ldots \times \text{normalized\_shape}[-1]]\]

    If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension, which is expected to be of that size (see the sketch after this parameter list).

  • epsilon (float) – a value added to the denominator for numerical stability. Default: 1e-5

  • bias_initializer (Initializer, ArrayType, Callable) – an initializer that generates the initial bias (translation) parameter β

  • scale_initializer (Initializer, ArrayType, Callable) – an initializer that generates the initial scale parameter γ

  • elementwise_affine (bool) – A boolean value that, when set to True, gives this module learnable per-element affine parameters, initialized to ones (for scales) and zeros (for biases). Default: True.
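As referenced above, a short sketch contrasting an integer normalized_shape with a sequence, and showing elementwise_affine=False (variable names here are illustrative):

>>> import brainpy as bp
>>> import brainpy.math as bm
>>> x = bm.random.randn(20, 5, 10)                             # (batch, length, features)
>>> ln_last = bp.dnn.LayerNorm(10)                             # int: normalize the last axis only
>>> ln_two = bp.dnn.LayerNorm([5, 10])                         # sequence: normalize the last two axes
>>> ln_plain = bp.dnn.LayerNorm(10, elementwise_affine=False)  # no learnable gamma/beta
>>> y1, y2, y3 = ln_last(x), ln_two(x), ln_plain(x)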

Examples

>>> import brainpy as bp
>>> import brainpy.math as bm
>>>
>>> # NLP Example
>>> batch, sentence_length, embedding_dim = 20, 5, 10
>>> embedding = bm.random.randn(batch, sentence_length, embedding_dim)
>>> layer_norm = bp.dnn.LayerNorm(embedding_dim)
>>> # Activate module
>>> layer_norm(embedding)
>>>
>>> # Image Example
>>> N, H, W, C = 20, 10, 10, 5
>>> x = bm.random.randn(N, H, W, C)
>>> # Normalize over the last three dimensions
>>> # (i.e., the spatial and channel dimensions)
>>> layer_norm = bp.dnn.LayerNorm([H, W, C])
>>> output = layer_norm(x)
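As a quick sanity check (a sketch; with the default initializers γ = 1 and β = 0, each example's normalized axes should end up with near-zero mean and near-unit variance):

>>> output.mean(axis=(1, 2, 3))  # ~0 for every example
>>> output.var(axis=(1, 2, 3))   # ~1 for every example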
update(x)

Apply the layer's update rule: normalize the input x and return the result. Calling the module instance directly, as in the examples above, dispatches to this method.
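A minimal usage note (a sketch reusing x and layer_norm from the image example above; the equivalence of calling the module and calling update() follows BrainPy's DynamicalSystem convention):

>>> y1 = layer_norm(x)         # calling the module ...
>>> y2 = layer_norm.update(x)  # ... dispatches to update()
>>> bm.allclose(y1, y2)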