GroupNorm

class brainpy.dnn.GroupNorm(num_groups, num_channels, epsilon=1e-05, affine=True, bias_initializer=ZeroInit, scale_initializer=OneInit(value=1.0), mode=None, name=None)

Group normalization layer.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta\]

This layer divides the channels into groups and normalizes the features within each group. Its computation is independent of the batch size. The number of channels must be divisible by the number of groups.

The shape of the data should be (b, d1, d2, …, c), where b denotes the batch size, d1, d2, … denote the spatial dimensions, and c denotes the feature (channel) size.
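
As a quick sanity check, the per-group statistics can be reproduced by hand. The following is a minimal sketch of the normalization step only (the affine parameters γ and β are omitted), assuming the channel-last layout described above:

>>> import numpy as np
>>> x = np.random.randn(20, 10, 10, 6)
>>> # split the 6 channels into 3 groups of 2; normalize over the spatial
>>> # dimensions and the channels within each group, per sample
>>> g = x.reshape(20, 10, 10, 3, 2)
>>> mean = g.mean(axis=(1, 2, 4), keepdims=True)
>>> var = g.var(axis=(1, 2, 4), keepdims=True)
>>> y = ((g - mean) / np.sqrt(var + 1e-5)).reshape(x.shape)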

Parameters:
  • num_groups (int) – The number of groups. It should be a factor of the number of channels.

  • num_channels (int) – The number of channels expected in input.

  • epsilon (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • affine (bool) – If True, the module has learnable per-channel affine parameters, initialized to ones (for the scale γ) and zeros (for the bias β). Default: True.

  • bias_initializer (Initializer, ArrayType, Callable) – An initializer for the translation (bias) parameter β.

  • scale_initializer (Initializer, ArrayType, Callable) – An initializer for the scale parameter γ.
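
The defaults in the signature suggest these initializers live under brainpy.init; a minimal sketch under that assumption, passing them explicitly:

>>> import brainpy as bp
>>> m = bp.dnn.GroupNorm(3, 6,
...                      bias_initializer=bp.init.ZeroInit(),
...                      scale_initializer=bp.init.OneInit(value=1.0))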

Examples

>>> import brainpy as bp
>>> import brainpy.math as bm
>>> input = bm.random.randn(20, 10, 10, 6)
>>> # Separate the 6 channels into 3 groups
>>> m = bp.dnn.GroupNorm(3, 6)
>>> # Separate the 6 channels into 6 groups (equivalent to InstanceNorm)
>>> m = bp.dnn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent to LayerNorm)
>>> m = bp.dnn.GroupNorm(1, 6)
>>> # Apply the module
>>> output = m(input)
update(x)

The function that defines the updating rule: it applies group normalization to the input x and returns the normalized output.
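
Calling the module, as in the examples above, is expected to dispatch to update; it can also be invoked directly. A minimal sketch, assuming the same setup as the examples:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>> m = bp.dnn.GroupNorm(3, 6)
>>> x = bm.random.randn(20, 10, 10, 6)
>>> y = m.update(x)  # same result as m(x); output shape matches the input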