brainpy.initialize.calculate_gain

brainpy.initialize.calculate_gain(nonlinearity, param=None)[source]

Return the recommended gain value for the given nonlinearity function. The values are as follows:

nonlinearity          gain
--------------------  -------------------------------------------------
Linear / Identity     \(1\)
Conv{1,2,3}D          \(1\)
Sigmoid               \(1\)
Tanh                  \(\frac{5}{3}\)
ReLU                  \(\sqrt{2}\)
Leaky ReLU            \(\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}\)
SELU                  \(\frac{3}{4}\)
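The table entries can be reproduced directly; a minimal usage sketch (math.isclose is used only to avoid exact floating-point comparison):

>>> import math
>>> from brainpy.initialize import calculate_gain
>>> math.isclose(calculate_gain('relu'), math.sqrt(2))
True
>>> # the Leaky ReLU negative slope is passed via `param`
>>> math.isclose(calculate_gain('leaky_relu', param=0.2),
...              math.sqrt(2 / (1 + 0.2 ** 2)))
True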

Warning

In order to implement Self-Normalizing Neural Networks, you should use nonlinearity='linear' instead of nonlinearity='selu'. This gives the initial weights a variance of 1 / N, which is necessary to induce a stable fixed point in the forward pass. In contrast, the default SELU gain sacrifices the normalization effect for more stable gradient flow in rectangular layers.
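To see where the gain enters and why nonlinearity='linear' yields a variance of 1 / N, here is a hypothetical Kaiming-style initializer sketch in plain NumPy (not a BrainPy API; the function name is illustrative):

import numpy as np

def scaled_normal(fan_in, fan_out, gain):
    # Var(W) = gain**2 / fan_in: gain = 1 ('linear') gives the
    # 1 / fan_in variance required for a stable SNN fixed point,
    # while gain = 3/4 ('selu') trades that normalization effect
    # for more stable gradients in rectangular layers.
    std = gain / np.sqrt(fan_in)
    return np.random.normal(loc=0.0, scale=std, size=(fan_in, fan_out))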

Parameters:
  • nonlinearity – name of the non-linear function, e.g. 'relu', 'tanh', or 'leaky_relu' (see the table above)

  • param – optional parameter for the non-linear function, e.g. the negative slope of Leaky ReLU