- class brainpy.dnn.Dropout(prob, mode=None, name=None)#
A layer that stochastically ignores (zeroes out) a subset of its inputs on each training step.
During training, to compensate for the fraction of input values dropped (the rate), all surviving values are multiplied by 1 / (1 - rate), so the expected activation is unchanged.
This layer is active only during training (mode=brainpy.math.training_mode); in all other modes it is a no-op.
- update(x, fit=None)#
The update function of the layer: computes the layer's output for the input x.
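
The inverted-dropout scaling described above can be sketched in plain NumPy. This is an illustrative reimplementation of the technique, not brainpy's actual code; the function name `dropout` and its signature are hypothetical:

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout sketch: zero out a fraction `rate` of the inputs
    and scale the survivors by 1 / (1 - rate) so that the expected value
    of each element is unchanged. A no-op when training is False."""
    if not training:
        return x  # inference mode: pass inputs through unchanged
    keep = rng.random(x.shape) >= rate  # True where the value survives
    return np.where(keep, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(0)
x = np.ones((4, 4))
y = dropout(x, rate=0.5, rng=rng)
# each entry of y is either 0.0 (dropped) or 2.0 (survived and rescaled)
```

Because survivors are rescaled at training time, no extra scaling is needed at inference time, which is why the layer can simply pass inputs through when it is not in training mode.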