brainpy.optimizers.Momentum

- class brainpy.optimizers.Momentum(lr, train_vars=None, momentum=0.9, name=None)
Momentum optimizer.
Momentum [1] is a method that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction \(\gamma\) of the update vector of the past time step to the current update vector:
\[\begin{split}
v_t &= \gamma v_{t-1} + \eta \nabla_\theta J(\theta) \\
\theta &= \theta - v_t
\end{split}\]

References
- [1] Qian, N. (1999). On the momentum term in gradient descent learning algorithms. Neural Networks: The Official Journal of the International Neural Network Society, 12(1), 145–151. http://doi.org/10.1016/S0893-6080(98)00116-6
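The two update rules above can be sketched in plain NumPy-free Python. This is a minimal illustration of the formula only, not BrainPy's actual implementation; the function name `momentum_step` and the toy objective are assumptions made for the example:

```python
def momentum_step(theta, v, grad, lr=0.1, momentum=0.9):
    """One Momentum update (hypothetical helper, mirroring the formula above):

    v_t     = gamma * v_{t-1} + eta * grad
    theta_t = theta - v_t
    """
    v = momentum * v + lr * grad
    theta = theta - v
    return theta, v

# Minimize f(theta) = theta**2, whose gradient is 2 * theta.
theta, v = 1.0, 0.0
for _ in range(200):
    theta, v = momentum_step(theta, v, grad=2.0 * theta)
# theta oscillates around the minimum at 0 but its amplitude decays each step.
```

With `momentum=0.0` the rule reduces to plain SGD, `theta - lr * grad`; the accumulated velocity `v` is what carries information from past steps.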
Methods

- __init__(lr[, train_vars, momentum, name])
- check_grads(grads)
- load_states(filename[, verbose]): Load the model states.
- nodes([method, level, include_self]): Collect all children nodes.
- register_implicit_nodes(nodes)
- register_implicit_vars(variables)
- register_vars([train_vars])
- save_states(filename[, variables]): Save the model states.
- train_vars([method, level, include_self]): The shortcut for retrieving all trainable variables.
- unique_name([name, type_]): Get the unique name for this object.
- update(grads)
- vars([method, level, include_self]): Collect all variables in this node and the children nodes.
Attributes

- name