Momentum

class brainpy.optim.Momentum(lr, train_vars=None, momentum=0.9, weight_decay=None, name=None)

Momentum optimizer.

Momentum [1] is a method that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction \(\gamma\) of the update vector of the past time step to the current update vector:

\[\begin{aligned}
v_t &= \gamma v_{t-1} + \eta \nabla_\theta J(\theta) \\
\theta &= \theta - v_t
\end{aligned}\]
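For concreteness, here is a minimal NumPy sketch of this update rule; the function and variable names are illustrative and not part of the BrainPy API:

```python
import numpy as np

def momentum_step(theta, v, grad, lr=0.01, gamma=0.9):
    """One momentum update: v_t = gamma * v_{t-1} + lr * grad; theta = theta - v_t."""
    v = gamma * v + lr * grad  # accumulate a decaying sum of past gradients
    theta = theta - v          # step along the accumulated velocity
    return theta, v

# Example: minimize f(theta) = theta**2, whose gradient is 2 * theta.
theta, v = np.array(5.0), np.array(0.0)
for _ in range(100):
    theta, v = momentum_step(theta, v, grad=2 * theta)
```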
Parameters:

lr (float, Scheduler) – learning rate \(\eta\).

train_vars (dict of Variable, optional) – the variables to be optimized.

momentum (float) – the momentum coefficient \(\gamma\); defaults to 0.9.

weight_decay (float, optional) – the weight decay coefficient.

name (str, optional) – the name of the optimizer.
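A minimal usage sketch, assuming the usual BrainPy optimizer workflow in which update() takes a dict of gradients keyed like train_vars; the variable shape and gradient values here are illustrative:

```python
import brainpy.math as bm
from brainpy.optim import Momentum

w = bm.Variable(bm.zeros(10))  # a trainable variable
opt = Momentum(lr=0.1, train_vars={'w': w}, momentum=0.9)

grads = {'w': bm.ones(10)}     # gradients, e.g. produced by bm.grad
opt.update(grads)              # applies w <- w - v_t
```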

References

[1] Qian, N. (1999). On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1), 145–151.