MomentumNesterov


class brainpy.optim.MomentumNesterov(lr, train_vars=None, weight_decay=None, momentum=0.9, name=None)[source]

Nesterov accelerated gradient optimizer [2].

\[\begin{aligned}
v_t &= \gamma v_{t-1} + \eta \nabla_\theta J(\theta - \gamma v_{t-1}) \\
\theta &= \theta - v_t
\end{aligned}\]

where \(\gamma\) is the momentum coefficient and \(\eta\) is the learning rate. Unlike classical momentum, the gradient is evaluated at the look-ahead point \(\theta - \gamma v_{t-1}\) rather than at \(\theta\), which lets the optimizer correct its course before overshooting.
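The update rule can be sketched in plain NumPy (this is an illustration of the mathematics, not the BrainPy implementation; `nesterov_update`, `grad_fn`, and the quadratic objective are assumptions made for the example):

```python
import numpy as np

def nesterov_update(theta, v, grad_fn, lr=0.1, momentum=0.9):
    # Evaluate the gradient at the look-ahead point theta - gamma * v_{t-1}.
    lookahead = theta - momentum * v
    # v_t = gamma * v_{t-1} + eta * grad J(lookahead)
    v_new = momentum * v + lr * grad_fn(lookahead)
    # theta = theta - v_t
    theta_new = theta - v_new
    return theta_new, v_new

# Minimize J(theta) = 0.5 * theta**2, whose gradient is simply theta.
theta, v = np.array([5.0]), np.zeros(1)
for _ in range(100):
    theta, v = nesterov_update(theta, v, lambda x: x)
# theta converges toward the minimum at 0.
```

On this quadratic objective the iterates contract by a constant factor per step, so `theta` approaches the minimizer at 0.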
Parameters:

lr (float, Scheduler) – learning rate.

train_vars (dict of Variable, optional) – the trainable variables to be optimized.

weight_decay (float, optional) – weight decay (L2 penalty) coefficient.

momentum (float) – the momentum coefficient \(\gamma\). Default: 0.9.

name (str, optional) – the optimizer name.

References