SGD

class brainpy.optim.SGD(lr, train_vars=None, weight_decay=None, name=None)

Stochastic gradient descent optimizer.

SGD performs a parameter update for each training example \(x\) and label \(y\):

\[\theta = \theta - \eta \cdot \nabla_\theta J(\theta; x; y)\]
Parameters:

lr (float, Scheduler) – learning rate.

train_vars (optional) – the trainable variables to be optimized.

weight_decay (float, optional) – weight decay (L2 penalty).

name (str, optional) – the optimizer name.
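
To make the update rule concrete, below is a minimal NumPy sketch of a single SGD step applied to a toy squared-error loss. It illustrates the formula \(\theta = \theta - \eta \cdot \nabla_\theta J(\theta; x; y)\) only; it is not brainpy's implementation, and the loss, data, and learning rate are invented for the example.

    import numpy as np

    def sgd_step(theta, grad, lr):
        # theta <- theta - lr * grad_theta J(theta; x, y)
        return theta - lr * grad

    # Toy problem: fit y = theta . x under the squared-error loss
    # J(theta; x, y) = 0.5 * (theta @ x - y) ** 2.
    x = np.array([1.0, 2.0, 3.0])
    y = 2.0
    theta = np.zeros(3)

    for _ in range(100):
        grad = (theta @ x - y) * x   # gradient of J with respect to theta
        theta = sgd_step(theta, grad, lr=0.05)

    print(theta @ x)  # converges close to y = 2.0

Note that when lr is given as a Scheduler rather than a plain float, the learning rate \(\eta\) is not fixed but varies over training steps according to the schedule.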