brainpy.optim module

Optimizers

Optimizer

Base optimizer class.

SGD

Stochastic gradient descent optimizer.

Momentum

Momentum optimizer.

MomentumNesterov

Nesterov accelerated gradient optimizer.

Adagrad

Optimizer that implements the Adagrad algorithm.

Adadelta

Optimizer that implements the Adadelta algorithm.

RMSProp

Optimizer that implements the RMSprop algorithm.

Adam

Optimizer that implements the Adam algorithm.

LARS

Layer-wise adaptive rate scaling (LARS) optimizer.

Adan

Adaptive Nesterov momentum (Adan) optimizer for faster optimization of deep models.

AdamW

Adam optimizer with decoupled weight decay regularization.
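
The classes above are optimizer implementations. As a rough, framework-free illustration of the update rules behind two of them (plain Python, not BrainPy code or its API; the function names, the particular momentum convention, and the toy problem are assumptions for illustration only):

```python
import math

def momentum_step(param, grad, velocity, lr=0.01, momentum=0.9):
    # One common formulation of the Momentum update rule: accumulate a
    # velocity from past gradients, then move the parameter along it.
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam update rule: exponential moving averages of the gradient (m) and
    # the squared gradient (v), bias-corrected at step t (t starts at 1).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# Toy problem: minimize 0.5 * w**2, whose gradient is simply w.
w_mom, vel = 5.0, 0.0
w_adam, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    w_mom, vel = momentum_step(w_mom, w_mom, vel, lr=0.05)
    w_adam, m, v = adam_step(w_adam, w_adam, m, v, t, lr=0.1)
print(w_mom, w_adam)  # both parameters approach the minimum at 0
```

In brainpy.optim these are classes, typically constructed with a learning rate (a number or one of the schedulers below) and applied to a set of trainable variables; see each class's documentation for the exact constructor arguments.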

Schedulers

make_schedule

Scheduler

Base learning rate scheduler class.

Constant

Constant learning rate scheduler (no decay).

StepLR

Decays the learning rate of each parameter group by gamma every step_size epochs.

MultiStepLR

Decays the learning rate of each parameter group by gamma once the number of epochs reaches one of the milestones.

CosineAnnealingLR

Set the learning rate of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial lr and \(T_{cur}\) is the number of epochs since the last restart in SGDR:

\(\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)\)

This rule is evaluated numerically in the sketch after this list.

CosineAnnealingWarmRestarts

Set the learning rate of each parameter group using a cosine annealing schedule with warm restarts (SGDR).

ExponentialLR

Decays the learning rate of each parameter group by gamma every epoch.

ExponentialDecayLR

ExponentialDecay

InverseTimeDecayLR

InverseTimeDecay

PolynomialDecayLR

PolynomialDecay

PiecewiseConstantLR

PiecewiseConstant
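
The schedulers above adapt the learning rate as training progresses. Below is a rough, framework-free sketch of the shape of three of the decay rules described in this list (StepLR, MultiStepLR, CosineAnnealingLR); the function names, the epoch-based counting, and the example hyperparameters are assumptions for illustration, not the module's API:

```python
import math

def step_lr(base_lr, epoch, step_size, gamma):
    # StepLR rule: decay by gamma every `step_size` epochs.
    return base_lr * gamma ** (epoch // step_size)

def multi_step_lr(base_lr, epoch, milestones, gamma):
    # MultiStepLR rule: decay by gamma each time `epoch` reaches a milestone.
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

def cosine_annealing_lr(base_lr, epoch, T_max, eta_min=0.0):
    # CosineAnnealingLR rule: cosine curve from base_lr down to eta_min over T_max epochs.
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / T_max))

for epoch in (0, 10, 30, 60, 90):
    print(epoch,
          round(step_lr(0.1, epoch, step_size=30, gamma=0.1), 6),
          round(multi_step_lr(0.1, epoch, milestones=(30, 80), gamma=0.1), 6),
          round(cosine_annealing_lr(0.1, epoch, T_max=100), 6))
```

Whether a given class counts epochs or update steps, and how it treats boundary values, may differ from this sketch; in BrainPy the scheduler classes are typically passed to an optimizer in place of a fixed learning rate.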