brainpy.optim module

Optimizers
Optimizer | Base Optimizer Class.
SGD | Stochastic gradient descent optimizer.
Momentum | Momentum optimizer.
MomentumNesterov | Nesterov accelerated gradient optimizer [2].
Adagrad | Optimizer that implements the Adagrad algorithm.
Adadelta | Optimizer that implements the Adadelta algorithm.
RMSProp | Optimizer that implements the RMSprop algorithm.
Adam | Optimizer that implements the Adam algorithm.
LARS | Layer-wise adaptive rate scaling (LARS) optimizer [1].
Adan | Adaptive Nesterov Momentum Algorithm (Adan) for faster optimization of deep models [1].
AdamW | Adam with weight decay regularization [1].
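The optimizers above share a common usage pattern: construct the optimizer with a learning rate and the trainable variables, then call update() with the gradients. The sketch below assumes the usual BrainPy pattern of an `lr`/`train_vars` constructor pair and an `update()` method that takes a dict of gradients; verify the exact signatures against your installed version.

    import brainpy as bp
    import brainpy.math as bm

    # A toy set of trainable variables (a dict of TrainVar is a common container).
    train_vars = {'w': bm.TrainVar(bm.ones(3)), 'b': bm.TrainVar(bm.zeros(3))}

    # Build an optimizer; `lr` and `train_vars` follow the common constructor
    # pattern (assumption -- check the class signature for your BrainPy version).
    opt = bp.optim.Adam(lr=1e-3, train_vars=train_vars)

    # Gradients are supplied as a dict keyed like `train_vars`; the values here
    # are made up purely for illustration.
    grads = {'w': bm.ones(3) * 0.1, 'b': bm.ones(3) * 0.01}
    opt.update(grads)

    print(train_vars['w'])  # the parameters have taken one Adam step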
Schedulers
Scheduler | The learning rate scheduler.
StepLR | Decays the learning rate of each parameter group by gamma every step_size epochs.
MultiStepLR | Decays the learning rate of each parameter group by gamma once the number of epochs reaches one of the milestones.
CosineAnnealingLR | Sets the learning rate of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial lr and \(T_{cur}\) is the number of epochs since the last restart in SGDR.
CosineAnnealingWarmRestarts | Sets the learning rate of each parameter group using a cosine annealing schedule with warm restarts (SGDR).
ExponentialLR | Decays the learning rate of each parameter group by gamma every epoch.
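A scheduler instance is typically passed to an optimizer in place of a constant learning rate. A minimal sketch, assuming a `StepLR(lr, step_size, gamma)` signature as described above (verify the exact argument names against your installed version):

    import brainpy as bp
    import brainpy.math as bm

    train_vars = {'w': bm.TrainVar(bm.zeros(10))}

    # Assumed signature: StepLR(lr, step_size, gamma), i.e. the initial rate is
    # decayed by `gamma` every `step_size` epochs.
    lr = bp.optim.StepLR(0.1, step_size=10, gamma=0.5)

    # A scheduler is passed wherever a constant learning rate would go.
    opt = bp.optim.SGD(lr=lr, train_vars=train_vars)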