Gradient Descent Optimizers#

@Chaoming Wang @Xiaoyu Chen

Gradient descent is one of the most popular optimization methods. Combined with a loss function, gradient descent optimizers are at the core of machine learning, especially deep learning. In this section, we will learn:

  • how to use optimizers in BrainPy;

  • how to customize your own optimizer.

import brainpy as bp
import brainpy.math as bm

# bp.math.set_platform('cpu')
bp.__version__
'2.3.0'
import matplotlib.pyplot as plt

Optimizers in BrainPy#

The base optimizer class in BrainPy is brainpy.optim.Optimizer, whose subclasses include the following optimizers:

  • SGD

  • Momentum

  • Nesterov momentum

  • Adagrad

  • Adadelta

  • RMSProp

  • Adam

All supported optimizers can be inspected through the brainpy.optim APIs.

Generally, an optimizer is initialized with the learning rate lr, the trainable variables train_vars, and other hyperparameters specific to the optimizer.

  • lr can be a float, or an instance of brainpy.optim.Scheduler.

  • train_vars should be a dict of Variable.

Here we create an SGD optimizer.

a = bm.Variable(bm.ones((5, 4)))
b = bm.Variable(bm.zeros((3, 3)))

op = bp.optim.SGD(lr=0.001, train_vars={'a': a, 'b': b})

To update the parameters, you must provide the corresponding gradient for each parameter to the update() method.

op.update({'a': bm.random.random(a.shape), 'b': bm.random.random(b.shape)})

print('a:', a)
print('b:', b)
a: Variable([[0.9993626 , 0.9997406 , 0.999853  , 0.999312  ],
          [0.9993036 , 0.99934477, 0.9998294 , 0.9997739 ],
          [0.99900717, 0.9997449 , 0.99976104, 0.99953616],
          [0.9995185 , 0.99917144, 0.9990044 , 0.99914813],
          [0.9997468 , 0.9999408 , 0.99917686, 0.9999825 ]], dtype=float32)
b: Variable([[-0.00034196, -0.00046545, -0.00027317],
          [-0.00045028, -0.00076825, -0.00026088],
          [-0.0007135 , -0.00020507, -0.00073902]], dtype=float32)

You can process the gradients before applying them. For example, here we clip the gradients by a maximum L2-norm.

grads_pre = {'a': bm.random.random(a.shape), 'b': bm.random.random(b.shape)}

grads_pre
{'a': Array([[0.6356058 , 0.10750175, 0.93578255, 0.2557603 ],
        [0.77525663, 0.8615701 , 0.35919654, 0.6861898 ],
        [0.9569112 , 0.98981357, 0.3033744 , 0.62852013],
        [0.36589646, 0.86694443, 0.6335902 , 0.44947362],
        [0.01782513, 0.11465573, 0.5505476 , 0.56196713]], dtype=float32),
 'b': Array([[0.2326113 , 0.14437485, 0.6543677 ],
        [0.46068823, 0.9811108 , 0.30460846],
        [0.261765  , 0.71705794, 0.6173099 ]], dtype=float32)}
grads_post = bm.clip_by_norm(grads_pre, 1.)

grads_post
{'a': Array([[0.22753015, 0.0384828 , 0.33498552, 0.09155546],
        [0.2775215 , 0.30841944, 0.12858291, 0.24563788],
        [0.34254903, 0.3543272 , 0.10860006, 0.22499368],
        [0.13098131, 0.3103433 , 0.22680864, 0.16089973],
        [0.00638093, 0.04104374, 0.19708155, 0.20116945]], dtype=float32),
 'b': Array([[0.14066657, 0.08730751, 0.39571446],
        [0.27859107, 0.5933052 , 0.18420528],
        [0.15829663, 0.433625  , 0.3733046 ]], dtype=float32)}
op.update(grads_post)

print('a:', a)
print('b:', b)
a: Variable([[0.9991351 , 0.9997021 , 0.99951804, 0.99922043],
          [0.99902606, 0.9990364 , 0.99970084, 0.9995283 ],
          [0.9986646 , 0.99939054, 0.99965245, 0.99931115],
          [0.9993875 , 0.9988611 , 0.9987776 , 0.99898726],
          [0.9997404 , 0.99989974, 0.9989798 , 0.9997813 ]], dtype=float32)
b: Variable([[-0.00048263, -0.00055276, -0.00066889],
          [-0.00072887, -0.00136155, -0.00044508],
          [-0.00087179, -0.0006387 , -0.00111233]], dtype=float32)

Note

Optimizers usually have their own dynamically changing variables. If you JIT a function that contains an optimizer update, the dyn_vars passed to bm.jit() should include the variables returned by Optimizer.vars().

op.vars()  # the SGD optimizer only has a `step` variable that records the training step
{'Constant0.step': Variable([2], dtype=int32)}
bp.optim.Momentum(lr=0.001, train_vars={'a': a, 'b': b}).vars()  # Momentum has velocity variables
{'Momentum0.a_v': Variable([[0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.]], dtype=float32),
 'Momentum0.b_v': Variable([[0., 0., 0.],
           [0., 0., 0.],
           [0., 0., 0.]], dtype=float32),
 'Constant1.step': Variable([0], dtype=int32)}
bp.optim.Adam(lr=0.001, train_vars={'a': a, 'b': b}).vars()  # Adam has more variables
{'Adam0.a_m': Variable([[0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.]], dtype=float32),
 'Adam0.b_m': Variable([[0., 0., 0.],
           [0., 0., 0.],
           [0., 0., 0.]], dtype=float32),
 'Adam0.a_v': Variable([[0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.]], dtype=float32),
 'Adam0.b_v': Variable([[0., 0., 0.],
           [0., 0., 0.],
           [0., 0., 0.]], dtype=float32),
 'Constant2.step': Variable([0], dtype=int32)}
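
As a minimal sketch of the note above, the following shows how the optimizer's variables can be passed to bm.jit() together with the trainable variables. The parameter w and the hand-written quadratic-loss gradient are made up for illustration.

# Sketch: JIT-compiling a training step that calls the optimizer's update().
# The variable `w` and the toy gradient are hypothetical.
w = bm.Variable(bm.ones(10))
opt = bp.optim.SGD(lr=0.01, train_vars={'w': w})

def train_step(target):
    grad = 2. * (w - target)      # gradient of a simple quadratic loss
    opt.update({'w': grad})

# dyn_vars must cover both the trainable variables and the optimizer's own state
train_step_jit = bm.jit(train_step, dyn_vars={'w': w, **opt.vars()})
train_step_jit(bm.zeros(10))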

Creating a Self-Customized Optimizer#

To create your own optimization algorithm, simply inherit from the bp.optim.Optimizer class and override the following methods:

  • __init__(): init function that receives the learning rate (lr) and trainable variables (train_vars). Do not forget to register your dynamically changed variables into implicit_vars.

  • update(grads): update function that computes the updated parameters.

The general structure is shown below:

class CustomizeOp(bp.optim.Optimizer):
    def __init__(self, lr, train_vars, *params, **other_params):
        super(CustomizeOp, self).__init__(lr, train_vars)
        
        # customize your initialization
        
    def update(self, grads):
        # customize your update logic
        pass
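
As a concrete illustration, here is a sketch of plain gradient descent with an extra weight-decay term. The attribute self.vars_to_train and the call self.lr() follow BrainPy's built-in optimizers, but treat these names as assumptions and check the brainpy.optim source of your version.

class WeightDecaySGD(bp.optim.Optimizer):
    def __init__(self, lr, train_vars, weight_decay=1e-4):
        super(WeightDecaySGD, self).__init__(lr, train_vars)
        self.weight_decay = weight_decay  # extra hyperparameter of this optimizer

    def update(self, grads):
        lr = self.lr()  # current learning rate given by the scheduler
        for key, p in self.vars_to_train.items():
            # plain gradient descent plus an L2 weight-decay term
            p.value -= lr * (grads[key] + self.weight_decay * p.value)
        # built-in optimizers also advance the scheduler's step counter here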

Schedulers#

A scheduler adjusts the learning rate during training by reducing it according to a pre-defined schedule. Common learning rate schedules include time-based decay, step decay, and exponential decay.

Here we set up an exponential decay scheduler, in which the learning rate decays exponentially with the training step.

sc = bp.optim.ExponentialDecay(lr=0.1, decay_steps=2, decay_rate=0.99)
def show(steps, rates):
    plt.plot(steps, rates)
    plt.xlabel('Train Step')
    plt.ylabel('Learning Rate')
    plt.show()
steps = bm.arange(1000)
rates = sc(steps)

show(steps, rates)
[Figure: learning rate vs. train step, decaying exponentially]

After Optimizer initialization, the learning rate self.lr will always be an instance of bp.optim.Scheduler. Initializing with a scalar float learning rate results in a Constant scheduler.

op.lr
Constant(0.001)
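
Since lr also accepts a scheduler instance (see above), you can pass one directly at initialization. A quick sketch:

# passing a scheduler instance directly as the learning rate
op2 = bp.optim.SGD(lr=bp.optim.ExponentialDecay(lr=0.1, decay_steps=2, decay_rate=0.99),
                   train_vars={'a': a, 'b': b})
op2.lr  # the ExponentialDecay scheduler itself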

One can get the current learning rate value by calling Scheduler.__call__(i=None).

  • If i is not provided, the learning rate value will be evaluated at the built-in training step.

  • Otherwise, the learning rate value will be evaluated at the given step i.

op.lr()
0.001
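
For a non-constant scheduler, evaluating at explicit steps makes the decay visible. Using the ExponentialDecay scheduler sc defined above (the comments assume the usual exponential-decay formula lr * decay_rate ** (i / decay_steps)):

print(sc(0))    # the initial learning rate, 0.1
print(sc(500))  # a much smaller value after 500 steps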

BrainPy provides several commonly used learning rate schedulers:

  • Constant

  • ExponentialDecay

  • InverseTimeDecay

  • PolynomialDecay

  • PiecewiseConstant

For more details, please see the brainpy.optim APIs.

# InverseTimeDecay scheduler

rates = bp.optim.InverseTimeDecay(lr=0.01, decay_steps=10, decay_rate=0.999)(steps)
show(steps, rates)
[Figure: learning rate vs. train step under InverseTimeDecay]
# PolynomialDecay scheduler

rates = bp.optim.PolynomialDecay(lr=0.01, decay_steps=10, final_lr=0.0001)(steps)
show(steps, rates)
[Figure: learning rate vs. train step under PolynomialDecay]

Creating a Self-Customized Scheduler#

To implement your own scheduler, simply inherit from the bp.optim.Scheduler class and override the following methods:

  • __init__(): the init function.

  • __call__(i=None): the learning rate value evaluation.

class CustomizeScheduler(bp.optim.Scheduler):
    def __init__(self, lr, *params, **other_params):
        super(CustomizeScheduler, self).__init__(lr)
        
        # customize your initialization
        
    def __call__(self, i=None):
        # customize your learning rate evaluation
        pass
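
As a concrete illustration, here is a sketch of a step-decay scheduler that multiplies the learning rate by gamma every step_size steps. The attributes self.lr (the initial rate) and self.step (the built-in step counter) follow BrainPy's built-in schedulers, but treat these names as assumptions for your version.

class StepDecay(bp.optim.Scheduler):
    def __init__(self, lr, step_size=100, gamma=0.5):
        super(StepDecay, self).__init__(lr)
        self.step_size = step_size
        self.gamma = gamma

    def __call__(self, i=None):
        # fall back to the built-in step counter when no step is given
        i = self.step[0] if i is None else i
        return self.lr * self.gamma ** (i // self.step_size)

# visualize it with the helper defined earlier
show(steps, StepDecay(lr=0.1, step_size=100, gamma=0.5)(steps))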