brainpy.optim.MultiStepLR

class brainpy.optim.MultiStepLR(lr, milestones, gamma=0.1, last_epoch=-1)

Decays the learning rate of each parameter group by gamma once the number of epochs reaches one of the milestones. Note that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, the initial lr is set to lr.

Parameters:
  • lr (float) – Initial learning rate.

  • milestones (sequence of int) – List of epoch indices. Must be increasing.

  • gamma (float) – Multiplicative factor of learning rate decay. Default: 0.1.

  • last_epoch (int) – The index of last epoch. Default: -1.

__init__(lr, milestones, gamma=0.1, last_epoch=-1)
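Examples

A minimal usage sketch. The milestone values and the train(...) call are illustrative assumptions; only the constructor arguments and step_epoch() are taken from this page.

>>> import brainpy as bp
>>> # lr = 0.1    if epoch < 30
>>> # lr = 0.01   if 30 <= epoch < 80
>>> # lr = 0.001  if epoch >= 80
>>> scheduler = bp.optim.MultiStepLR(lr=0.1, milestones=[30, 80], gamma=0.1)
>>> for epoch in range(100):
...     train(...)              # hypothetical per-epoch training step
...     scheduler.step_epoch()  # advance the epoch counter past any milestones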

Methods

__init__(lr, milestones[, gamma, last_epoch])

cpu()

Move all variables to the CPU device.

cuda()

Move all variables to the GPU device.

load_state_dict(state_dict[, warn, compatible])

Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables[, var_cls])

save_states(filename[, variables])

Save the model states.

state_dict()

Returns a dictionary containing the whole state of the module.

step_call()

step_epoch()

to(device)

Move all variables to the given device.

tpu()

Move all variables to the TPU device.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

tree_flatten()

Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

Unflatten the data to construct an object of this class.

unique_name([name, type_])

Get the unique name for this object.

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.

Attributes

name

Name of the model.