brainpy.optim.LARS#

class brainpy.optim.LARS(lr, train_vars=None, momentum=0.9, weight_decay=0.0001, tc=0.001, eps=1e-05, name=None)[source]#

Layer-wise adaptive rate scaling (LARS) optimizer [1].

Layer-wise Adaptive Rate Scaling (LARS) is a large-batch optimization technique. It differs from other adaptive algorithms such as Adam or RMSProp in two notable ways: first, LARS uses a separate learning rate for each layer rather than for each weight; second, the magnitude of the update is controlled with respect to the weight norm, giving better control of training speed.

\[\begin{split}
m_{t} &= \beta_{1} m_{t-1} + \left(1-\beta_{1}\right)\left(g_{t} + \lambda x_{t}\right) \\
x_{t+1}^{(i)} &= x_{t}^{(i)} - \eta_{t}\, \frac{\phi\left(\| x_{t}^{(i)} \|\right)}{\| m_{t}^{(i)} \|}\, m_{t}^{(i)}
\end{split}\]
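Written out in code, one LARS step for a single layer looks roughly like the following. This is a minimal NumPy sketch of the equation above with \(\phi(z) = tc \cdot z\), not BrainPy's internal implementation; the function name and argument layout are illustrative only, though the defaults mirror the constructor signature.

```python
import numpy as np

def lars_step(param, grad, momentum_buf, lr,
              momentum=0.9, weight_decay=1e-4, tc=1e-3, eps=1e-5):
    """One LARS step for one layer (illustrative sketch)."""
    # Gradient plus weight decay: g_t + lambda * x_t
    g = grad + weight_decay * param
    # Momentum accumulation: m_t = beta1 * m_{t-1} + (1 - beta1) * (g_t + lambda * x_t)
    momentum_buf = momentum * momentum_buf + (1.0 - momentum) * g
    # Layer-wise trust ratio phi(||x_t||) / ||m_t||, with phi(z) = tc * z
    w_norm = np.linalg.norm(param)
    m_norm = np.linalg.norm(momentum_buf)
    trust_ratio = tc * w_norm / (m_norm + eps)
    # Scaled update: x_{t+1} = x_t - lr * trust_ratio * m_t
    param = param - lr * trust_ratio * momentum_buf
    return param, momentum_buf
```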
Parameters:
  • lr (float, Scheduler) – learning rate.

  • train_vars (dict) – the trainable variables to be optimized.

  • momentum (float) – coefficient used for the moving average of the gradient.

  • weight_decay (float) – weight decay coefficient.

  • tc (float) – trust coefficient η (< 1) used for the trust-ratio computation.

  • eps (float) – epsilon used for the trust-ratio computation.

  • name (str) – the name of the optimizer.
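A minimal usage sketch follows. The single TrainVar and the dummy gradient dict are illustrative; in practice train_vars usually comes from a model's train_vars() and the gradients from brainpy.math.grad over a loss.

```python
import brainpy as bp
import brainpy.math as bm

# A single trainable variable standing in for a model's weights
# (illustrative; real code would typically pass model.train_vars()).
w = bm.TrainVar(bm.ones((10, 5)))

# Build the optimizer over a dict of trainable variables.
opt = bp.optim.LARS(lr=0.1, train_vars={'w': w},
                    momentum=0.9, weight_decay=1e-4)

# Gradients are supplied as a dict with matching keys (dummy values
# here; in practice they come from brainpy.math.grad over a loss).
grads = {'w': bm.ones((10, 5))}
opt.update(grads)
```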

References

[1] You, Y., Gitman, I., & Ginsburg, B. (2017). Large Batch Training of Convolutional Networks. arXiv preprint arXiv:1708.03888.

__init__(lr, train_vars=None, momentum=0.9, weight_decay=0.0001, tc=0.001, eps=1e-05, name=None)[source]#

Methods

__init__(lr[, train_vars, momentum, ...])

check_grads(grads)

cpu()

Move all variables into the CPU device.

cuda()

Move all variables into the GPU device.

load_state_dict(state_dict[, warn, compatible])

Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables[, var_cls])

register_train_vars([train_vars])

register_vars([train_vars])

save_states(filename[, variables])

Save the model states.

state_dict()

Returns a dictionary containing the whole state of the module.

to(device)

Moves all variables into the given device.

tpu()

Move all variables into the TPU device.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

tree_flatten()

Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

Unflatten the data to construct an object of this class.

unique_name([name, type_])

Get the unique name for this object.

update(grads)

Update the trainable variables according to the given gradients.

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.

Attributes

name

Name of the model.