brainpy.optimizers.LARS#

class brainpy.optimizers.LARS(lr, train_vars=None, momentum=0.9, weight_decay=0.0001, tc=0.001, eps=1e-05, name=None)[source]#

Layer-wise adaptive rate scaling (LARS) optimizer [You et al., 2017, Large Batch Training of Convolutional Networks]. LARS rescales each layer's update by a "trust ratio" computed from the norms of that layer's weights and gradients, which stabilizes training with very large batch sizes.

Parameters
  • lr (float) – learning rate.

  • train_vars (optional) – the trainable variables to be optimized.

  • momentum (float) – coefficient used for the moving average of the gradient.

  • weight_decay (float) – weight decay coefficient.

  • tc (float) – trust coefficient η (< 1) used to compute the trust ratio.

  • eps (float) – epsilon added to the denominator of the trust ratio for numerical stability.

  • name (str, optional) – the optimizer name.
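The per-layer update these parameters imply can be sketched as follows, using the notation of the LARS paper (tc is η, weight_decay is β, momentum is μ, lr is γ; the exact placement of ε may differ slightly in the implementation):

```latex
\lambda^{(l)} = \frac{\eta \,\lVert w^{(l)} \rVert}
                     {\lVert \nabla L(w^{(l)}) \rVert + \beta \,\lVert w^{(l)} \rVert + \epsilon}
\qquad
m^{(l)}_{t+1} = \mu \, m^{(l)}_{t} + \lambda^{(l)} \left( \nabla L(w^{(l)}) + \beta \, w^{(l)} \right)
\qquad
w^{(l)}_{t+1} = w^{(l)}_{t} - \gamma \, m^{(l)}_{t+1}
```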

__init__(lr, train_vars=None, momentum=0.9, weight_decay=0.0001, tc=0.001, eps=1e-05, name=None)[source]#

Methods

__init__(lr[, train_vars, momentum, ...])

check_grads(grads)

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_implicit_nodes(*nodes, **named_nodes)

register_implicit_vars(*variables, ...)

register_vars([train_vars])

save_states(filename[, variables])

Save the model states.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

unique_name([name, type_])

Get the unique name for this object.

update(grads)

vars([method, level, include_self])

Collect all variables in this node and the children nodes.
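The core computation behind `update(grads)` can be illustrated with a standalone NumPy sketch of one LARS step for a single layer. This is not BrainPy's implementation; the function name `lars_step` is hypothetical, and its keyword arguments simply mirror the constructor parameters documented above.

```python
import numpy as np

def lars_step(w, grad, m, lr=0.01, momentum=0.9,
              weight_decay=1e-4, tc=1e-3, eps=1e-5):
    """One LARS update for a single layer's weights.

    A standalone NumPy sketch of the algorithm (not BrainPy's code);
    argument names mirror the LARS constructor parameters.
    """
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(grad)
    # Layer-wise trust ratio: tc * ||w|| / (||g|| + weight_decay * ||w|| + eps).
    trust_ratio = tc * w_norm / (g_norm + weight_decay * w_norm + eps)
    # Momentum buffer accumulates the scaled, weight-decayed gradient.
    m_new = momentum * m + trust_ratio * (grad + weight_decay * w)
    # Gradient-descent step with the momentum buffer.
    return w - lr * m_new, m_new

# Usage: one step on a toy weight matrix.
w = np.ones((3, 3))
m = np.zeros_like(w)
g = 0.1 * np.ones_like(w)
w, m = lars_step(w, g, m)
```

In BrainPy, the equivalent per-layer updates are applied to all registered trainable variables when `update(grads)` is called with the gradients collected for those variables.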

Attributes

name

Name of the model.