brainpy.optim.SGD#

class brainpy.optim.SGD(lr, train_vars=None, weight_decay=None, name=None)[source]#

Stochastic gradient descent optimizer.

SGD performs a parameter update for each training example \(x\) and label \(y\):

\[\theta = \theta - \eta \cdot \nabla_\theta J(\theta; x; y)\]
Parameters

lr (float, Scheduler) – learning rate.
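The update rule above can be sketched in plain Python (a self-contained stand-in for illustration, not the actual BrainPy implementation):

```python
import numpy as np

def sgd_step(theta, grad, lr=0.1):
    """One SGD update: theta <- theta - lr * grad."""
    return theta - lr * grad

# Minimise J(theta) = theta**2, whose gradient is 2 * theta.
theta = np.array(5.0)
for _ in range(100):
    grad = 2.0 * theta
    theta = sgd_step(theta, grad, lr=0.1)

print(theta)  # shrinks toward the minimum at 0
```

Each step scales `theta` by `1 - lr * 2`, so with `lr=0.1` the iterate decays geometrically toward the optimum; in BrainPy, `lr` may also be a `Scheduler` that varies the step size over training.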

__init__(lr, train_vars=None, weight_decay=None, name=None)[source]#

Methods

__init__(lr[, train_vars, weight_decay, name])

check_grads(grads)

cpu()

Move all variables into the CPU device.

cuda()

Move all variables into the GPU device.

load_state_dict(state_dict[, warn])

Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables, ...)

register_train_vars([train_vars])

register_vars([train_vars])

save_states(filename[, variables])

Save the model states.

state_dict()

Returns a dictionary containing a whole state of the module.

to(device)

Moves all variables into the given device.

tpu()

Move all variables into the TPU device.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

tree_flatten()

Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

New in version 2.3.1.

unique_name([name, type_])

Get the unique name for this object.

update(grads)

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.

Attributes

name

Name of the model.