brainpy.BPTT#

class brainpy.BPTT(target, loss_fun, optimizer=None, loss_has_aux=False, logger=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>, seed=None, shuffle_data=None, **kwargs)[source]#

The trainer implementing the back-propagation through time (BPTT) algorithm for training dynamical systems.

For more parameters, users should refer to DSRunner.

Parameters
  • target (DynamicalSystem) – The target model to train.

  • loss_fun (str, callable) –

    The loss function.

    • If it is a string, it should be the function chosen from brainpy.losses module.

    • Otherwise, a callable function which receives argument of (predicts, targets) should be provided.

    Note

    If monitors have been set in the trainer, predicts contains two parts: the network's history prediction outputs and the monitored values.

    See BrainPy examples for more information.

  • loss_has_aux (bool) –

    To indicate whether the loss function returns auxiliary data in addition to the loss. If so, the auxiliary data should be a dict whose keys name the logged items and whose values hold the corresponding data. For example,

    def loss_fun(predicts, targets):
        # ... compute `loss` and the auxiliary values from `predicts` and `targets` ...
        return loss, {'acc': acc, 'spike_num': spike_num}
    

  • optimizer (Optimizer) – The optimizer used for training. Should be an instance of Optimizer.

  • numpy_mon_after_run (bool) – Whether to convert the monitored results into NumPy arrays after the run.

  • logger (Any) – A file-like object (stream). Used to output the running results. Default is the current sys.stdout.

  • data_first_axis (str) – To indicate whether the first axis is the batch size (data_first_axis='B') or the time length (data_first_axis='T').
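
To tie these parameters together, the following is a minimal construction sketch, not the library's canonical example. It assumes net is any trainable DynamicalSystem you have already built (left as a placeholder below), uses a callable loss that follows the (predicts, targets) contract with loss_has_aux=True, and takes brainpy.optim.Adam as the Optimizer instance; adapt the names to your installed BrainPy version.

    import brainpy as bp
    import brainpy.math as bm

    # Placeholder: any trainable brainpy.DynamicalSystem can serve as the target.
    net = ...

    def mse_with_aux(predicts, targets):
        # A callable loss receives (predicts, targets).  With loss_has_aux=True it
        # returns (loss, aux_dict); each aux entry is logged under its dict key.
        loss = bm.mean(bm.square(predicts - targets))
        return loss, {'mse': loss}

    trainer = bp.BPTT(
        target=net,                        # the DynamicalSystem to train
        loss_fun=mse_with_aux,             # or a string naming a brainpy.losses function
        loss_has_aux=True,
        optimizer=bp.optim.Adam(lr=1e-3),  # any brainpy Optimizer instance
    )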

__init__(target, loss_fun, optimizer=None, loss_has_aux=False, logger=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>, seed=None, shuffle_data=None, **kwargs)#

Methods

__init__(target, loss_fun[, optimizer, ...])

cpu() – Move all variables into the CPU device.

cuda() – Move all variables into the GPU device.

fit(train_data[, test_data, num_epoch, ...]) – Fit the target model according to the given training data (see the usage sketches after this table).

get_hist_metric([phase, metric, which]) – Get history losses.

load_state_dict(state_dict[, warn]) – Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose]) – Load the model states.

nodes([method, level, include_self]) – Collect all children nodes.

predict(inputs[, reset_state, shared_args, ...]) – Prediction function.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables, ...)

reset_state() – Reset the state of the DSRunner.

run(*args, **kwargs) – Same as predict().

save_states(filename[, variables]) – Save the model states.

state_dict() – Returns a dictionary containing the whole state of the module.

to(device) – Moves all variables into the given device.

tpu() – Move all variables into the TPU device.

train_vars([method, level, include_self]) – The shortcut for retrieving all trainable variables.

tree_flatten() – Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values) – New in version 2.3.1.

unique_name([name, type_]) – Get the unique name for this object.

vars([method, level, include_self, ...]) – Collect all variables in this node and the children nodes.
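
A usage sketch of the training-related methods above, not an exhaustive reference: it assumes xs and ys are arrays laid out according to data_first_axis and that train_data is passed as an (inputs, targets) pair; keyword names follow the signatures listed in this table.

    # Train for a few epochs; test_data and the other keywords are optional.
    trainer.fit(train_data=(xs, ys), num_epoch=10)

    # Recorded losses are available as attributes or through the history helper.
    print(trainer.train_losses)
    print(trainer.get_hist_metric())

    # predict() (aliased by run()) produces the model outputs for new inputs.
    outputs = trainer.predict(xs, reset_state=True)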
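
A further sketch of the state-handling helpers: state_dict() and load_state_dict() exchange an in-memory mapping of variables, save_states() and load_states() go through a file (the filename and extension here are assumptions; use a format your BrainPy version supports), and train_vars() is the shortcut to the trainable variables.

    # In-memory round trip of the trainer's variables.
    snapshot = trainer.state_dict()
    trainer.load_state_dict(snapshot)

    # File-based saving and loading; the '.npz' name is only an assumption.
    trainer.save_states('bptt_checkpoint.npz')
    trainer.load_states('bptt_checkpoint.npz')

    # Shortcut to the trainable variables collected from the target model.
    trainable = trainer.train_vars()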

Attributes

name – Name of the model.

test_losses

train_losses