brainpy.LoopOverTime

class brainpy.LoopOverTime(target, out_vars=None, no_state=False, name=None)

Transform a single-step DynamicalSystem into a BrainPyObject that performs multiple-step forward propagation.

Note

This object transforms a DynamicalSystem into a BrainPyObject.

If the target works in batching mode, reset its state with the same batch size as the given data (.reset_state(batch_size)) before sending the data into the wrapped object.

For more flexible customization, we recommend using for_loop() or DSRunner.
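
As a point of comparison, here is a minimal DSRunner sketch for the same kind of multi-step run; the monitored variable name 'V' and the 100-ms duration are illustrative assumptions, not fixed by this API:

>>> import brainpy as bp
>>>
>>> hh = bp.neurons.HH(1)
>>> runner = bp.DSRunner(hh, monitors=['V'])  # record the membrane potential
>>> runner.run(100.)                          # simulate for 100 ms
>>> history = runner.mon['V']                 # shape: (num_steps, num_neurons)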

Examples

This model can be used for network training:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>>
>>> n_time, n_batch, n_in = 30, 128, 100
>>> model = bp.Sequential(l1=bp.layers.RNNCell(n_in, 20),
>>>                       l2=bm.relu,
>>>                       l3=bp.layers.RNNCell(20, 2))
>>> over_time = bp.LoopOverTime(model)
>>> over_time.reset_state(n_batch)
>>>
>>> hist_l3 = over_time(bm.random.rand(n_time, n_batch, n_in), data_first_axis='T')
>>> print(hist_l3.shape)
(30, 128, 2)
>>>
>>> # monitor the "l1" layer state
>>> over_time = bp.LoopOverTime(model, out_vars=model['l1'].state)
>>> over_time.reset_state(n_batch)
>>> hist_l3, hist_l1 = over_time(bm.random.rand(n_time, n_batch, n_in), data_first_axis='T')
>>> print(hist_l3.shape)
(30, 128, 2)
>>> print(hist_l1.shape)
(30, 128, 20)

It can also be used in brain simulation models:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>> import matplotlib.pyplot as plt
>>>
>>> hh = bp.neurons.HH(1)
>>> over_time = bp.LoopOverTime(hh, out_vars=hh.V)
>>>
>>> # running with a given duration
>>> _, potentials = over_time(100.)
>>> plt.plot(bm.as_numpy(potentials), label='with given duration')
>>>
>>> # running with the given inputs
>>> _, potentials = over_time(bm.ones(1000) * 5)
>>> plt.plot(bm.as_numpy(potentials), label='with given inputs')
>>> plt.legend()
>>> plt.show()

[Figure: membrane potential traces produced by the two runs above.]
Parameters
  • target (DynamicalSystem) – The target to transform.

  • out_vars (PyTree) – The variables to monitor over the time loop.

  • no_state (bool) –

    Denotes whether the target is stateless, i.e., it carries no hidden state and needs no shared arguments.

    • For stateless ANN layers, like Dense or Conv2d, setting no_state=True is highly efficient. This is because \(Y[t]\) relies only on \(X[t]\), so it is unnecessary to compute \(Y[t]\) step-by-step. In this case, the input is reshaped from shape = [T, N, *] to shape = [T*N, *], sent through the object in a single call, and the output is reshaped back to shape = [T, N, *]. In this way, the computation over different time steps is parallelized (see the sketch after this list).

  • name (str) – The transformed object name.
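
Below is a minimal sketch of the no_state shortcut. The layer sizes and input shapes are illustrative assumptions, not fixed by the API:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>>
>>> # a stateless Dense layer: Y[t] depends only on X[t]
>>> layer = bp.layers.Dense(100, 10)
>>> over_time = bp.LoopOverTime(layer, no_state=True)
>>>
>>> # 30 time steps, batch of 8; all steps go through in one parallel pass
>>> out = over_time(bm.random.rand(30, 8, 100))
>>> print(out.shape)
(30, 8, 10)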

__init__(target, out_vars=None, no_state=False, name=None)

Methods

__init__(target[, out_vars, no_state, name])

cpu()

Move all variables into the CPU device.

cuda()

Move all variables into the GPU device.

load_state_dict(state_dict[, warn])

Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_implicit_nodes(*nodes[, node_cls])

Register implicit nodes.

register_implicit_vars(*variables, ...)

Register implicit variables.

reset([batch_size])

Reset function which resets all variables in the model.

reset_state([batch_size])

Reset function which resets the states in the model.

save_states(filename[, variables])

Save the model states.

state_dict()

Returns a dictionary containing the whole state of the module.

to(device)

Moves all variables into the given device.

tpu()

Move all variables into the TPU device.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

tree_flatten()

Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

Unflatten the data to construct an object of this class.

New in version 2.3.1.

unique_name([name, type_])

Get the unique name for this object.

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.
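
As a hedged sketch, state_dict() and load_state_dict() can be paired to snapshot and restore a wrapped model's variables; the round trip below is illustrative, not a prescribed workflow:

>>> import brainpy as bp
>>>
>>> over_time = bp.LoopOverTime(bp.neurons.HH(1))
>>> snapshot = over_time.state_dict()     # capture all variable values
>>> # ... run the model, mutating its state ...
>>> over_time.load_state_dict(snapshot)   # restore the captured values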

Attributes

name

Name of the model.