brainpy.LoopOverTime

class brainpy.LoopOverTime(target, out_vars=None, no_state=False, t0=0.0, i0=0, dt=None, shared_arg=None, data_first_axis='T', name=None, jit=True, remat=False)

Transform a single-step DynamicalSystem into a multi-step forward-propagation BrainPyObject.

Note

This object transforms a DynamicalSystem into a BrainPyObject.

If the target works in batching mode, reset its state with .reset_state(batch_size), using the same batch size as the given data, before sending the data into the wrapped object.

For more flexible customization, we recommend using for_loop() or DSRunner.

Examples

This model can be used for network training:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>>
>>> n_time, n_batch, n_in = 30, 128, 100
>>> model = bp.Sequential(l1=bp.layers.RNNCell(n_in, 20),
>>>                       l2=bm.relu,
>>>                       l3=bp.layers.RNNCell(20, 2))
>>> over_time = bp.LoopOverTime(model, data_first_axis='T')
>>> over_time.reset_state(n_batch)
>>>
>>> hist_l3 = over_time(bm.random.rand(n_time, n_batch, n_in))
>>> print(hist_l3.shape)
(30, 128, 2)
>>>
>>> # monitor the "l1" layer state
>>> over_time = bp.LoopOverTime(model, out_vars=model['l1'].state, data_first_axis='T')
>>> over_time.reset_state(n_batch)
>>> hist_l3, hist_l1 = over_time(bm.random.rand(n_time, n_batch, n_in))
>>> print(hist_l3.shape)
(30, 128, 2)
>>> print(hist_l1.shape)
(30, 128, 20)
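
When out_vars is provided, the call returns a tuple: the target's outputs over time, followed by the monitored variable histories in the same structure as out_vars.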

It can also be used in brain simulation models:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>> import matplotlib.pyplot as plt
>>>
>>> hh = bp.neurons.HH(1)
>>> over_time = bp.LoopOverTime(hh, out_vars=hh.V)
>>>
>>> # running with a given duration
>>> _, potentials = over_time(100.)
>>> plt.plot(bm.as_numpy(potentials), label='with given duration')
>>>
>>> # running with the given inputs
>>> _, potentials = over_time(bm.ones(1000) * 5)
>>> plt.plot(bm.as_numpy(potentials), label='with given inputs')
>>> plt.legend()
>>> plt.show()

(Figure: HH membrane potential traces for the duration-driven and input-driven runs.)
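
The integration step and start time can also be set explicitly. A minimal sketch, assuming that a scalar duration is converted into int(duration / dt) loop steps; the expected shape is inferred from that assumption rather than from a verified run:

>>> hh = bp.neurons.HH(1)
>>> over_time = bp.LoopOverTime(hh, out_vars=hh.V, t0=0., dt=0.05)
>>> _, potentials = over_time(100.)  # 100 ms at dt=0.05 -> about 2000 steps
>>> print(potentials.shape)  # expected (2000, 1): (n_steps, n_neurons)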
Parameters:
  • target (DynamicalSystem) – The target to transform.

  • no_state (bool) –

    Denoting whether the target is stateless, i.e., whether it runs without the shared time/index arguments.

    • For stateless ANN layers, like Dense or Conv2d, setting no_state=True is highly efficient. Because \(Y[t]\) relies only on \(X[t]\), it is unnecessary to compute \(Y[t]\) step by step. In this case, we reshape the input from shape = [T, N, *] to shape = [T*N, *], send the data into the object in a single call, and reshape the output back to shape = [T, N, *]. In this way, the computation over different time steps is parallelized; see the sketch after this parameter list.

  • out_vars (PyTree) – The variables to monitor over the time loop.

  • t0 (float, optional) – The start time to run the system. If None, t will no longer be generated in the loop.

  • i0 (int, optional) – The start index to run the system. If None, i will no longer be generated in the loop.

  • dt (float) – The time step.

  • shared_arg (dict) – The shared arguments across the nodes. For instance, shared_arg={‘fit’: False} for the prediction phase.

  • data_first_axis (str) – Denotes whether the input data is time-major. If data_first_axis='T', the data is treated as (time, batch, …) when the target is in batching mode. Default is 'T'.

  • name (str) – The transformed object name.

  • jit (bool) – Whether to JIT-compile the loop. Default is True.

  • remat (bool) – Whether to apply rematerialization (gradient checkpointing) to each step to reduce memory usage during training. Default is False.
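
A minimal sketch of the no_state fast path described above, assuming a stateless Dense layer; the printed shape follows from the reshape logic rather than from a verified run:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>>
>>> # Dense is stateless: Y[t] depends only on X[t], so the whole time
>>> # axis can be folded into the batch axis and computed in one call.
>>> layer = bp.layers.Dense(100, 10)
>>> over_time = bp.LoopOverTime(layer, no_state=True)
>>> ys = over_time(bm.random.rand(30, 128, 100))  # input shape (T, N, n_in)
>>> print(ys.shape)  # expected (30, 128, 10), i.e. (T, N, n_out)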

__init__(target, out_vars=None, no_state=False, t0=0.0, i0=0, dt=None, shared_arg=None, data_first_axis='T', name=None, jit=True, remat=False)

Methods

__init__(target[, out_vars, no_state, t0, ...])

clear_input()

cpu()

Move all variables into the CPU device.

cuda()

Move all variables into the GPU device.

get_delay_data(identifier, delay_step, *indices)

Get delay data according to the provided delay steps.

load_state_dict(state_dict[, warn, compatible])

Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_delay(identifier, delay_step, ...)

Register delay variable.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables[, var_cls])

reset(*args, **kwargs)

Reset function which resets all the variables in the model.

reset_local_delays([nodes])

Reset local delay variables.

reset_state([batch_size])

Reset function which resets the states in the model.

save_states(filename[, variables])

Save the model states.

state_dict()

Returns a dictionary containing a whole state of the module.

to(device)

Moves all variables into the given device.

tpu()

Move all variables into the TPU device.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

tree_flatten()

Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

Unflatten the data to construct an object of this class.

unique_name([name, type_])

Get the unique name for this object.

update(*args, **kwargs)

The function to specify the updating rule.

update_local_delays([nodes])

Update local delay variables.

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.

Attributes

global_delay_data

Global delay data, which stores the delay variables and corresponding delay targets.

mode

Mode of the model, which is useful to control the multiple behaviors of the model.

name

Name of the model.