brainpy.nn.nodes.RC.Reservoir#

class brainpy.nn.nodes.RC.Reservoir(num_unit, leaky_rate=0.3, activation='tanh', activation_type='internal', ff_initializer=Normal(scale=0.1, seed=None), rec_initializer=Normal(scale=0.1, seed=None), fb_initializer=Normal(scale=0.1, seed=None), bias_initializer=ZeroInit, ff_connectivity=0.1, rec_connectivity=0.1, fb_connectivity=0.1, conn_type='dense', spectral_radius=None, noise_ff=0.0, noise_rec=0.0, noise_fb=0.0, noise_type='normal', seed=None, trainable=False, **kwargs)[source]#

Reservoir node, a pool of leaky-integrator neurons with random recurrent connections [1].
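
A minimal construction sketch is shown below. It uses only parameters and methods documented on this page; the standalone initialization/forward calls and the (batch, feature) shape convention passed to set_feedforward_shapes are assumptions, since in practice the node is usually composed into a larger network before use.

import brainpy as bp
import brainpy.math as bm

# Build a reservoir of 100 leaky-integrator units. All keyword values below
# are illustrative, not recommendations.
reservoir = bp.nn.nodes.RC.Reservoir(
    num_unit=100,
    leaky_rate=0.3,
    activation='tanh',
    activation_type='internal',
    rec_connectivity=0.1,
    conn_type='dense',
    spectral_radius=1.25,   # rescale recurrent weights; None keeps them as sampled
    seed=42,
)

# Declare the feedforward input size, then build weights and state.
# The shape convention used here is an assumption.
reservoir.set_feedforward_shapes((None, 3))
reservoir.initialize(num_batch=1)

# One update step of the reservoir dynamics on a single input vector.
state = reservoir.forward(bm.ones((1, 3)))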

Parameters
  • num_unit (int) – The number of reservoir nodes.

  • ff_initializer (Initializer) – The initialization method for the feedforward connections.

  • rec_initializer (Initializer) – The initialization method for the recurrent connections.

  • fb_initializer (optional, Tensor, Initializer) – The initialization method for the feedback connections.

  • bias_initializer (optional, Tensor, Initializer) – The initialization method for the bias.

  • leaky_rate (float) – The leaky rate of the reservoir state, a float between 0 and 1. By default 0.3.

  • activation (str, callable, optional) – Reservoir activation function. If a str, it should be the name of a brainpy.math.activations function. If a callable, it should be an element-wise operator on tensors.

  • activation_type (str) –

    • If “internal” (default), then leaky integration happens on the states transformed by the activation function:

      \[r[n+1] = (1 - \alpha) \cdot r[n] + \alpha \cdot f(W_{ff} \cdot u[n] + W_{fb} \cdot b[n] + W_{rec} \cdot r[n])\]

    • If “external”, then leaky integration happens on the internal state of each neuron, stored in an internal_state parameter (\(x\) in the equation below). A neuron's internal state is the value of its state before the activation function \(f\) is applied:

      \[\begin{split}x[n+1] &= (1 - \alpha) \cdot x[n] + \alpha \cdot (W_{ff} \cdot u[n] + W_{rec} \cdot r[n] + W_{fb} \cdot b[n]) \\ r[n+1] &= f(x[n+1])\end{split}\]

    A NumPy sketch of both update rules is given after this parameter list.

  • ff_connectivity (float, optional) – Connectivity of input neurons, i.e. ratio of input neurons connected to reservoir neurons. Must be in [0, 1], by default 0.1

  • rec_connectivity (float, optional) – Connectivity of recurrent weights matrix, i.e. ratio of reservoir neurons connected to other reservoir neurons, including themselves. Must be in [0, 1], by default 0.1

  • fb_connectivity (float, optional) – Connectivity of feedback neurons, i.e. ratio of feedback neurons connected to reservoir neurons. Must be in [0, 1], by default 0.1

  • conn_type (str) – The connectivity type, can be “dense” or “sparse”.

  • spectral_radius (float, optional) – Spectral radius of the recurrent weight matrix. If given, the recurrent weights are rescaled so that their largest absolute eigenvalue equals this value. By default None (no rescaling).

  • noise_ff (float, optional) – Gain of noise applied to feedforward signals, by default 0.0

  • noise_rec (float, optional) – Gain of noise applied to reservoir internal states, by default 0.0

  • noise_fb (float, optional) – Gain of noise applied to feedback signals, by default 0.0

  • noise_type (optional, str, callable) – Distribution of the noise. Must name a random sampling distribution available from brainpy.math.random.RandomState, by default “normal”.

  • seed (optional, int) – The seed for random sampling in this node.
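
For concreteness, here is a minimal NumPy sketch of the two update rules described under activation_type. It illustrates the equations above and is not BrainPy's implementation; the dense-matrix layout, the function names, and the spectral-radius rescaling shown in the comment are assumptions.

import numpy as np

def step_internal(r, u, b, W_ff, W_rec, W_fb, alpha=0.3, f=np.tanh):
    # Leaky integration on the activated state r[n] ("internal").
    drive = W_ff @ u + W_fb @ b + W_rec @ r
    return (1 - alpha) * r + alpha * f(drive)

def step_external(x, r, u, b, W_ff, W_rec, W_fb, alpha=0.3, f=np.tanh):
    # Leaky integration on the pre-activation state x[n] ("external");
    # the activated state is r[n+1] = f(x[n+1]).
    x_new = (1 - alpha) * x + alpha * (W_ff @ u + W_rec @ r + W_fb @ b)
    return x_new, f(x_new)

# If spectral_radius is given, the recurrent matrix would be rescaled so that
# its largest absolute eigenvalue equals that value, e.g.:
#   W_rec *= rho / np.max(np.abs(np.linalg.eigvals(W_rec)))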

References

[1] Lukoševičius, Mantas. “A practical guide to applying echo state networks.” Neural Networks: Tricks of the Trade. Springer, Berlin, Heidelberg, 2012. 659–686.

__init__(num_unit, leaky_rate=0.3, activation='tanh', activation_type='internal', ff_initializer=Normal(scale=0.1, seed=None), rec_initializer=Normal(scale=0.1, seed=None), fb_initializer=Normal(scale=0.1, seed=None), bias_initializer=ZeroInit, ff_connectivity=0.1, rec_connectivity=0.1, fb_connectivity=0.1, conn_type='dense', spectral_radius=None, noise_ff=0.0, noise_rec=0.0, noise_fb=0.0, noise_type='normal', seed=None, trainable=False, **kwargs)[source]#

Methods

__init__(num_unit[, leaky_rate, activation, ...])

copy([name, shallow])

Returns a copy of the Node.

feedback(ff_output, **shared_kwargs)

The feedback computation function of a node.

forward(ff[, fb])

Feedforward output.

init_fb_conn()

Initialize feedback connections, weights, and variables.

init_fb_output([num_batch])

Set the initial node feedback state.

init_ff_conn()

Initialize feedforward connections, weights, and variables.

init_state([num_batch])

Set the initial node state.

initialize([num_batch])

Initialize the node.

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

offline_fit(targets, ffs[, fbs])

Offline training interface.

online_fit(target, ff[, fb])

Online training fitting interface.

online_init()

Online training initialization interface.

register_implicit_nodes(nodes)

register_implicit_vars(variables)

save_states(filename[, variables])

Save the model states.

set_fb_output(state)

Safely set the feedback state of the node.

set_feedback_shapes(fb_shapes)

set_feedforward_shapes(feedforward_shapes)

set_output_shape(shape)

set_state(state)

Safely set the state of the node.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

unique_name([name, type_])

Get the unique name for this object.

vars([method, level, include_self])

Collect all variables in this node and the children nodes.

Attributes

data_pass

Offline fitting method.

fb_output

Return type: Optional[TypeVar(Tensor, JaxArray, ndarray)]

feedback_shapes

Feedback data size.

feedforward_shapes

Input data size.

is_feedback_input_supported

is_feedback_supported

is_initialized

Return type: bool

name

output_shape

Output data size.

state

Node current internal state.

state_trainable

Returns whether the state of the Node can be trained.

train_state

trainable

Returns whether the Node can be trained.