brainpy.integrators.fde.GLShortMemory#

class brainpy.integrators.fde.GLShortMemory(f, alpha, inits, num_memory, dt=None, name=None, state_delays=None)[source]#

Efficient Computation of the Short-Memory Principle in the Grünwald-Letnikov Method [1].

According to the explicit numerical approximation of Grünwald-Letnikov, the fractional-order derivative of order \(q\) for a discrete function \(f(t_k)\) can be described as follows:

\[{{}_{k-\frac{L_{m}}{h}}D_{t_{k}}^{q}}f(t_{k})\approx h^{-q} \sum\limits_{j=0}^{k}C_{j}^{q}f(t_{k-j})\]

where \(L_{m}\) is the memory length, \(h\) is the integration step size, and \(C_{j}^{q}\) are the binomial coefficients, which are calculated recursively with

\[C_{0}^{q}=1,\ C_{j}^{q}=\left(1- \frac{1+q}{j}\right)C_{j-1}^{q},\ j=1,2, \ldots k.\]
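
For illustration, this recursion maps directly onto a few lines of Python. The snippet below is a minimal sketch; the helper name gl_binomial_coef is hypothetical and not part of BrainPy:

>>> import numpy as np
>>>
>>> def gl_binomial_coef(q, n):
>>>   # recursively compute C_0^q, C_1^q, ..., C_n^q
>>>   c = np.empty(n + 1)
>>>   c[0] = 1.
>>>   for j in range(1, n + 1):
>>>     c[j] = (1. - (1. + q) / j) * c[j - 1]
>>>   return c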

Then, the numerical solution for a fractional-order differential equation (FODE) expressed in the form

\[D_{t_{k}}^{q}x(t_{k})=f(x(t_{k}))\]

can be obtained by

\[x(t_{k})=f(x(t_{k-1}))h^{q}- \sum\limits_{j=1}^{k}C_{j}^{q}x(t_{k-j}).\]

for \(0 < q < 1\). The above expression requires an infinite memory length for the numerical solution, since the number of terms in the summation grows with the discretized time \(t_k\). This implies relatively long simulation times.

To reduce the computational time, the upper bound of the summation, \(k\), is replaced by \(v\), where

\[\begin{split}v=\begin{cases} k, & k\leq M,\\ L_{m}, & k > M. \end{cases}\end{split}\]

This is known as the short-memory principle, where \(M=\frac{L_{m}}{h}\) is the width of the memory window in integration steps. As reported in [2], the accuracy increases with the width of the memory window.
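
Putting the pieces together, a minimal sketch of the resulting update rule for a scalar FODE might look as follows. This is only an illustration of the scheme described above, not BrainPy's implementation; the helper name gl_short_memory is hypothetical:

>>> import numpy as np
>>>
>>> def gl_short_memory(f, q, x0, dt, num_steps, num_memory):
>>>   # binomial coefficients C_0^q, ..., C_{num_memory}^q (same recursion as above)
>>>   coef = np.empty(num_memory + 1)
>>>   coef[0] = 1.
>>>   for j in range(1, num_memory + 1):
>>>     coef[j] = (1. - (1. + q) / j) * coef[j - 1]
>>>   hist = [x0]  # the most recent states, newest last
>>>   xs = [x0]
>>>   for _ in range(num_steps):
>>>     v = min(len(hist), num_memory)  # short-memory upper bound of the summation
>>>     mem = sum(coef[j] * hist[-j] for j in range(1, v + 1))
>>>     x_new = f(hist[-1]) * dt ** q - mem
>>>     xs.append(x_new)
>>>     hist.append(x_new)
>>>     if len(hist) > num_memory:
>>>       hist.pop(0)  # states outside the memory window are discarded
>>>   return np.asarray(xs)

For example, gl_short_memory(lambda x: -x, q=0.9, x0=1., dt=0.01, num_steps=1000, num_memory=500) integrates the scalar relaxation equation \(D^{q}x=-x\) with a 500-step memory window.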

Examples

>>> import brainpy as bp
>>>
>>> a, b, c = 10, 28, 8 / 3
>>> def lorenz(x, y, z, t):
>>>   dx = a * (y - x)
>>>   dy = x * (b - z) - y
>>>   dz = x * y - c * z
>>>   return dx, dy, dz
>>>
>>> integral = bp.fde.GLShortMemory(lorenz,
>>>                                 alpha=0.96,
>>>                                 num_memory=500,
>>>                                 inits=[1., 0., 1.])
>>> runner = bp.integrators.IntegratorRunner(integral,
>>>                                          monitors=list('xyz'),
>>>                                          inits=[1., 0., 1.],
>>>                                          dt=0.005)
>>> runner.run(100.)
>>>
>>> import matplotlib.pyplot as plt
>>> plt.plot(runner.mon.x.flatten(), runner.mon.z.flatten())
>>> plt.show()

Parameters
  • f (callable) – The derivative function.

  • alpha (int, float, jnp.ndarray, bm.ndarray, sequence) – The fractional order of the derivative function. Should be in the range of (0., 1.).

  • num_memory (int) –

    The length of the short memory.

    Changed in version 2.1.11.

  • inits (sequence) – A sequence of the initial values for variables.

  • dt (float, int) – The numerical integration precision, i.e. the time step \(h\) in the formulas above.

  • name (str) – The integrator name.

References

[1] Clemente-López, D., et al. "Efficient computation of the Grünwald-Letnikov method for ARM-based implementations of fractional-order chaotic systems." 2019 8th International Conference on Modern Circuits and Systems Technologies (MOCAST). IEEE, 2019.

[2] M. F. Tolba, A. M. AbdelAty, N. S. Soliman, L. A. Said, A. H. Madian, A. T. Azar, et al., "FPGA implementation of two fractional order chaotic systems," International Journal of Electronics and Communications, vol. 78, pp. 162-172, 2017.

__init__(f, alpha, inits, num_memory, dt=None, name=None, state_delays=None)[source]#

Methods

__init__(f, alpha, inits, num_memory[, dt, ...])

cpu()

Move all variables into the CPU device.

cuda()

Move all variables into the GPU device.

load_state_dict(state_dict[, warn])

Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables, ...)

reset(inits)

Reset function of the delay variables.

save_states(filename[, variables])

Save the model states.

set_integral(f)

Set the integral function.

state_dict()

Returns a dictionary containing a whole state of the module.

to(device)

Moves all variables into the given device.

tpu()

Move all variables into the TPU device.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

tree_flatten()

Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

New in version 2.3.1.

unique_name([name, type_])

Get the unique name for this object.

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.
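
As a small illustration of the state-handling methods listed above, one might capture and later restore the integrator's variables like this (a sketch, assuming integral is the instance created in the Examples section):

>>> sd = integral.state_dict()      # snapshot of all variables
>>> # ... run or perturb the simulation ...
>>> integral.load_state_dict(sd)    # restore the snapshot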

Attributes

arguments

All arguments when calling the numerical integrator of the differential equation.

binomial_coef

The binomial coefficients \(C_{j}^{q}\) used in the short-memory summation.

dt

The numerical integration precision.

integral

The integral function.

name

Name of the model.

parameters

The parameters defined in the differential equation.

state_delays

State delays.

variables

The variables defined in the differential equation.