class brainpy.integrators.fde.CaputoL1Schema(f, alpha, num_memory, inits, dt=None, name=None, state_delays=None)[source]#

The L1 scheme method for the numerical approximation of the Caputo fractional-order derivative equations [3].

For the fractional order \(0<\alpha<1\), let the fractional derivative of variable \(x(t)\) be

\[\frac{d^{\alpha} x}{d t^{\alpha}}=F(x, t)\]

The Caputo definition of the fractional derivative for variable \(x\) is

\[\frac{d^{\alpha} x}{d t^{\alpha}}=\frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{x^{\prime}(u)}{(t-u)^{\alpha}} d u\]

where \(\Gamma\) is the Gamma function.
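As a quick numerical check of this definition, the integral can be approximated by a midpoint quadrature and compared against the known Caputo derivative of \(x(t)=t\), which is \(t^{1-\alpha}/\Gamma(2-\alpha)\). This is a standalone sketch using only the standard library; the function name is ours, not part of BrainPy:

```python
import math

def caputo_derivative_numeric(x_prime, t, alpha, n=200_000):
    """Approximate the Caputo derivative of order ``alpha`` (0 < alpha < 1)
    at time ``t`` by a midpoint-rule quadrature of the defining integral."""
    du = t / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * du  # midpoints avoid the singularity at u = t
        total += x_prime(u) / (t - u) ** alpha
    return total * du / math.gamma(1 - alpha)

# For x(t) = t, x'(u) = 1, and the Caputo derivative is t**(1-alpha) / gamma(2-alpha).
alpha, t = 0.6, 2.0
numeric = caputo_derivative_numeric(lambda u: 1.0, t, alpha)
exact = t ** (1 - alpha) / math.gamma(2 - alpha)
```

The quadrature converges slowly near the weakly singular endpoint, so a fairly large `n` is used here.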

The fractional-order derivative integrates the activity of the function over all past activity, weighted by a function that follows a power law. Using one such numerical method, the L1 scheme [3], the numerical approximation of the fractional-order derivative of \(x\) is

\[\frac{d^{\alpha} x}{d t^{\alpha}} \approx \frac{(d t)^{-\alpha}}{\Gamma(2-\alpha)}\left[\sum_{k=0}^{N-1}\left[x\left(t_{k+1}\right)- x\left(t_{k}\right)\right]\left[(N-k)^{1-\alpha}-(N-1-k)^{1-\alpha}\right]\right]\]

Therefore, the numerical solution of the original system is given by

\[x\left(t_{N}\right) \approx d t^{\alpha} \Gamma(2-\alpha) F(x, t)+x\left(t_{N-1}\right)- \left[\sum_{k=0}^{N-2}\left[x\left(t_{k+1}\right)-x\left(t_{k}\right)\right]\left[(N-k)^{1-\alpha}-(N-1-k)^{1-\alpha}\right]\right]\]
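As an illustration of this update rule, here is a minimal, self-contained sketch of the L1 scheme for a scalar equation. It is plain Python for didactic purposes, not BrainPy's implementation, and the function names are ours:

```python
import math

def l1_scheme(F, x0, alpha, dt, n_steps):
    """Integrate d^alpha x / dt^alpha = F(x, t) with the L1 scheme
    (scalar x, 0 < alpha <= 1), following the update rule above."""
    g = dt ** alpha * math.gamma(2 - alpha)
    xs = [x0]
    for N in range(1, n_steps + 1):
        t = (N - 1) * dt
        # Memory trace: weighted sum over all past increments x(t_{k+1}) - x(t_k).
        memory = sum(
            (xs[k + 1] - xs[k])
            * ((N - k) ** (1 - alpha) - (N - 1 - k) ** (1 - alpha))
            for k in range(N - 1)
        )
        xs.append(g * F(xs[-1], t) + xs[-1] - memory)
    return xs

# For alpha = 1 the memory trace vanishes and the update reduces to
# forward Euler: x_N = dt * F(x, t) + x_{N-1}.
xs = l1_scheme(lambda x, t: -x, 1.0, alpha=1.0, dt=0.01, n_steps=100)
```

With `alpha=1.0` the trajectory of `dx/dt = -x` matches forward Euler exactly, which is a useful consistency check on the scheme.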

Hence, the solution of the fractional-order derivative can be described as the difference between the Markov term and the memory trace. The Markov term weighted by the gamma function is

\[\text { Markov term }=d t^{\alpha} \Gamma(2-\alpha) F(x, t)+x\left(t_{N-1}\right)\]

The memory trace (\(x\)-memory trace since it is related to variable \(x\)) is

\[\text { Memory trace }=\sum_{k=0}^{N-2}\left[x\left(t_{k+1}\right)-x\left(t_{k}\right)\right]\left[(N-k)^{1-\alpha}-(N-1-k)^{1-\alpha}\right]\]

The memory trace integrates all past activity and captures the long-term history of the system. For \(\alpha=1\), the memory trace is 0 for any time \(t\). When the fractional order \(\alpha\) is decreased from 1, the memory trace increases non-linearly from 0, and its dynamics depend strongly on time. Thus, the fractional-order dynamics deviate strongly from the first-order dynamics.
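The claim that the memory trace vanishes at \(\alpha=1\) can be checked by tabulating the weights that multiply the past increments. This is a small standalone sketch, not part of the BrainPy API:

```python
def memory_weights(N, alpha):
    """Weights (N-k)^(1-alpha) - (N-1-k)^(1-alpha) multiplying the past
    increments in the memory trace, for k = 0 .. N-2."""
    return [(N - k) ** (1 - alpha) - (N - 1 - k) ** (1 - alpha)
            for k in range(N - 1)]

w_int = memory_weights(10, 1.0)   # alpha = 1: every weight is exactly 0, no memory
w_frac = memory_weights(10, 0.5)  # alpha < 1: strictly positive weights
```

For `alpha=0.5` the weights grow with `k`, i.e. recent increments are weighted more heavily than distant ones, giving the power-law-weighted history described above.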


>>> import brainpy as bp
>>> a, b, c = 10, 28, 8 / 3
>>> def lorenz(x, y, z, t):
...   dx = a * (y - x)
...   dy = x * (b - z) - y
...   dz = x * y - c * z
...   return dx, dy, dz
>>> duration = 30.
>>> dt = 0.005
>>> inits = [1., 0., 1.]
>>> f = bp.fde.CaputoL1Schema(lorenz, alpha=0.99, num_memory=int(duration / dt), inits=inits)
>>> runner = bp.IntegratorRunner(f, monitors=list('xz'), dt=dt, inits=inits)
>>> runner.run(duration)
>>> import matplotlib.pyplot as plt
>>> plt.plot(runner.mon.x.flatten(), runner.mon.z.flatten())
>>> plt.show()
Parameters:

  • f (callable) – The derivative function.

  • alpha (int, float, jnp.ndarray, bm.ndarray, sequence) – The fractional order of the derivative function. Must lie in the range (0., 1.].

  • num_memory (int) – The total number of time steps of the simulation (the length of the memory trace).

  • inits (sequence) – A sequence of the initial values for the variables.

  • dt (float, int) – The numerical integration precision (time step).

  • name (str) – The integrator name.


__init__(f, alpha, num_memory, inits, dt=None, name=None, state_delays=None)[source]#


Methods:

__init__(f, alpha, num_memory, inits[, dt, ...])

cpu()
    Move all variables into the CPU device.

gpu()
    Move all variables into the GPU device.

hists([var, numpy])
    Get the recorded history values.

load_state_dict(state_dict[, warn, compatible])
    Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose])
    Load the model states.

nodes([method, level, include_self])
    Collect all children nodes.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables[, var_cls])

reset()
    Reset function.

save_states(filename[, variables])
    Save the model states.

set_integral(f)
    Set the integral function.

state_dict()
    Returns a dictionary containing a whole state of the module.

to(device)
    Moves all variables into the given device.

tpu()
    Move all variables into the TPU device.

train_vars([method, level, include_self])
    The shortcut for retrieving all trainable variables.

tree_flatten()
    Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)
    Unflatten the data to construct an object of this class.

unique_name([name, type_])
    Get the unique name for this object.

vars([method, level, include_self, ...])
    Collect all variables in this node and the children nodes.



Attributes:

arguments
    All arguments when calling the numerical integrator of the differential equation.

dt
    The numerical integration precision.

integral
    The integral function.

name
    Name of the model.

parameters
    The parameters defined in the differential equation.

state_delays
    State delays.

variables
    The variables defined in the differential equation.