Numerical Solvers for Ordinary Differential Equations#


@Chaoming Wang @Xiaoyu Chen

The brain modeling toolkit provided in BrainPy is focused on differential equations, and solving differential equations is the essence of neurodynamics simulation. Exact algebraic solutions are available only for low-order differential equations; for coupled, high-dimensional, nonlinear brain dynamical systems, we must resort to numerical methods.

This section illustrates how to define ordinary differential equations (ODEs) and how to define the numerical integration methods for ODEs in BrainPy.

import brainpy as bp
import brainpy.math as bm
import matplotlib.pyplot as plt

bm.set_platform('cpu')

bp.__version__
'2.4.3.post4'

How to define ODE functions?#

BrainPy provides a convenient and intuitive way to define ODE systems. For the ODEs

\[\begin{split} {dx \over dt} = f_1(x, t, y, p_1)\\ {dy \over dt} = g_1(y, t, x, p_2) \end{split}\]

we can define them in a Python function:

def diff(x, y, t, p1, p2):
    dx = f1(x, t, y, p1)
    dy = g1(y, t, x, p2)
    return dx, dy

where t denotes the current time, x and y placed before t denote the dynamical variables, and p1 and p2 placed after t denote the parameters needed by the system. In the function body, the derivative functions f1 and g1 can be customized to the user’s needs. Finally, the corresponding derivatives dx and dy are returned in the same order as the variables appear in the function arguments.

Each variable can be a scalar (var_type = bp.integrators.SCALAR_VAR), a vector/matrix (var_type = bp.integrators.POP_VAR), or a system (var_type = bp.integrators.SYSTEM_VAR). Here, “system” means that the argument x denotes an array of variables. Taking the above example again, we can redefine it as:

def diff(xy, t, p1, p2):
    x, y = xy
    dx = f1(x, t, y, p1)
    dy = g1(y, t, x, p2)
    return bm.array([dx, dy])
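
When the derivative is written in this “system” style, the var_type option can be passed when building the integrator. The snippet below is a minimal sketch, assuming the placeholder derivatives f1 and g1 used above:

# Sketch: declare the state argument `xy` as a system variable so the solver
# treats the whole array as one state (f1 and g1 are placeholders from above).
sys_integral = bp.odeint(diff, var_type=bp.integrators.SYSTEM_VAR, method='euler', dt=0.01)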

How to define the numerical integration for ODEs?#

After defining the ODE function, it is easy to set up its numerical integration: we just need to put the decorator bp.odeint above the ODE function.

@bp.odeint
def diff(x, y, t, p1, p2):
    dx = f1(x, t, y, p1)
    dy = g1(y, t, x, p2)
    return dx, dy

After being wrapped by bp.odeint, the function becomes an instance of ODEIntegrator.

isinstance(diff, bp.ode.ODEIntegrator)
True

bp.odeint receives several arguments:

  • “method”: A string specifying the numerical method used to integrate the ODE function. The default method is Euler.

diff
<brainpy._src.integrators.ode.explicit_rk.Euler at 0x1eb69e6c040>
  • “dt”: A float setting the default numerical integration step size. The default “dt” is 0.1.

diff.dt
0.1
  • “show_code”: A bool indicating whether to print the generated numerical integration code. Let’s take the Euler and RK4 methods as illustrative examples.

@bp.odeint(method='euler', show_code=True, dt=0.01)
def diff(x, y, t, p1, p2):
    dx = f1(x, t, y, p1)
    dy = g1(y, t, x, p2)
    return dx, dy

diff
def brainpy_itg_of_ode8_diff(x, y, t, p1, p2, dt=0.01):
  dx_k1, dy_k1 = f(x, y, t, p1, p2)
  x_new = x + dx_k1 * dt * 1
  y_new = y + dy_k1 * dt * 1
  return x_new, y_new

{'f': <function diff at 0x000001EB6B392B90>}
<brainpy._src.integrators.ode.explicit_rk.Euler at 0x1eb6b488370>
@bp.odeint(method='rk4', show_code=True, dt=0.1)
def diff(x, y, t, p1, p2):
    dx = f1(x, t, y, p1)
    dy = g1(y, t, x, p2)
    return dx, dy

diff
def brainpy_itg_of_ode9_diff(x, y, t, p1, p2, dt=0.1):
  dx_k1, dy_k1 = f(x, y, t, p1, p2)
  k2_x_arg = x + dt * dx_k1 * 0.5
  k2_y_arg = y + dt * dy_k1 * 0.5
  k2_t_arg = t + dt * 0.5
  dx_k2, dy_k2 = f(k2_x_arg, k2_y_arg, k2_t_arg, p1, p2)
  k3_x_arg = x + dt * dx_k2 * 0.5
  k3_y_arg = y + dt * dy_k2 * 0.5
  k3_t_arg = t + dt * 0.5
  dx_k3, dy_k3 = f(k3_x_arg, k3_y_arg, k3_t_arg, p1, p2)
  k4_x_arg = x + dt * dx_k3
  k4_y_arg = y + dt * dy_k3
  k4_t_arg = t + dt
  dx_k4, dy_k4 = f(k4_x_arg, k4_y_arg, k4_t_arg, p1, p2)
  x_new = x + dx_k1 * dt * 1/6 + dx_k2 * dt * 1/3 + dx_k3 * dt * 1/3 + dx_k4 * dt * 1/6
  y_new = y + dy_k1 * dt * 1/6 + dy_k2 * dt * 1/3 + dy_k3 * dt * 1/3 + dy_k4 * dt * 1/6
  return x_new, y_new

{'f': <function diff at 0x000001EB6B393010>}
<brainpy._src.integrators.ode.explicit_rk.RK4 at 0x1eb6b1c0220>
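
As the generated code above shows, the compiled integrator exposes dt as a keyword argument, so the step size can presumably also be overridden on a per-call basis. A minimal sketch, assuming placeholder values x0, y0, t0, p1, p2:

# Sketch: the dt fixed in bp.odeint is only a default; it can be overridden
# when the integrator is called (x0, y0, t0, p1, p2 are placeholder values).
x_new, y_new = diff(x0, y0, t0, p1, p2)           # one step with the default dt
x_new, y_new = diff(x0, y0, t0, p1, p2, dt=0.05)  # one step with a custom dt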

Two Examples#

Example 1: FitzHugh–Nagumo model#

Now, let’s take the well-known FitzHugh–Nagumo (FHN) model as an example to illustrate how to define ODE solvers for brain modeling. The FHN model has two dynamical variables, which are governed by the following equations:

\[\begin{split} \begin{aligned} \tau {\dot {w}}&=v+a-bw\\ {\dot {v}} &=v-{\frac {v^{3}}{3}}-w+I_{\rm {ext}} \end{aligned} \end{split}\]

For this FHN model, we can code it in BrainPy like this:

@bp.odeint(dt=0.01)
def integral(V, w, t, Iext, a, b, tau):
    dw = (V + a - b * w) / tau
    dV = V - V * V * V / 3 - w + Iext
    return dV, dw

After defining the numerical solver, the ODE system can easily be solved at the given time points. For example, for the given parameters,

a = 0.7;   b = 0.8;   tau = 12.5;   Iext = 1.

the solution of the FHN model between 0 and 100 ms can be approximated by

jit_integral = bm.jit(integral)

hist_times = bm.arange(0, 100, 0.01)
hist_V = []
V, w = 0., 0.
for t in hist_times:
    V, w = jit_integral(V, w, t, Iext, a, b, tau)
    hist_V.append(V)

plt.plot(hist_times, hist_V)
plt.show()
[Figure: membrane potential V of the FHN model over 0–100 ms]

Such a manual Python loop is usually slow. BrainPy provides a structural runner for integrators, brainpy.integrators.IntegratorRunner, which benefits from JIT compilation.

runner = bp.IntegratorRunner(
    integral,
    monitors=['V'],
    inits=dict(V=0., w=0.),
    dt=0.01
)
runner.run(100., args=dict(a=a, b=b, tau=tau, Iext=Iext))

plt.plot(runner.mon.ts, runner.mon.V)
plt.show()
[Figure: membrane potential V of the FHN model simulated with IntegratorRunner]

Example 2: Hodgkin–Huxley model#

Another, more complex example is the classical Hodgkin–Huxley (HH) neuron model. In the HH model, four dynamical variables (V, m, h, n) are used to model the initiation and propagation of the action potential. Specifically, they are governed by the following equations:

\[\begin{split} \begin{aligned} C_{m} \frac{d V}{d t} &=-\bar{g}_{\mathrm{K}} n^{4}\left(V-V_{K}\right)- \bar{g}_{\mathrm{Na}} m^{3} h\left(V-V_{N a}\right)-\bar{g}_{l}\left(V-V_{l}\right)+I_{s y n} \\ \frac{d m}{d t} &=\alpha_{m}(V)(1-m)-\beta_{m}(V) m \\ \frac{d h}{d t} &=\alpha_{h}(V)(1-h)-\beta_{h}(V) h \\ \frac{d n}{d t} &=\alpha_{n}(V)(1-n)-\beta_{n}(V) n \end{aligned} \end{split}\]

In BrainPy, such a dynamical system can be coded as:

@bp.odeint(method='rk4', dt=0.01)
def integral(V, m, h, n, t, Iext, gNa, ENa, gK, EK, gL, EL, C):
    alpha = 0.1 * (V + 40) / (1 - bm.exp(-(V + 40) / 10))
    beta = 4.0 * bm.exp(-(V + 65) / 18)
    dmdt = alpha * (1 - m) - beta * m

    alpha = 0.07 * bm.exp(-(V + 65) / 20.)
    beta = 1 / (1 + bm.exp(-(V + 35) / 10))
    dhdt = alpha * (1 - h) - beta * h

    alpha = 0.01 * (V + 55) / (1 - bm.exp(-(V + 55) / 10))
    beta = 0.125 * bm.exp(-(V + 65) / 80)
    dndt = alpha * (1 - n) - beta * n

    I_Na = (gNa * m ** 3.0 * h) * (V - ENa)
    I_K = (gK * n ** 4.0) * (V - EK)
    I_leak = gL * (V - EL)
    dVdt = (- I_Na - I_K - I_leak + Iext) / C

    return dVdt, dmdt, dhdt, dndt

As with the FHN model, we can integrate the HH model with the given parameters over a given time interval:

Iext = 10.;   ENa = 50.;   EK = -77.;   EL = -54.387
C = 1.0;      gNa = 120.;  gK = 36.;    gL = 0.03
runner = bp.IntegratorRunner(
    integral,
    monitors=list('Vmhn'),
    inits=[0., 0., 0., 0.],
    dt=0.01
)
runner.run(100., args=dict(Iext=Iext, gNa=gNa, ENa=ENa, gK=gK, EK=EK, gL=gL, EL=EL, C=C),)

plt.subplot(211)
plt.plot(runner.mon.ts, runner.mon.V, label='V')
plt.legend()
plt.subplot(212)
plt.plot(runner.mon.ts, runner.mon.m, label='m')
plt.plot(runner.mon.ts, runner.mon.h, label='h')
plt.plot(runner.mon.ts, runner.mon.n, label='n')
plt.legend()
plt.show()
[Figure: HH model simulation; top: membrane potential V, bottom: gating variables m, h, n]

Provided ODE Numerical Solvers#

BrainPy provides several types of numerical methods for ODEs, including explicit Runge-Kutta methods, adaptive Runge-Kutta methods, and Exponential Euler methods.

1. Explicit Runge-Kutta (RK) methods for ODEs#

The first category of supported ODE numerical methods is the explicit Runge-Kutta (RK) methods. RK methods form a large family of numerical methods with a wide range of trade-offs among efficiency, accuracy, and stability. The supported RK methods are listed in the following table:

| Methods | Keywords |
| --- | --- |
| Euler | euler |
| Midpoint | midpoint |
| Heun’s second-order method | heun2 |
| Ralston’s second-order method | ralston2 |
| RK2 | rk2 |
| RK3 | rk3 |
| RK4 | rk4 |
| Heun’s third-order method | heun3 |
| Ralston’s third-order method | ralston3 |
| Third-order Strong Stability Preserving Runge-Kutta | ssprk3 |
| Ralston’s fourth-order method | ralston4 |
| Runge-Kutta 3/8-rule fourth-order method | rk4_38rule |

Users can utilize these methods by specifying the method option in brainpy.odeint() with their corresponding keyword. For example:

@bp.odeint(method='rk4')
def int_v(v, t, p):
    # do something
    return v

int_v
<brainpy._src.integrators.ode.explicit_rk.RK4 at 0x1eb6b41c880>

Or, you can directly instantiate your favorite integrator:

@bp.ode.RK4
def int_v(v, t, p):
    # do something
    return v

int_v
<brainpy._src.integrators.ode.explicit_rk.RK4 at 0x1eb6c65ab00>
def derivative(v, t, p):
    # do something
    return v

int_v = bp.ode.RK4(derivative, dt=0.01)
int_v
<brainpy._src.integrators.ode.explicit_rk.RK4 at 0x1eb6b41f430>

2. Adaptive Runge-Kutta (RK) methods for ODEs#

The second category of supported ODE numerical methods is the adaptive RK methods. What distinguishes them from the explicit RK methods is that adaptive methods produce an estimate of the local truncation error within a single Runge-Kutta step, and this error is then used to control the step size: if \(error > tol\), the step is rejected, \(dt\) is replaced with a smaller \(dt_{new}\), and the step is repeated (a schematic sketch of this control loop is given after the table). Adaptive RK methods therefore allow a varying step size. BrainPy provides the following adaptive RK methods:

| Methods | Keywords |
| --- | --- |
| Runge–Kutta–Fehlberg 4(5) | rkf45 |
| Runge–Kutta–Fehlberg 1(2) | rkf12 |
| Dormand–Prince method | rkdp |
| Cash–Karp method | ck |
| Bogacki–Shampine method | bs |
| Heun–Euler method | heun_euler |
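
The following is a minimal, scalar-valued sketch of the error-control idea behind these methods, written with the Heun–Euler embedded pair. It only illustrates the principle and is not BrainPy’s actual implementation:

def heun_euler_adaptive_step(f, y, t, dt, tol=1e-3):
    # Two embedded estimates: Euler (1st order) and Heun (2nd order).
    k1 = f(y, t)
    k2 = f(y + dt * k1, t + dt)
    y_euler = y + dt * k1
    y_heun = y + dt * (k1 + k2) / 2
    # The difference between the two estimates approximates the local error.
    error = abs(y_heun - y_euler)
    # Propose a new step size (0.9 is a common safety factor).
    dt_new = 0.9 * dt * (tol / max(error, 1e-12)) ** 0.5
    if error > tol:
        # Reject the step and retry with the smaller dt.
        return heun_euler_adaptive_step(f, y, t, dt_new, tol)
    # Accept the step and return the new state, time, and suggested dt.
    return y_heun, t + dt, dt_new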

By default, the above methods are not adaptive, unless the keyword adaptive=True is provided to brainpy.odeint(). When an adaptive RK method is used, the instantaneously adjusted step size dt is appended to the function arguments. Moreover, the tolerance tol for step-size adjustment can also be modified. Let’s take the Lorenz system as an example:

# adaptively adjust step-size

@bm.jit
@bp.odeint(method='rkf45', 
           adaptive=True,  # activate the "adaptive" option
           tol=0.001) # set the tolerance
def lorenz(x, y, z, t, sigma, beta, rho):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return dx, dy, dz
times = bm.arange(0, 100, 0.01)
hist_x, hist_y, hist_z, hist_dt = [], [], [], []
x, y, z, dt = bm.array([1]), bm.array([1]), bm.array([1]), 0.05
for t in times:
    # should provide one more argument "dt" when using the adaptive rk method
    x, y, z, dt = lorenz(x, y, z, t, sigma=10, beta=8/3, rho=28, dt=dt)  
    hist_x.append(x.value)
    hist_y.append(y.value)
    hist_z.append(z.value)
    hist_dt.append(dt)
hist_x = bm.array(hist_x).flatten()
hist_y = bm.array(hist_y).flatten()
hist_z = bm.array(hist_z).flatten()
hist_dt = bm.array(hist_dt)
fig = plt.figure()
ax = plt.subplot(projection='3d')
plt.plot(hist_x, hist_y, hist_z)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')

fig = plt.figure()
plt.plot(hist_dt[:100])
plt.xlabel('Step No.')
plt.ylabel('Adaptive dt')
plt.show()
[Figures: 3D trajectory of the Lorenz system; adaptive dt over the first 100 steps]

3. Exponential Euler methods for ODEs#

Finally, BrainPy provides exponential integrators for ODEs. For your ODE systems, we highly recommend the Exponential Euler method. The Exponential Euler method provided in BrainPy uses automatic differentiation to find the linear part of the system.

| Methods | Keywords |
| --- | --- |
| Exponential Euler | exp_euler |

Let’s take a linear system as the theoretical demonstration,

\[ {dy \over dt} = A - By \]

the Exponential Euler scheme is given by:

\[ y(t+dt) = y(t) e^{-B \, dt} + {A \over B}(1 - e^{-B \, dt}) \]

As you can see, for such a linear system the Exponential Euler scheme coincides with the exact solution.
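
As a minimal sketch (not BrainPy’s implementation), assuming hypothetical constants A and B, one Exponential Euler step for this linear system can be written as:

A, B = 2.0, 0.5  # hypothetical constants for dy/dt = A - B*y

def exp_euler_step(y, dt):
    # Exact update of the linear system over one step of size dt.
    return y * bm.exp(-B * dt) + (A / B) * (1 - bm.exp(-B * dt))

y = 0.
for _ in range(100):
    y = exp_euler_step(y, dt=0.1)  # converges to the fixed point A/B = 4.0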

However, using the Exponential Euler method requires us to write each derivative function separately; otherwise, the automatic differentiation will lead to wrong results.

Interestingly, the computationally expensive Hodgkin–Huxley neuron model is a linear-like ODE system. You will find that, by using the Exponential Euler method, the numerical time step can be greatly enlarged to save computation time.

\[\begin{split} \begin{aligned} C_{m}{\frac {d V}{dt}}&= -\left[{\bar {g}}_{\text{K}}n^{4} + {\bar {g}}_{\text{Na}}m^{3}h + {\bar {g}}_{l} \right] V +{\bar {g}}_{\text{K}}n^{4} V_{K} + {\bar {g}}_{\text{Na}}m^{3}h V_{Na} + {\bar {g}}_{l} V_{l} + I_{syn} \\ {\frac {dm}{dt}} &= \left[-\alpha _{m}(V)-\beta _{m}(V)\right]m + \alpha _{m}(V) \\ {\frac {dh}{dt}} &= \left[-\alpha _{h}(V)-\beta _{h}(V)\right]h + \alpha _{h}(V) \\ {\frac {dn}{dt}} &= \left[-\alpha _{n}(V)-\beta _{n}(V)\right]n + \alpha _{n}(V) \\ \end{aligned} \end{split}\]
Iext=10.;   ENa=50.;   EK=-77.;   EL=-54.387
C=1.0;      gNa=120.;  gK=36.;    gL=0.03
def dm(m, t, V):
    alpha = 0.1 * (V + 40) / (1 - bm.exp(-(V + 40) / 10))
    beta = 4.0 * bm.exp(-(V + 65) / 18)
    dmdt = alpha * (1 - m) - beta * m
    return dmdt
def dh(h, t, V):
    alpha = 0.07 * bm.exp(-(V + 65) / 20.)
    beta = 1 / (1 + bm.exp(-(V + 35) / 10))
    dhdt = alpha * (1 - h) - beta * h
    return dhdt
def dn(n, t, V):
    alpha = 0.01 * (V + 55) / (1 - bm.exp(-(V + 55) / 10))
    beta = 0.125 * bm.exp(-(V + 65) / 80)
    dndt = alpha * (1 - n) - beta * n
    return dndt
def dV(V, t, m, h, n, Iext):
    I_Na = (gNa * m ** 3.0 * h) * (V - ENa)
    I_K = (gK * n ** 4.0) * (V - EK)
    I_leak = gL * (V - EL)
    dVdt = (- I_Na - I_K - I_leak + Iext) / C
    return dVdt

Although we define the HH differential equations as separate functions, we can numerically integrate them jointly by relying on brainpy.JointEq.

hh_derivative = bp.JointEq([dV, dm, dh, dn])
def run(method, Iext=10., dt=0.1):
    integral = bp.odeint(hh_derivative, method=method)

    runner = bp.IntegratorRunner(
        integral,
        monitors=list('Vmhn'),
        inits=[0., 0., 0., 0.],
        args=dict(Iext=Iext),
        dt=dt
    )
    runner.run(100.)

    plt.subplot(211)
    plt.plot(runner.mon.ts, runner.mon.V, label='V')
    plt.legend()
    plt.subplot(212)
    plt.plot(runner.mon.ts, runner.mon.m, label='m')
    plt.plot(runner.mon.ts, runner.mon.h, label='h')
    plt.plot(runner.mon.ts, runner.mon.n, label='n')
    plt.legend()
    plt.show()

Euler Method: unable to complete the integration when the time step becomes slightly larger

run('euler', Iext=10, dt=0.02)
[Figure: HH model integrated with the Euler method, dt=0.02]
run('euler', Iext=10, dt=0.1)
[Figure: HH model integrated with the Euler method, dt=0.1]

RK4 Method: better than the Euler method, but still requires the time step to be small

run('rk4', Iext=10, dt=0.1)
[Figure: HH model integrated with the RK4 method, dt=0.1]
run('rk4', Iext=10, dt=0.2)
[Figure: HH model integrated with the RK4 method, dt=0.2]

Exponential Euler Method: allows a larger time step and generates accurate results

run('exp_euler', Iext=10, dt=0.2)
[Figure: HH model integrated with the Exponential Euler method, dt=0.2]