# Build Neurons

In BrainPy, the *definition* and the *usage* of a neuron model are separated from each other. In this way, users can recycle defined models to generate different neuron groups, or use models defined by other people. Specifically, two classes are involved:

`brainpy.NeuType`

: Defines the abstract neuron model.

`brainpy.NeuGroup`

: Uses the abstract neuron model to generate a concrete neuron group.
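To make the separation concrete, here is a plain-Python stand-in (not BrainPy’s API): one abstract state template is recycled to build several concrete groups of different sizes.

```python
import numpy as np

# Illustrative stand-in, not BrainPy's API: the abstract model is a
# state template; a concrete "group" allocates that state for a
# given number of neurons.
state_template = {'V': -65., 'input': 0.}

def make_group(template, size):
    # Each state item becomes an array of length `size`.
    return {k: np.full(size, v) for k, v in template.items()}

small = make_group(state_template, 10)    # two groups recycled
large = make_group(state_template, 1000)  # from one template
```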

```
[1]:
```

```
import sys
sys.path.append('../../')
import brainpy as bp
import numpy as np
bp.profile.set(dt=0.01)
```

## brainpy.NeuType

Three items should be specified to initialize a `NeuType`:

`ST`

: The neuronal state.

`name`

: The neuron model name.

`steps`

: The step functions to update at each time step.

Two kinds of definition are provided in BrainPy to define a `NeuType`:

`vector-based`

: The neuron state `ST` represents the state of a group of neurons; each item in `ST` is a vector.

`scalar-based`

: The neuron state `ST` represents the state of a single neuron; each item in `ST` is a scalar.

The definition logic of `scalar-based` models may be more straightforward than that of `vector-based` models. We will see this in the example of the LIF model.
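The two styles can be previewed with a toy leaky-decay update written both ways in plain NumPy (illustrative only, outside BrainPy):

```python
import numpy as np

dt = 0.1
tau = 10.

# Vector-based: one call updates the whole group at once.
def update_vector(V):
    return V + dt * (-V) / tau

# Scalar-based: the same rule written for a single neuron;
# the framework would loop it over the group.
def update_scalar(v):
    return v + dt * (-v) / tau

V = np.array([1.0, 2.0, 3.0])
V_vec = update_vector(V)                          # whole group at once
V_sca = np.array([update_scalar(v) for v in V])   # one neuron at a time
```

Both paths compute identical results; they differ only in whether the definition talks about a whole array or a single value.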

### Hodgkin-Huxley model

Let’s first take the Hodgkin-Huxley (HH) neuron model as an example to see how to define a `NeuType` in BrainPy.

```
[2]:
```

```
# parameters we need #
# ------------------ #
C = 1.0          # Membrane capacitance per unit area (assumed constant).
g_Na = 120.      # Voltage-controlled conductance per unit area
                 # associated with the sodium (Na) ion channel.
E_Na = 50.       # The equilibrium potential for the sodium ions.
E_K = -77.       # The equilibrium potential for the potassium ions.
g_K = 36.        # Voltage-controlled conductance per unit area
                 # associated with the potassium (K) ion channel.
E_Leak = -54.402 # The equilibrium potential for the leak channels.
g_Leak = 0.003   # Conductance per unit area associated with the leak channels.
Vth = 20.        # Membrane potential threshold for spike detection.
```

Four differential equations appear in the HH neuron model. Please check Differential equations to see how BrainPy supports them.
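Under the hood, an integrator decorator turns a derivative function into a stepping function. A conceptual Euler-scheme sketch of this idea (NOT BrainPy’s actual implementation; `euler_integrate` is a hypothetical name) looks like:

```python
import numpy as np

dt = 0.01  # matches bp.profile.set(dt=0.01) above

# Conceptual sketch: wrap a derivative f(x, t, ...) into a stepper
# that returns x(t + dt) via one explicit Euler step. This is NOT
# BrainPy's implementation; `euler_integrate` is a hypothetical name.
def euler_integrate(f):
    def step(x, t, *args):
        return x + dt * f(x, t, *args)
    return step

@euler_integrate
def int_x(x, t, tau):
    return -x / tau  # simple exponential decay, dx/dt = -x / tau

x_next = int_x(1.0, 0., 10.)  # one Euler step: 1.0 + 0.01 * (-0.1)
```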

For the \(m\) channel, the definition of the corresponding equation can be:

```
[3]:
```

```
@bp.integrate
def int_m(m, t, V):
    alpha = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    beta = 4.0 * np.exp(-(V + 65) / 18)
    dmdt = alpha * (1 - m) - beta * m
    return dmdt
```

The \(h\) channel is defined as:

```
[4]:
```

```
@bp.integrate
def int_h(h, t, V):
    alpha = 0.07 * np.exp(-(V + 65) / 20.)
    beta = 1 / (1 + np.exp(-(V + 35) / 10))
    dhdt = alpha * (1 - h) - beta * h
    return dhdt
```

The \(n\) channel is defined as:

```
[5]:
```

```
@bp.integrate
def int_n(n, t, V):
    alpha = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    beta = 0.125 * np.exp(-(V + 65) / 80)
    dndt = alpha * (1 - n) - beta * n
    return dndt
```

The membrane potential \(V\) is defined as:

```
[6]:
```

```
@bp.integrate
def int_V(V, t, m, h, n, Isyn):
    INa = g_Na * m ** 3 * h * (V - E_Na)
    IK = g_K * n ** 4 * (V - E_K)
    IL = g_Leak * (V - E_Leak)
    dvdt = (- INa - IK - IL + Isyn) / C
    return dvdt
```

In BrainPy, the integration of differential equations is mostly implemented by numerical methods, such as Euler, Exponential Euler, RK2, and RK4 (please see Numerical integrators). Therefore, after defining the differential equations, the next important thing is to define the update logic of each variable from the current time point to the next.

Here, let’s first define the state of the HH model. BrainPy provides the data structure `brainpy.types.NeuState` to support neuron state management.

```
[7]:
```

```
ST = bp.types.NeuState(
    {'V': -65.,   # denotes membrane potential.
     'm': 0.,     # denotes sodium channel activation probability.
     'h': 0.,     # denotes sodium channel inactivation probability.
     'n': 0.,     # denotes potassium channel activation probability.
     'spike': 0., # denotes spiking state.
     'input': 0.  # denotes synaptic input.
     }
)
```

In `ST`, the dynamical variables \(V\), \(m\), \(h\), and \(n\) are included. We also care about whether the neuron produces a spike at the current time, which the `spike` item records. Moreover, we define an `input` item to receive the synaptic and external inputs.

Based on the neuron state `ST`, the update logic of the HH model from the current time point \(t\) to the next time point \(t + dt\) can be defined as:

```
[8]:
```

```
def update(ST, _t):
    m = np.clip(int_m(ST['m'], _t, ST['V']), 0., 1.)
    h = np.clip(int_h(ST['h'], _t, ST['V']), 0., 1.)
    n = np.clip(int_n(ST['n'], _t, ST['V']), 0., 1.)
    V = int_V(ST['V'], _t, ST['m'], ST['h'], ST['n'], ST['input'])
    ST['spike'] = np.logical_and(ST['V'] < Vth, V >= Vth)
    ST['V'] = V
    ST['m'] = m
    ST['h'] = h
    ST['n'] = n
    ST['input'] = 0.
```
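The spike line above marks a neuron as spiking only when its membrane potential crosses `Vth` from below during the step. A small NumPy check with illustrative numbers makes the logic visible:

```python
import numpy as np

Vth = 20.
V_old = np.array([-65., 19., 25., 30.])  # potentials at time t
V_new = np.array([-64., 25., 30., 10.])  # potentials at time t + dt

# A spike is counted only on an upward threshold crossing:
# below Vth before the step, at or above Vth after it.
spike = np.logical_and(V_old < Vth, V_new >= Vth)
# Only the neuron going 19 -> 25 crosses upward; neurons already
# above threshold, or falling below it, are not re-counted.
```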

In this example, the `update()` function of the HH model takes two arguments:

`ST`

: The neuron state.

`_t`

: The system time at the current point.

Putting it all together, the HH neuron model is defined as:

```
[9]:
```

```
HH = bp.NeuType(name='HH_neuron',
                ST=ST,
                steps=update,
                mode='vector')
```

Here, we should note that we have just defined an abstract HH neuron model. This model can run with any number of neurons and with any geometry (one-dimensional or two-dimensional). Only after defining a concrete neuron group can we run it or use it to construct a network.

### LIF model (vector-based)

Similar to the HH model defined above, let’s define a vector-based LIF model. The formal equations of the LIF model are given by:

\[
\tau_m \frac{dV}{dt} = - (V - V_{rest}) + I(t)
\]

\[
\text{if } V > V_{th}, \quad V \leftarrow V_{rest} \text{ and the neuron is refractory for } \tau_{ref}
\]

where \(V\) is the membrane potential, \(V_{rest}\) is the resting membrane potential, \(V_{th}\) is the spike threshold, \(\tau_m\) is the membrane time constant, \(\tau_{ref}\) is the refractory period, and \(I\) is the time-variant synaptic input.
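As a quick sanity check, one explicit Euler step of the membrane equation \(\tau_m \, dV/dt = -(V - V_{rest}) + I\) can be computed by hand (illustrative values, written outside BrainPy):

```python
dt = 0.01; tau_m = 10.; V_rest = 0.

def dVdt(V, I):
    # tau_m * dV/dt = -(V - V_rest) + I
    return (-(V - V_rest) + I) / tau_m

V, I = 5., 12.
V_next = V + dt * dVdt(V, I)  # one Euler step: 5. + 0.01 * 0.7
```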

Let’s define the following items in the neuron state:

`V`

: The membrane potential.

`input`

: The synaptic input.

`spike`

: Whether a spike is produced.

`refractory`

: Whether the neuron is in the refractory state.

`t_last_spike`

: The last spike time, used to calculate the refractory state.

```
[10]:
```

```
ST = bp.types.NeuState(
    {'V': 0,              # membrane potential
     'input': 0,          # synaptic input
     'spike': 0,          # spike state
     'refractory': 0,     # refractory state
     't_last_spike': -1e7 # last spike time
     }
)
```

Assuming the items in the neuron state `ST` of a LIF model are vectors, the update logic of the vector-based LIF neuron model is:

```
[11]:
```

```
tau_m = 10.; Vr = 0.; Vth = 10.; tau_ref = 0.

@bp.integrate
def int_f(V, t, Isyn):
    return (-V + Vr + Isyn) / tau_m

def update(ST, _t_):
    V = int_f(ST['V'], _t_, ST['input'])
    is_ref = _t_ - ST['t_last_spike'] < tau_ref
    V = np.where(is_ref, ST['V'], V)
    is_spike = V > Vth
    spike_idx = np.where(is_spike)[0]
    if len(spike_idx):
        V[spike_idx] = Vr
        is_ref[spike_idx] = 1.
        ST['t_last_spike'][spike_idx] = _t_
    ST['V'] = V
    ST['spike'] = is_spike
    ST['refractory'] = is_ref
    ST['input'] = 0.

lif = bp.NeuType(name='LIF',
                 ST=ST,
                 steps=update,
                 mode='vector')
```

Here, for the vector-based LIF model, we must differentiate the states of the neurons at every time point. For neurons in the refractory period (`is_ref`), we must keep \(V\) unchanged. For neurons in the spiking state (`is_spike`), we must reset the membrane potential. So, the definition of the vector-based LIF model is somewhat complex. However, the good news is that BrainPy supports the definition of neuron models in scalar mode, which means that at each time point, your model definition only needs to consider the behavior of a single neuron. Let’s take a look.
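The masking trick used above, `np.where(is_ref, ST['V'], V)`, can be checked in isolation with plain NumPy arrays:

```python
import numpy as np

V_old = np.array([1., 2., 3., 4.])              # potentials before the step
V_new = np.array([1.5, 2.5, 3.5, 4.5])          # candidate updated potentials
is_ref = np.array([False, True, False, True])   # refractory mask

# Refractory neurons keep their old potential; the others take the update.
V = np.where(is_ref, V_old, V_new)
# Neurons 1 and 3 keep 2. and 4.; neurons 0 and 2 advance to 1.5 and 3.5.
```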

### LIF model (scalar-based)

```
[12]:
```

```
def update(ST, _t):
    if _t - ST['t_last_spike'] > tau_ref:
        V = int_f(ST['V'], _t, ST['input'])
        if V >= Vth:
            V = Vr
            ST['t_last_spike'] = _t
            ST['spike'] = True
        else:
            ST['spike'] = False
        ST['V'] = V
    else:
        ST['spike'] = False
    ST['input'] = 0.

lif = bp.NeuType(name='LIF',
                 ST=ST,
                 steps=update,
                 mode='scalar')
```

As you can see, the scalar-based LIF model is intuitive and straightforward in BrainPy. If the neuron is not in the refractory period (`_t - ST['t_last_spike'] > tau_ref`), integrate the membrane potential by calling `int_f()`. If the membrane potential reaches the spike threshold (`V >= Vth`), reset it (`V = Vr`) and set the spike state to `True`.

### Reconciling the scalar-based and vector-based models

Although BrainPy provides a convenient way to define neuron models in scalar mode, this doesn’t mean we should give up the vector-based mode. Actually, when you run the model without JIT acceleration (i.e., `profile.set(jit=False)`), the vector-based model is much more efficient than the scalar-based model, thanks to the powerful array-oriented programming support in NumPy or PyTorch (BrainPy will support PyTorch in the future). On the contrary, the scalar-based LIF model is usually more efficient than the vector-based one in `JIT` mode, thanks to Numba’s JIT acceleration of for loops. Therefore, we recommend choosing the definition mode according to the running backend. For example, we can define a unified LIF model with an explicit backend judgement:

```
[13]:
```

```
def get_lif(tau_m=10., Vr=0., Vth=10., tau_ref=0.):
    ST = bp.types.NeuState({'V': 0, 'input': 0, 'spike': 0,
                            'refractory': 0, 't_last_spike': -1e7})

    @bp.integrate
    def int_f(V, t, Isyn):
        return (-V + Vr + Isyn) / tau_m

    if bp.profile.is_jit():
        def update(ST, _t):
            if _t - ST['t_last_spike'] > tau_ref:
                V = int_f(ST['V'], _t, ST['input'])
                if V >= Vth:
                    V = Vr
                    ST['t_last_spike'] = _t
                    ST['spike'] = True
                else:
                    ST['spike'] = False
                ST['V'] = V
            else:
                ST['spike'] = False
            ST['input'] = 0.

        lif = bp.NeuType(name='LIF', ST=ST, steps=update, mode='scalar')
    else:
        def update(ST, _t):
            V = int_f(ST['V'], _t, ST['input'])
            is_ref = _t - ST['t_last_spike'] < tau_ref
            V = np.where(is_ref, ST['V'], V)
            is_spike = V > Vth
            spike_idx = np.where(is_spike)[0]
            if len(spike_idx):
                V[spike_idx] = Vr
                is_ref[spike_idx] = 1.
                ST['t_last_spike'][spike_idx] = _t
            ST['V'] = V
            ST['spike'] = is_spike
            ST['refractory'] = is_ref
            ST['input'] = 0.

        lif = bp.NeuType(name='LIF', ST=ST, steps=update, mode='vector')
    return lif
```

In this way, you always get the most efficient implementation of your LIF neuron group: choose the JIT accelerator as the running backend and you get the scalar-based LIF model; choose the array-oriented accelerator `NumPy` or `PyTorch` as the running backend and you get the vector-based LIF model.
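That the two definitions really describe the same dynamics can be spot-checked outside BrainPy with stripped-down, NumPy-only versions of both update rules (illustrative; refractoriness is omitted since `tau_ref=0.` by default):

```python
import numpy as np

# NumPy-only sketch (outside BrainPy) comparing the scalar- and
# vector-based LIF update on the same state; refractoriness omitted.
tau_m = 10.; Vr = 0.; Vth = 10.; dt = 0.01

def step_scalar(v, inp):
    # One neuron: Euler step, then threshold-and-reset.
    v = v + dt * (-v + Vr + inp) / tau_m
    if v > Vth:
        v = Vr
    return v

def step_vector(V, inp):
    # Whole group: the same rule via array operations.
    V = V + dt * (-V + Vr + inp) / tau_m
    is_spike = V > Vth
    return np.where(is_spike, Vr, V)

V = np.array([0., 5., 9.999, 10.5])
inp = np.array([20., 20., 20., 20.])
V_vec = step_vector(V, inp)
V_sca = np.array([step_scalar(v, i) for v, i in zip(V, inp)])
```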

## brainpy.NeuGroup

Having talked about `brainpy.NeuType`, the usage of `brainpy.NeuGroup` is a piece of cake. This is because in a real project most of the effort goes into defining the models, and BrainPy provides a very convenient way to use the models you have defined. Specifically, a `brainpy.NeuGroup` receives the following specifications:

`model`

: The neuron type used to generate a neuron group.

`geometry`

: The geometry of the neuron group. Can be an int, or a tuple/list of ints.

`monitors`

: The items to monitor (i.e., record the history values).

Let’s take our defined HH model as an example.

```
[14]:
```

```
group = bp.NeuGroup(HH, geometry=10, monitors=['V', 'm', 'n', 'h'])
```

Each group has a powerful function: `.run()`. It receives the following arguments:

`duration`

: Specifies the simulation duration. Can be a tuple of `(start time, end time)`, or an int specifying the duration length (the default start time is then `0`).

`inputs`

: Specifies the inputs for each model component, in the format `(target, value, [operation])`. The default operation is `+`, which means the input `value` will be added to the `target`. Alternatively, the operation can be `-`, `*`, `/`, or `=`.
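The semantics of the `(target, value, [operation])` format can be sketched with a tiny stand-in (this is not BrainPy’s internal machinery; `apply_input` is a hypothetical helper):

```python
import operator

# Hypothetical helper, not BrainPy's internals: apply one input spec
# of the form (target, value, [operation]) to a neuron-state dict.
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv,
       '=': lambda old, new: new}

def apply_input(state, spec):
    target, value, op = (spec + ('+',))[:3]  # default operation is '+'
    key = target.split('.')[-1]              # 'ST.input' -> 'input'
    state[key] = OPS[op](state[key], value)

state = {'input': 1.0}
apply_input(state, ('ST.input', 5.))        # default '+': 1.0 + 5.0
print(state['input'])  # 6.0
apply_input(state, ('ST.input', 2.0, '='))  # '=' overwrites the target
print(state['input'])  # 2.0
```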

```
[15]:
```

```
group.run(100., inputs=('ST.input', 5.), report=True)
```

```
Compilation used 0.0000 s.
Start running ...
Run 10.0% used 0.104 s.
Run 20.0% used 0.208 s.
Run 30.0% used 0.317 s.
Run 40.0% used 0.420 s.
Run 50.0% used 0.522 s.
Run 60.0% used 0.632 s.
Run 70.0% used 0.738 s.
Run 80.0% used 0.844 s.
Run 90.0% used 0.952 s.
Run 100.0% used 1.059 s.
Simulation is done in 1.059 s.
```

```
[16]:
```

```
bp.visualize.line_plot(group.mon.ts, group.mon.V, show=True)
```

```
[17]:
```

```
bp.visualize.line_plot(group.mon.ts, group.mon.m, legend='m')
bp.visualize.line_plot(group.mon.ts, group.mon.h, legend='h')
bp.visualize.line_plot(group.mon.ts, group.mon.n, legend='n', show=True)
```