brainpy.neurons.ExpIF#

class brainpy.neurons.ExpIF(*args, input_var=True, noise=None, spike_fun=None, **kwargs)[source]#

Exponential integrate-and-fire neuron model.

Model Descriptions

In the exponential integrate-and-fire model [1], the differential equation for the membrane potential is given by

\[\begin{split}\tau\frac{d V}{d t} = -(V-V_{rest}) + \Delta_T e^{\frac{V-V_T}{\Delta_T}} + RI(t), \\ \text{after } V(t) > V_{th},\ V(t) = V_{reset} \text{ for the next } \tau_{ref} \text{ ms}\end{split}\]

This equation has an exponential nonlinearity with “sharpness” parameter \(\Delta_{T}\) and “threshold” \(V_T\) (the rheobase threshold, often written \(\vartheta_{rh}\)).

The moment when the membrane potential reaches the numerical threshold \(V_{th}\) defines the firing time \(t^{(f)}\). After firing, the membrane potential is reset to \(V_{reset}\) and integration restarts at time \(t^{(f)}+\tau_{\rm ref}\), where \(\tau_{\rm ref}\) is an absolute refractory time. If the numerical threshold is chosen sufficiently high, \(V_{th}\gg V_T+\Delta_T\), its exact value does not play any role: the upswing of the action potential for \(V\gg V_T+\Delta_{T}\) is so rapid that the membrane potential diverges within an extremely short time. The threshold \(V_{th}\) is introduced mainly for numerical convenience; for a formal mathematical analysis of the model, it can be pushed to infinity.
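
For orientation, the dynamics and reset rule above can be written out directly. The following is a minimal plain-NumPy Euler sketch, illustrative only and not the class implementation; parameter values follow the table further below.

import numpy as np

# Euler integration of the ExpIF equation above (illustrative sketch only).
V_rest, V_reset, V_th, V_T = -65., -68., -30., -59.9
delta_T, R, tau, tau_ref = 3.48, 1., 10., 1.7
dt, I_ext = 0.1, 10.

V, t_last_spike, trace = V_rest, -1e7, []
for step in range(int(300. / dt)):
    t = step * dt
    if t - t_last_spike > tau_ref:   # integrate only outside the refractory period
        dVdt = (-(V - V_rest) + delta_T * np.exp((V - V_T) / delta_T) + R * I_ext) / tau
        V = V + dVdt * dt
        if V > V_th:                 # numerical threshold crossed: spike and reset
            V, t_last_spike = V_reset, t
    trace.append(V)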

The model was first introduced by Nicolas Fourcaud-Trocmé, David Hansel, Carl van Vreeswijk and Nicolas Brunel [1]. The exponential nonlinearity was later confirmed by Badel et al. [3]. It is one of the prominent examples of a precise theoretical prediction in computational neuroscience that was later confirmed by experimental neuroscience.

Two important remarks:

  • (i) The right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data [3]. In this sense the exponential nonlinearity is not an arbitrary choice but directly supported by experimental evidence.

  • (ii) Even though it is a nonlinear model, it is simple enough to calculate the firing rate for constant input, and the linear response to fluctuations, even in the presence of input noise [4].

Model Examples

>>> import brainpy as bp
>>> group = bp.neurons.ExpIF(1)
>>> runner = bp.DSRunner(group, monitors=['V'], inputs=('input', 10.))
>>> runner.run(300.)
>>> bp.visualize.line_plot(runner.mon.ts, runner.mon.V, ylabel='V', show=True)
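
The spike flag listed under Model Variables can be monitored in the same way; for example, continuing the run above (this assumes the brainpy.visualize.raster_plot helper):

>>> runner = bp.DSRunner(group, monitors=['V', 'spike'], inputs=('input', 10.))
>>> runner.run(300.)
>>> bp.visualize.raster_plot(runner.mon.ts, runner.mon.spike, show=True)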

Model Parameters

Parameter   Init Value   Unit   Explanation
V_rest      -65          mV     Resting potential.
V_reset     -68          mV     Reset potential after spike.
V_th        -30          mV     Threshold potential of spike.
V_T         -59.9        mV     Threshold potential of generating action potential.
delta_T     3.48         -      Spike slope factor.
R           1            -      Membrane resistance.
tau         10           -      Membrane time constant, computed as R * C.
tau_ref     1.7          -      Refractory period length.
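
All parameters in the table can be passed as keyword arguments when constructing the group. The sketch below simply repeats the default values explicitly for illustration; the keyword names come from the table above:

>>> group = bp.neurons.ExpIF(10, V_rest=-65., V_reset=-68., V_th=-30., V_T=-59.9,
...                          delta_T=3.48, R=1., tau=10., tau_ref=1.7)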

Model Variables

Variable name   Initial Value   Explanation
V               0               Membrane potential.
input           0               External and synaptic input current.
spike           False           Flag to mark whether the neuron is spiking.
refractory      False           Flag to mark whether the neuron is in refractory period.
t_last_spike    -1e7            Last spike time stamp.
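
After construction, and during or after a run, the variables above are available as attributes of the group. For instance, continuing the example above:

>>> group.V             # membrane potential of each neuron
>>> group.spike         # boolean spike flags from the last update step
>>> group.t_last_spike  # time stamp of each neuron's last spike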

References

__init__(*args, input_var=True, noise=None, spike_fun=None, **kwargs)[source]#

Methods

__init__(*args[, input_var, noise, spike_fun])

add_aft_update(key, fun)

Add the after update into this node.

add_bef_update(key, fun)

Add the before update into this node.

add_inp_fun(key, fun)

Add an input function.

clear_input()

Empty function for clearing inputs.

cpu()

Move all variables into the CPU device.

cuda()

Move all variables into the GPU device.

derivative(V, t, I)

get_aft_update(key)

Get the after update of this node by the given key.

get_batch_shape([batch_size])

get_bef_update(key)

Get the before update of this node by the given key.

get_delay_data(identifier, delay_pos, *indices)

Get delay data according to the provided delay steps.

get_delay_var(name)

get_inp_fun(key)

Get the input function.

get_local_delay(var_name, delay_name)

Get the delay at the given identifier (name).

has_aft_update(key)

Whether this node has the after update of the given key.

has_bef_update(key)

Whether this node has the before update of the given key.

init_param(param[, shape, sharding])

Initialize parameters.

init_variable(var_data, batch_or_mode[, ...])

Initialize variables.

inv_scaling(x[, scale])

jit_step_run(i, *args, **kwargs)

The jitted step function for running.

load_state(state_dict, **kwargs)

Load states from a dictionary.

load_state_dict(state_dict[, warn, compatible])

Copy parameters and buffers from state_dict into this module and its descendants.

nodes([method, level, include_self])

Collect all children nodes.

offset_scaling(x[, bias, scale])

register_delay(identifier, delay_step, ...)

Register delay variable.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables[, var_cls])

register_local_delay(var_name, delay_name[, ...])

Register a local delay at the given delay time.

reset(*args, **kwargs)

Reset function which resets all variables in the model (including its children models).

reset_local_delays([nodes])

Reset local delay variables.

reset_state([batch_size])

Reset function which resets local states in this model.

return_info()

save_state(**kwargs)

Save states as a dictionary.

setattr(key, value)

Return type: None

state_dict(**kwargs)

Returns a dictionary containing a whole state of the module.

std_scaling(x[, scale])

step_run(i, *args, **kwargs)

The step run function.

sum_inputs(*args[, init, label])

Summarize all inputs using the input functions defined in .cur_inputs.

to(device)

Moves all variables into the given device.

tpu()

Move all variables into the TPU device.

tracing_variable(name, init, shape[, ...])

Initialize the variable which can be traced during computations and transformations.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

tree_flatten()

Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

Unflatten the data to construct an object of this class.

unique_name([name, type_])

Get the unique name for this object.

update([x])

The function to specify the updating rule.

update_local_delays([nodes])

Update local delay variables.

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.
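
As a brief illustration of the state-handling methods listed above, the round trip below relies only on the method names in this listing; the exact layout of the returned dictionary is not specified here:

>>> state = group.save_state()   # collect this node's variables into a dictionary
>>> group.reset_state()          # reset the local states
>>> group.load_state(state)      # restore the saved values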

Attributes

mode

Mode of the model, which is used to control the model's different behaviors.

name

Name of the model.

spk_dtype

supported_modes

Supported computing modes.

varshape

The shape of variables in the neuron group.

cur_inputs