class brainpy.synapses.DualExponential(pre, post, conn, stp=None, output=None, comp_method='dense', g_max=1.0, tau_decay=10.0, tau_rise=1.0, delay_step=None, method='exp_auto', name=None, mode=None, stop_spike_gradient=False)[source]#

Dual exponential synapse model.

Model Descriptions

The dual exponential synapse model [1], also known as the difference-of-two-exponentials model, is given by:

\[g_{\mathrm{syn}}(t)=g_{\mathrm{max}} \frac{\tau_{1} \tau_{2}}{ \tau_{1}-\tau_{2}}\left(\exp \left(-\frac{t-t_{0}}{\tau_{1}}\right) -\exp \left(-\frac{t-t_{0}}{\tau_{2}}\right)\right)\]

where \(\tau_1\) is the time constant of the decay phase, \(\tau_2\) is the time constant of the rise phase, \(t_0\) is the time of the pre-synaptic spike, and \(g_{\mathrm{max}}\) is the maximal conductance.
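A handy consequence of this closed form (not stated in the docstring; the helper name below is ours for illustration): solving \(dg_{\mathrm{syn}}/dt = 0\) gives the time-to-peak \(\frac{\tau_{1} \tau_{2}}{\tau_{1}-\tau_{2}} \ln \frac{\tau_{1}}{\tau_{2}}\) after the spike. A minimal sketch:

```python
import math

def dual_exp_peak_time(tau_decay=10.0, tau_rise=1.0):
    # Time after the spike at which the closed-form conductance peaks,
    # obtained by solving d g_syn / dt = 0 for t - t0.
    return (tau_decay * tau_rise / (tau_decay - tau_rise)
            * math.log(tau_decay / tau_rise))

print(dual_exp_peak_time())  # ~2.558 ms for the defaults tau_decay=10, tau_rise=1
```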

However, this formula is inconvenient to implement directly. An equivalent formulation uses two coupled linear differential equations [2]:

\[\begin{split}\begin{aligned} &g_{\mathrm{syn}}(t)=g_{\mathrm{max}} \cdot g \cdot \mathrm{STP} \\ &\frac{d g}{d t}=-\frac{g}{\tau_{\mathrm{decay}}}+h \\ &\frac{d h}{d t}=-\frac{h}{\tau_{\text {rise }}}+ \delta\left(t_{0}-t\right), \end{aligned}\end{split}\]

where \(\mathrm{STP}\) is used to model the short-term plasticity effect of synapses.
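The equivalence of the two formulations can be checked numerically. The sketch below uses plain Python (no BrainPy; the function names are ours for illustration): it Euler-integrates the coupled ODEs for a single pre-synaptic spike at \(t_0 = 0\), with \(g_{\mathrm{max}} = 1\) and no STP, and compares the result against the closed-form expression above:

```python
import math

def dual_exp_closed_form(t, tau_decay=10.0, tau_rise=1.0):
    # Closed-form conductance from the docstring (g_max = 1, t0 = 0).
    c = tau_decay * tau_rise / (tau_decay - tau_rise)
    return c * (math.exp(-t / tau_decay) - math.exp(-t / tau_rise))

def dual_exp_ode(t_end, dt=1e-4, tau_decay=10.0, tau_rise=1.0):
    # Explicit Euler integration of the coupled ODEs:
    #   dg/dt = -g/tau_decay + h,   dh/dt = -h/tau_rise.
    # The spike's unit impulse delta(t - t0) makes h jump by 1 at t0 = 0.
    g, h = 0.0, 1.0
    for _ in range(int(round(t_end / dt))):
        g += dt * (-g / tau_decay + h)
        h += dt * (-h / tau_rise)
    return g

t = 5.0  # ms after the spike
print(dual_exp_ode(t), dual_exp_closed_form(t))  # agree up to O(dt) Euler error
```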

Model Examples

>>> import brainpy as bp
>>> from brainpy import neurons, synapses, synouts
>>> import matplotlib.pyplot as plt
>>> neu1 = neurons.LIF(1)
>>> neu2 = neurons.LIF(1)
>>> syn1 = synapses.DualExponential(neu1, neu2, bp.connect.All2All(), output=synouts.CUBA())
>>> net = bp.Network(pre=neu1, syn=syn1, post=neu2)
>>> runner = bp.DSRunner(net, inputs=[('pre.input', 25.)], monitors=['pre.V', 'post.V', 'syn.g', 'syn.h'])
>>> fig, gs = bp.visualize.get_figure(2, 1, 3, 8)
>>> fig.add_subplot(gs[0, 0])
>>> plt.plot(runner.mon.ts, runner.mon['pre.V'], label='pre-V')
>>> plt.plot(runner.mon.ts, runner.mon['post.V'], label='post-V')
>>> plt.legend()
>>> fig.add_subplot(gs[1, 0])
>>> plt.plot(runner.mon.ts, runner.mon['syn.g'], label='g')
>>> plt.plot(runner.mon.ts, runner.mon['syn.h'], label='h')
>>> plt.legend()
>>> plt.show()
Parameters

  • pre (NeuDyn) – The pre-synaptic neuron group.

  • post (NeuDyn) – The post-synaptic neuron group.

  • conn (optional, ArrayType, dict of (str, ndarray), TwoEndConnector) – The synaptic connections.

  • comp_method (str) – The connection type used for model speed optimization. It can be sparse or dense. The default is dense.

  • delay_step (int, ArrayType, Initializer, Callable) – The delay length. It should be the value of \(\mathrm{delay\_time / dt}\).

  • tau_decay (float, ArrayType, ndarray) – The time constant of the synaptic decay phase. Default is 10. [ms]

  • tau_rise (float, ArrayType, ndarray) – The time constant of the synaptic rise phase. Default is 1. [ms]

  • g_max (float, ArrayType, Initializer, Callable) – The synaptic strength (the maximum conductance). Default is 1.

  • name (str) – The name of this synaptic projection.

  • method (str) – The numerical integration method.
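Since delay_step counts integration steps rather than milliseconds, converting a physical delay is straightforward; a minimal sketch, assuming BrainPy's default numerical integration step of dt = 0.1 ms:

```python
dt = 0.1          # numerical integration step in ms (BrainPy's default)
delay_time = 2.0  # desired synaptic delay in ms
delay_step = int(round(delay_time / dt))  # delay_time / dt, as an integer
print(delay_step)  # 20
```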


__init__(pre, post, conn, stp=None, output=None, comp_method='dense', g_max=1.0, tau_decay=10.0, tau_rise=1.0, delay_step=None, method='exp_auto', name=None, mode=None, stop_spike_gradient=False)[source]#


__init__(pre, post, conn[, stp, output, ...])

add_aft_update(key, fun)

Add an after-update function to this node.

add_bef_update(key, fun)

Add a before-update function to this node.

add_inp_fun(key, fun)

Add an input function.


Check whether post group satisfies the requirement.


Check whether pre group satisfies the requirement.

clear_input(*args, **kwargs)

Empty function for clearing inputs.


Move all variables onto the CPU device.


Move all variables onto the GPU device.


Get the after update of this node by the given key.


Get the before update of this node by the given key.

get_delay_data(identifier, delay_pos, *indices)

Get delay data according to the provided delay steps.



Get the input function.

get_local_delay(var_name, delay_name)

Get the delay at the given identifier (name).


Whether this node has the after update of the given key.


Whether this node has the before update of the given key.

jit_step_run(i, *args, **kwargs)

The jitted step function for running.

load_state(state_dict, **kwargs)

Load states from a dictionary.

load_state_dict(state_dict[, warn, compatible])

Copy parameters and buffers from state_dict into this module and its descendants.

nodes([method, level, include_self])

Collect all children nodes.

register_delay(identifier, delay_step, ...)

Register delay variable.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables[, var_cls])

register_local_delay(var_name, delay_name[, ...])

Register a local delay at the given delay time.

reset(*args, **kwargs)

Reset function that resets all variables in the model (including its child models).


Reset local delay variables.

reset_state(*args, **kwargs)

Reset function which resets local states in this model.


Save states as a dictionary.

setattr(key, value)




Returns a dictionary containing a whole state of the module.

step_run(i, *args, **kwargs)

The step run function.

sum_inputs(*args[, init, label])

Summarize all inputs using the input functions defined in .cur_inputs.


Move all variables onto the given device.


Move all variables onto the TPU device.

tracing_variable(name, init, shape[, ...])

Initialize the variable which can be traced during computations and transformations.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.


Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

Unflatten the data to construct an object of this class.

unique_name([name, type_])

Get the unique name for this object.


The function to specify the updating rule.


Update local delay variables.

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.




Mode of the model, used to control its multiple behaviors.


Name of the model.


Supported computing modes.