class brainpy.synapses.AMPA(pre, post, conn, output=COBA, stp=None, comp_method='dense', g_max=0.42, delay_step=None, alpha=0.98, beta=0.18, T=0.5, T_duration=0.5, method='exp_auto', name=None, mode=None, stop_spike_gradient=False)[source]#

AMPA synapse model.

Model Descriptions

The AMPA receptor is an ionotropic receptor, i.e. an ion channel. When bound by neurotransmitters, it immediately opens, changing the membrane potential of the postsynaptic neuron.

A classical model uses a Markov process to describe the ion channel switching. Here \(g\) represents the probability of the channel being open, \(1-g\) the probability of it being closed, and \(\alpha\) and \(\beta\) are the transition probabilities. Because neurotransmitters open the channel, the transition probability from \(1-g\) to \(g\) depends on the neurotransmitter concentration. Denoting that concentration as \([T]\), we obtain the following Markov scheme:

\[(1-g) \underset{\beta}{\overset{\alpha [T]}{\rightleftharpoons}} g\]
Describing this process with a differential equation gives

\[\frac{dg}{dt} = \alpha [T] (1-g) - \beta g\]

where \(\alpha [T]\) denotes the transition probability from state \((1-g)\) to state \(g\), and \(\beta\) the probability of the reverse transition. \(\alpha\) is the binding constant, \(\beta\) the unbinding constant, and \([T]\) the neurotransmitter concentration, which is nonzero for 0.5 ms after each pre-synaptic spike.

Moreover, the synaptic current onto the post-synaptic neuron is formulated as

\[I_{syn} = g_{max} g (V-E)\]

where \(g_{max}\) is the maximum conductance, and \(E\) is the reversal potential.
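The two formulas above can be combined in a short Euler-integration sketch. This uses plain NumPy, independent of BrainPy; the parameter values are the defaults from the signature above, while the spike time, holding potential \(V = -65\) mV, and reversal potential \(E = 0\) mV are illustrative assumptions:

```python
import numpy as np

# Euler integration of the AMPA gating equation
#   dg/dt = alpha * [T] * (1 - g) - beta * g
# with the default parameters from the signature above.
alpha, beta = 0.98, 0.18
T, T_duration = 0.5, 0.5   # transmitter concentration [mM] and duration [ms]
g_max, E = 0.42, 0.0       # max conductance; E = 0 mV is a typical AMPA value
dt = 0.01                  # integration step [ms]

g = 0.0
spike_time = 1.0           # one pre-synaptic spike at t = 1 ms (assumed)
ts = np.arange(0., 10., dt)
gs = np.empty_like(ts)
for i, t in enumerate(ts):
    # [T] is nonzero only within T_duration after the spike
    TT = T if spike_time <= t < spike_time + T_duration else 0.
    g += dt * (alpha * TT * (1. - g) - beta * g)
    gs[i] = g

# Post-synaptic current at a holding potential V = -65 mV
V = -65.
I_syn = g_max * gs * (V - E)
```

Since \(g\) is a probability, it stays in \([0, 1]\): it rises while \([T]\) is nonzero and decays at rate \(\beta\) afterwards, so the current is a brief inward (negative) pulse.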

Model Examples

>>> import brainpy as bp
>>> from brainpy import neurons, synapses
>>> import matplotlib.pyplot as plt
>>> neu1 = neurons.HH(1)
>>> neu2 = neurons.HH(1)
>>> syn1 = synapses.AMPA(neu1, neu2, bp.connect.All2All())
>>> net = bp.Network(pre=neu1, syn=syn1, post=neu2)
>>> runner = bp.DSRunner(net, inputs=[('pre.input', 5.)], monitors=['pre.V', 'post.V', 'syn.g'])
>>> runner.run(150.)
>>> fig, gs = bp.visualize.get_figure(2, 1, 3, 8)
>>> fig.add_subplot(gs[0, 0])
>>> plt.plot(runner.mon.ts, runner.mon['pre.V'], label='pre-V')
>>> plt.plot(runner.mon.ts, runner.mon['post.V'], label='post-V')
>>> plt.legend()
>>> fig.add_subplot(gs[1, 0])
>>> plt.plot(runner.mon.ts, runner.mon['syn.g'], label='g')
>>> plt.legend()


  • pre (NeuGroup) – The pre-synaptic neuron group.

  • post (NeuGroup) – The post-synaptic neuron group.

  • conn (optional, ArrayType, dict of (str, ndarray), TwoEndConnector) – The synaptic connections.

  • comp_method (str) – The connection type used for model speed optimization. It can be sparse or dense. The default is dense.

  • delay_step (int, ArrayType, Initializer, Callable) – The delay length, expressed in integration steps, i.e. the value of \(\mathrm{delay\_time / dt}\).

  • E (float, ArrayType) –

    The reversal potential for the synaptic current. [mV]

    Deprecated since version 2.1.13: E is deprecated in the AMPA model. Please define E with brainpy.dyn.synouts.COBA. This parameter will be removed in version 2.2.0.

  • g_max (float, ArrayType, Initializer, Callable) – The synaptic strength (the maximum conductance). Default is 0.42.

  • alpha (float, ArrayType) – Binding constant.

  • beta (float, ArrayType) – Unbinding constant.

  • T (float, ArrayType) – Transmitter concentration when the synapse is triggered by a pre-synaptic spike. Default 0.5 [mM].

  • T_duration (float, ArrayType) – Transmitter concentration duration time after being triggered. Default 0.5 [ms].

  • name (str) – The name of this synaptic projection.

  • method (str) – The numerical integration method.
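As noted above, delay_step counts integration steps rather than milliseconds. A minimal sketch of the conversion (the delay_time and dt values here are hypothetical):

```python
# delay_step is the delay expressed in integration steps: delay_time / dt.
delay_time = 2.0   # hypothetical synaptic delay [ms]
dt = 0.1           # hypothetical simulation time step [ms]
delay_step = int(round(delay_time / dt))  # -> 20
```

The resulting integer would then be passed as `delay_step=20` to the constructor.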


__init__(pre, post, conn, output=COBA, stp=None, comp_method='dense', g_max=0.42, delay_step=None, alpha=0.98, beta=0.18, T=0.5, T_duration=0.5, method='exp_auto', name=None, mode=None, stop_spike_gradient=False)[source]#


__init__(pre, post, conn[, output, stp, ...])


Check whether the post group satisfies the requirement.


Check whether the pre group satisfies the requirement.



Move all variables into the CPU device.


Move all variables into the GPU device.

dg(g, t, TT)

get_delay_data(identifier, delay_step, *indices)

Get delay data according to the provided delay steps.

load_state_dict(state_dict[, warn, compatible])

Copy parameters and buffers from state_dict into this module and its descendants.

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_delay(identifier, delay_step, ...)

Register delay variable.

register_implicit_nodes(*nodes[, node_cls])

register_implicit_vars(*variables[, var_cls])

reset(*args, **kwargs)

Reset function which resets all variables in the model.


Reset local delay variables.


Reset function which resets the states in the model.

save_states(filename[, variables])

Save the model states.


Return a dictionary containing the whole state of the module.


Move all variables into the given device.


Move all variables into the TPU device.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.


Flattens the object as a PyTree.

tree_unflatten(aux, dynamic_values)

Unflatten the data to construct an object of this class.

unique_name([name, type_])

Get the unique name for this object.

update(tdi[, pre_spike])

The function to specify the updating rule.


Update local delay variables.

vars([method, level, include_self, ...])

Collect all variables in this node and the children nodes.



Global delay data, which stores the delay variables and corresponding delay targets.


Mode of the model, which is useful to control the multiple behaviors of the model.


Name of the model.