Efficient Synaptic Computation

In a real project, most of the simulation time is spent on the computation of the synapses. Therefore, figuring out the most efficient way to do synaptic computation is a necessary step to accelerate your computational project. Here, let's take an E/I balanced network as an example to illustrate how to code efficient synaptic computation.

[1]:
import brainpy as bp

import numpy as np
[2]:
import warnings
warnings.filterwarnings("ignore")

The E/I balanced network COBA is adopted from Vogels & Abbott (2005) [1].

[3]:
# Parameters for network structure
num = 4000
num_exc = int(num * 0.75)
num_inh = int(num * 0.25)

Neuron Model

In the COBA network, each integrate-and-fire neuron is characterized by a time constant \(\tau\) = 20 ms and a resting membrane potential \(V_{rest}\) = -60 mV. Whenever the membrane potential crosses the spiking threshold of -50 mV, an action potential is generated and the membrane potential is reset to the resting potential, where it remains clamped for a 5 ms refractory period. The membrane voltage is calculated as follows:

\[\tau {dV \over dt} = (V_{rest} - V) + g_{exc}(E_{exc} - V) + g_{inh}(E_{inh} - V)\]

where reversal potentials are \(E_{exc} = 0\) mV and \(E_{inh} = -80\) mV.

[4]:
# Parameters for the neuron
tau = 20  # ms
Vt = -50  # mV
Vr = -60  # mV
El = -60  # mV
ref_time = 5.0  # refractory time, ms
I = 20.  # constant background input
[5]:
class LIF(bp.NeuGroup):
    target_backend = ['numpy', 'numba', 'numba-cuda']

    @staticmethod
    def dev_V(V, t, Iexc):
        dV = (Iexc + El - V) / tau
        return dV

    def __init__(self, size, **kwargs):
        # variables
        self.V = bp.ops.zeros(size)
        self.spike = bp.ops.zeros(size)
        self.input = bp.ops.zeros(size)
        self.t_last_spike = bp.ops.ones(size) * -1e7

        # initialize
        self.int_V = bp.odeint(self.dev_V)
        super(LIF, self).__init__(size=size, **kwargs)

    def update(self, _t):
        for i in range(self.num):
            self.spike[i] = 0.
            if (_t - self.t_last_spike[i]) > ref_time:  # only integrate outside the refractory period
                V = self.int_V(self.V[i], _t, self.input[i])
                if V >= Vt:  # threshold crossed: reset and record the spike
                    self.V[i] = Vr
                    self.spike[i] = 1.
                    self.t_last_spike[i] = _t
                else:
                    self.V[i] = V
            self.input[i] = I  # reset the input to the constant background current

Synapse Model

In the COBA network, when a neuron fires, the appropriate synaptic variables of its postsynaptic targets are increased: \(g_{exc} \gets g_{exc} + \Delta g_{exc}\) for an excitatory presynaptic neuron and \(g_{inh} \gets g_{inh} + \Delta g_{inh}\) for an inhibitory presynaptic neuron. Otherwise, these variables obey the following equations:

\[\begin{split}\tau_{exc} {dg_{exc} \over dt} = -g_{exc} \quad (1) \\ \tau_{inh} {dg_{inh} \over dt} = -g_{inh} \quad (2)\end{split}\]

with synaptic time constants \(\tau_{exc} = 5\) ms, \(\tau_{inh} = 10\) ms, \(\Delta g_{exc} = 0.6\) and \(\Delta g_{inh} = 6.7\).

[6]:
# Parameters for the synapse
tau_exc = 5  # ms
tau_inh = 10  # ms
E_exc = 0.  # mV
E_inh = -80.  # mV
delta_exc = 0.6  # excitatory synaptic weight
delta_inh = 6.7  # inhibitory synaptic weight
[7]:
def run_net(neu_model, syn_model, backend='numba'):
    bp.backend.set(backend)

    E = neu_model(num_exc, monitors=['spike'])
    E.V = np.random.randn(num_exc) * 5. + Vr
    I = neu_model(num_inh, monitors=['spike'])
    I.V = np.random.randn(num_inh) * 5. + Vr
    E2E = syn_model(pre=E, post=E, conn=bp.connect.FixedProb(0.02),
                    tau=tau_exc, weight=delta_exc, E=E_exc)
    E2I = syn_model(pre=E, post=I, conn=bp.connect.FixedProb(0.02),
                    tau=tau_exc, weight=delta_exc, E=E_exc)
    I2E = syn_model(pre=I, post=E, conn=bp.connect.FixedProb(0.02),
                    tau=tau_inh, weight=delta_inh, E=E_inh)
    I2I = syn_model(pre=I, post=I, conn=bp.connect.FixedProb(0.02),
                    tau=tau_inh, weight=delta_inh, E=E_inh)

    net = bp.Network(E, I, E2E, E2I, I2E, I2I)
    t = net.run(100., report=True)

    fig, gs = bp.visualize.get_figure(row_num=5, col_num=1, row_len=1, col_len=10)
    fig.add_subplot(gs[:4, 0])
    bp.visualize.raster_plot(E.mon.ts, E.mon.spike, ylabel='E Group', xlabel='')
    fig.add_subplot(gs[4, 0])
    bp.visualize.raster_plot(I.mon.ts, I.mon.spike, ylabel='I Group', show=True)

    return t

Matrix-based connection

The matrix-based synaptic connection is one of the most intuitive ways to build synaptic computations. The connection matrix between two neuron groups can easily be obtained through connector.requires('conn_mat') (see Synaptic Connectivity for details). Each connection matrix is an array with the shape (num_pre, num_post), like

(figure: an example conn_mat of shape (num_pre, num_post))
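
For instance, a minimal sketch of obtaining a connection matrix directly, mirroring the connector calls used in the synapse classes below (the toy sizes 8 and 10 are arbitrary):

conn = bp.connect.FixedProb(0.02)(8, 10)
conn_mat = conn.requires('conn_mat')  # a (8, 10) array of 0/1 connection flags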

Based on conn_mat, the updating logic of the above synapses can be coded as:

[8]:
class SynMat1(bp.TwoEndConn):
    target_backend = ['numpy', 'numba']

    @staticmethod
    def dev_g(g, t, tau):
        dg = - g / tau
        return dg

    def __init__(self, pre, post, conn, tau, weight, E, **kwargs):
        # parameters
        self.tau = tau
        self.weight = weight
        self.E = E

        # p1: connections
        self.conn = conn(pre.size, post.size)
        self.conn_mat = self.conn.requires('conn_mat')

        # variables
        self.g = bp.ops.zeros(self.conn_mat.shape)

        # initialize
        self.int_g = bp.odeint(self.dev_g)
        super(SynMat1, self).__init__(pre=pre, post=post, **kwargs)

    def update(self, _t):
        self.g = self.int_g(self.g, _t, self.tau)
        # p2
        spike_on_syn = np.expand_dims(self.pre.spike, 1) * self.conn_mat
        # p3
        self.g += spike_on_syn * self.weight
        # p4
        self.post.input += np.sum(self.g, axis=0) * (self.E - self.post.V)

In the SynMat1 class defined above, at line "p1" we require a conn_mat structure for the later synaptic computation; at "p2" we obtain the spike state of each synaptic connection from conn_mat and the presynaptic spikes; at "p3", the spike-triggered increments are added to the synaptic variables; finally, at line "p4", the synaptic values are summed by np.sum(self.g, axis=0) to obtain the effective conductance onto each post-synaptic neuron.
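
As a toy illustration of the broadcasting at "p2" (hand-picked numbers, NumPy only; not part of the network code):

pre_spike = np.array([1., 0.])
conn_mat = np.array([[0., 1., 1.],
                     [1., 0., 0.]])
spike_on_syn = np.expand_dims(pre_spike, 1) * conn_mat
# spike_on_syn -> [[0., 1., 1.],
#                  [0., 0., 0.]]   only the synapses of the spiking pre neuron 0 are marked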

Now, let’s inspect the performance of this matrix-based synapse.

[9]:
t_syn_mat1 = run_net(neu_model=LIF, syn_model=SynMat1)
Compilation used 5.9524 s.
Start running ...
Run 10.0% used 18.150 s.
Run 20.0% used 36.369 s.
Run 30.0% used 55.145 s.
Run 40.0% used 73.973 s.
Run 50.0% used 92.088 s.
Run 60.0% used 110.387 s.
Run 70.0% used 128.580 s.
Run 80.0% used 146.776 s.
Run 90.0% used 165.066 s.
Run 100.0% used 183.273 s.
Simulation is done in 183.273 s.

../_images/tutorials_efficient_synaptic_computation_17_1.png

This matrix-based synapse structure is very inefficient, because more than 99.9% of the time is spent on the synaptic computation. We can verify this by running the neuron group model alone.

[10]:
group = LIF(num, monitors=['spike'])
group.V = np.random.randn(num) * 5. + Vr
group.run(100., inputs=('input', 5.), report=True)
Compilation used 0.2648 s.
Start running ...
Run 10.0% used 0.000 s.
Run 20.0% used 0.000 s.
Run 30.0% used 0.010 s.
Run 40.0% used 0.010 s.
Run 50.0% used 0.010 s.
Run 60.0% used 0.010 s.
Run 70.0% used 0.020 s.
Run 80.0% used 0.020 s.
Run 90.0% used 0.020 s.
Run 100.0% used 0.030 s.
Simulation is done in 0.030 s.

[10]:
0.03049778938293457

As you can see, the neuron groups alone take only 0.030 s to run. Normalized by the total running time of 183.273 s, the neuron group running accounts for only about 0.016 percent.

Event-based updating

The inefficiency of the above matrix-based computation comes from the enormous waste of time on synaptic updates. First, it is uncommon for a neuron to generate a spike at any given time step; second, within a group of neurons, the generated spikes (self.pre.spike) are usually sparse. Therefore, at most time points self.pre.spike contains mostly zeros, so self.g += spike_on_syn * self.weight adds a large number of unnecessary zeros.

Alternatively, we can update self.g only when a pre-synaptic neuron produces a spike event (this is called the event-based updating method):

[11]:
class SynMat2(bp.TwoEndConn):
    target_backend = ['numpy', 'numba']

    @staticmethod
    def dev_g(g, t, tau):
        dg = - g / tau
        return dg

    def __init__(self, pre, post, conn, tau, weight, E, **kwargs):
        # parameters
        self.tau = tau
        self.weight = weight
        self.E = E

        # connections
        self.conn = conn(pre.size, post.size)
        self.conn_mat = self.conn.requires('conn_mat')

        # variables
        self.g = bp.ops.zeros(self.conn_mat.shape)

        # initialize
        self.int_g = bp.odeint(self.dev_g)
        super(SynMat2, self).__init__(pre=pre, post=post, **kwargs)

    def update(self, _t):
        self.g = self.int_g(self.g, _t, self.tau)
        # p1
        for pre_i, spike in enumerate(self.pre.spike):
            if spike:
                self.g[pre_i] += self.conn_mat[pre_i] * self.weight
        self.post.input += np.sum(self.g, axis=0) * (self.E - self.post.V)

Compared to SynMat1, we replace "p2" and "p3" in SynMat1 with "p1" in SynMat2. Now, the connected post-synaptic states g are updated (self.g[pre_i] += self.conn_mat[pre_i] * self.weight) only when the pre-synaptic neuron emits a spike (if spike).

[12]:
t_syn_mat2 = run_net(neu_model=LIF, syn_model=SynMat2)
Compilation used 5.4851 s.
Start running ...
Run 10.0% used 9.456 s.
Run 20.0% used 18.824 s.
Run 30.0% used 28.511 s.
Run 40.0% used 38.090 s.
Run 50.0% used 47.822 s.
Run 60.0% used 57.400 s.
Run 70.0% used 66.850 s.
Run 80.0% used 76.544 s.
Run 90.0% used 86.045 s.
Run 100.0% used 95.620 s.
Simulation is done in 95.620 s.

../_images/tutorials_efficient_synaptic_computation_25_1.png

Such event-based matrix connection roughly doubles the running speed (95.620 s vs. 183.273 s), but it is still not good enough.

Vector-based connection

Matrix-based synaptic computation may be straightforward, but it can waste a great deal of memory and computation. Imagine connecting 10,000 pre-synaptic neurons to 10,000 post-synaptic neurons with a 10% random connection probability. With a matrix, you need \(10^8\) floats to store the synaptic states, and at each update step you must compute over all \(10^8\) of them, even though only \(10^7\) synapses actually exist. This is a huge waste of memory and computing resources.
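
A quick back-of-the-envelope check of these numbers (a toy calculation, not tutorial code):

num_pre, num_post, prob = 10000, 10000, 0.1
dense_size = num_pre * num_post                # 10**8 synaptic states stored and updated in the matrix form
sparse_size = int(num_pre * num_post * prob)   # ~10**7 synapses that actually exist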

pre_ids and post_ids

An effective way to solve this problem is to use vectors to store the connectivity between neuron groups and the corresponding synaptic states. For the connectivity defined above by conn_mat, we can align the connected pre-synaptic and post-synaptic neurons by two one-dimensional arrays, pre_ids and post_ids:

(figure: pre_ids and post_ids vectors aligned over the synapses)

In this way, we only need two index vectors (pre_ids and post_ids, each with \(10^7\) elements) to store the synaptic connectivity, and at each time step we only need to update a synaptic state vector of \(10^7\) floats.
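
For a small toy conn_mat, the two index vectors can be read off from the nonzero entries (a sketch; the library's own ordering may differ):

conn_mat = np.array([[0, 1, 1],
                     [1, 0, 0]])
pre_ids, post_ids = np.where(conn_mat)
# pre_ids  -> array([0, 0, 1])
# post_ids -> array([1, 2, 0])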

[13]:
class SynVec1(bp.TwoEndConn):
    target_backend = ['numpy', 'numba', 'numba-cuda']

    @staticmethod
    def dev_g(g, t, tau):
        dg = - g / tau
        return dg

    def __init__(self, pre, post, conn, tau, weight, E, **kwargs):
        # parameters
        self.tau = tau
        self.weight = weight
        self.E = E

        # connections
        self.conn = conn(pre.size, post.size)
        self.pre_ids, self.post_ids = self.conn.requires('pre_ids', 'post_ids')
        self.num = len(self.pre_ids)

        # variables
        self.g = bp.ops.zeros(self.num)

        # initialize
        self.int_g = bp.odeint(self.dev_g)
        super(SynVec1, self).__init__(pre=pre, post=post, **kwargs)

    def update(self, _t):
        self.g = self.int_g(self.g, _t, self.tau)
        # p1: update
        for syn_i in range(self.num):
            pre_i = self.pre_ids[syn_i]
            if self.pre.spike[pre_i]:
                self.g[syn_i] += self.weight
        # p2: output
        for syn_i in range(self.num):
            post_i = self.post_ids[syn_i]
            self.post.input[post_i] += self.g[syn_i] * (self.E - self.post.V[post_i])

In the SynVec1 class, we first update the synaptic states in the "p1" code block, where the synaptic state self.g[syn_i] is incremented whenever its pre-synaptic neuron generates a spike (if self.pre.spike[pre_i]); then, in the "p2" code block, we output the synaptic states onto the post-synaptic neurons.

[14]:
t_syn_vec1 = run_net(neu_model=LIF, syn_model=SynVec1)
Compilation used 2.4805 s.
Start running ...
Run 10.0% used 0.190 s.
Run 20.0% used 0.391 s.
Run 30.0% used 0.611 s.
Run 40.0% used 0.802 s.
Run 50.0% used 0.995 s.
Run 60.0% used 1.185 s.
Run 70.0% used 1.391 s.
Run 80.0% used 1.621 s.
Run 90.0% used 1.819 s.
Run 100.0% used 2.009 s.
Simulation is done in 2.009 s.

../_images/tutorials_efficient_synaptic_computation_35_1.png

Great! Transforming the matrix-based connection into a vector-based connection gives us a huge speed boost (2.009 s vs. 95.620 s). However, there is still redundancy in the SynVec1 class: a pre-synaptic neuron may connect to many post-synaptic neurons, so at each update step the spike state of the same pre-synaptic neuron (self.pre.spike[pre_i]) is checked many times.

pre2syn and post2syn

To solve this problem, we create two more synaptic structures, pre2syn and post2syn, which help us retrieve the synapse states connected to pre-synaptic neuron \(i\) and to post-synaptic neuron \(j\).

In a pre2syn list, each pre2syn[i] stores the indexes of the synaptic states projected from pre-synaptic neuron \(i\).

(figure: the pre2syn list)

Similarly, we can create a post2syn list to indicate the connections between synapses and post-synaptic neurons. For each post-synaptic neuron \(j\), post2syn[j] stores the indexes of the synaptic elements connected to it.

(figure: the post2syn list)
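
As a sketch of what these lists contain, hand-built from a small toy pre_ids/post_ids (not the connector's internal code):

pre_ids  = np.array([0, 0, 1])
post_ids = np.array([1, 2, 0])
pre2syn  = [np.where(pre_ids == i)[0] for i in range(2)]    # pre2syn[0] -> [0, 1], pre2syn[1] -> [2]
post2syn = [np.where(post_ids == j)[0] for j in range(3)]   # post2syn[0] -> [2], post2syn[1] -> [0], post2syn[2] -> [1]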

Based on these connectivity mappings, we can define another version of synapse model by using pre2syn and post2syn:

[15]:
class SynVec2(bp.TwoEndConn):
    target_backend = ['numpy', 'numba']

    @staticmethod
    def dev_g(g, t, tau):
        dg = - g / tau
        return dg

    def __init__(self, pre, post, conn, tau, weight, E, **kwargs):
        # parameters
        self.tau = tau
        self.weight = weight
        self.E = E

        # connections
        self.conn = conn(pre.size, post.size)
        self.pre_ids, self.pre2syn, self.post2syn = self.conn.requires('pre_ids', 'pre2syn', 'post2syn')
        self.num = len(self.pre_ids)

        # variables
        self.g = bp.ops.zeros(self.num)

        # initialize
        self.int_g = bp.odeint(self.dev_g)
        super(SynVec2, self).__init__(pre=pre, post=post, **kwargs)

    def update(self, _t):
        self.g = self.int_g(self.g, _t, self.tau)
        # p1: update
        for pre_i in range(self.pre.num):
            if self.pre.spike[pre_i]:
                for syn_i in self.pre2syn[pre_i]:
                    self.g[syn_i] += self.weight
        # p2: output
        for post_i in range(self.post.num):
            for syn_i in self.post2syn[post_i]:
                self.post.input[post_i] += self.g[syn_i] * (self.E - self.post.V[post_i])

In the SynVec2 class, in the "p1" code block we update the synaptic states with a for-loop over the pre-synaptic neurons. If pre-synaptic neuron pre_i elicits a spike (self.pre.spike[pre_i]), we loop over its connected synaptic states with for syn_i in self.pre2syn[pre_i]. In this way, the spike state of each pre-synaptic neuron is checked only once. Similarly, in the "p2" code block, the synaptic output is implemented with a for-loop over the post-synaptic neurons.

[16]:
t_syn_vec2 = run_net(neu_model=LIF, syn_model=SynVec2)
Compilation used 2.7670 s.
Start running ...
Run 10.0% used 0.180 s.
Run 20.0% used 0.383 s.
Run 30.0% used 0.573 s.
Run 40.0% used 0.764 s.
Run 50.0% used 0.994 s.
Run 60.0% used 1.186 s.
Run 70.0% used 1.378 s.
Run 80.0% used 1.568 s.
Run 90.0% used 1.765 s.
Run 100.0% used 1.995 s.
Simulation is done in 1.995 s.

../_images/tutorials_efficient_synaptic_computation_46_1.png

We only get a small speed improvement (1.995 s vs. 2.009 s), because the optimization of the "update" block has run its course: most of the running cost is now spent in the "output" block.

pre2post and post2pre

Notice that for this kind of synapse model, the synaptic states \(g\) onto a post-synaptic neuron can be modeled together. Because the differential equations (1) and (2) are linear, the conductances triggered by different pre-synaptic spikes simply superpose. This means we can declare a synaptic state self.g with the shape of post.num, instead of the number of synapses.

To achieve this, we create another two synaptic structures, pre2post and post2pre, which establish a direct mapping between the pre-synaptic and the post-synaptic neurons. pre2post contains the indexes of the connected post-synaptic neurons: pre2post[i] retrieves the post neuron ids targeted by pre-synaptic neuron \(i\). post2pre contains the indexes of the pre-synaptic neurons: post2pre[j] retrieves the pre-synaptic neuron ids that project to post-synaptic neuron \(j\).

(figure: the pre2post list)

(figure: the post2pre list)
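
A sketch of the two mappings on the same small toy connectivity (hand-built for illustration only):

pre_ids  = np.array([0, 0, 1])
post_ids = np.array([1, 2, 0])
pre2post = [post_ids[pre_ids == i] for i in range(2)]   # pre2post[0] -> [1, 2], pre2post[1] -> [0]
post2pre = [pre_ids[post_ids == j] for j in range(3)]   # post2pre[0] -> [1], post2pre[1] -> [0], post2pre[2] -> [0]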

[17]:
class SynVec3(bp.TwoEndConn):
    target_backend = ['numpy', 'numba']

    @staticmethod
    def dev_g(g, t, tau):
        dg = - g / tau
        return dg

    def __init__(self, pre, post, conn, tau, weight, E, **kwargs):
        # parameters
        self.tau = tau
        self.weight = weight
        self.E = E

        # connections
        self.conn = conn(pre.size, post.size)
        self.pre2post = self.conn.requires('pre2post')

        # variables
        self.g = bp.ops.zeros(post.num)

        # initialize
        self.int_g = bp.odeint(self.dev_g)
        super(SynVec3, self).__init__(pre=pre, post=post, **kwargs)

    def update(self, _t):
        self.g = self.int_g(self.g, _t, self.tau)
        # p1: update
        for pre_i in range(self.pre.num):
            if self.pre.spike[pre_i]:
                for post_i in self.pre2post[pre_i]:
                    self.g[post_i] += self.weight
        # p2: output
        self.post.input += self.g * (self.E - self.post.V)

In the SynVec3 class, we require a pre2post structure, and then in the "p1" code block, when pre-synaptic neuron pre_i emits a spike, the state self.g[post_i] of each connected post-synaptic neuron is incremented by the conductance weight.

[18]:
t_syn_vec3 = run_net(neu_model=LIF, syn_model=SynVec3)
Compilation used 2.7279 s.
Start running ...
Run 10.0% used 0.000 s.
Run 20.0% used 0.010 s.
Run 30.0% used 0.020 s.
Run 40.0% used 0.020 s.
Run 50.0% used 0.030 s.
Run 60.0% used 0.040 s.
Run 70.0% used 0.040 s.
Run 80.0% used 0.050 s.
Run 90.0% used 0.060 s.
Run 100.0% used 0.060 s.
Simulation is done in 0.060 s.

../_images/tutorials_efficient_synaptic_computation_56_1.png

The running speed gets another huge boost (0.060 s vs. 1.995 s), which demonstrates the effectiveness of this kind of synaptic computation.

pre_slice and post_slice

However, it is not perfect yet. pre2syn, post2syn, pre2post, and post2pre are all lists, which cannot be directly deployed to GPU devices; GPU devices prefer plain arrays.

To solve this problem, we can instead create a post_slice connection structure that stores the start and end positions in the synapse state array for each connected post-synaptic neuron \(j\). post_slice can be built by aligning the pre ids according to the sequential post ids \(0, 1, 2, ...\) (see the following figure). For each post neuron \(j\), start, end = post_slice[j] retrieves the start/end positions of its connected synapse states.

(figure: the post_slice structure)
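
A sketch of how the "output" step uses post_slice (toy numbers: two post neurons, three synapses already sorted by post id; not part of the network code):

post_slice = np.array([[0, 2], [2, 3]])   # post neuron 0 owns synapses 0-1, post neuron 1 owns synapse 2
g = np.array([0.1, 0.2, 0.4])             # one state per synapse
post_input = np.zeros(2)
for post_i in range(2):
    start, end = post_slice[post_i]
    post_input[post_i] += g[start:end].sum()
# post_input -> [0.3, 0.4]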

Therefore, the pre2syn and post2syn updating logic (in the SynVec2 class) can be replaced by post_slice and pre_ids:

[19]:
class SynVec4(bp.TwoEndConn):
    target_backend = ['numpy', 'numba', 'numba-cuda']

    @staticmethod
    def dev_g(g, t, tau):
        dg = - g / tau
        return dg

    def __init__(self, pre, post, conn, tau, weight, E, **kwargs):
        # parameters
        self.tau = tau
        self.weight = weight
        self.E = E

        # connections
        self.conn = conn(pre.size, post.size)
        self.pre_ids, self.post_slice = self.conn.requires('pre_ids', 'post_slice')
        self.num = len(self.pre_ids)

        # variables
        self.g = bp.ops.zeros(self.num)

        # initialize
        self.int_g = bp.odeint(self.dev_g)
        super(SynVec4, self).__init__(pre=pre, post=post, **kwargs)

    def update(self, _t):
        self.g = self.int_g(self.g, _t, self.tau)
        # p1: update
        for syn_i in range(self.num):
            pre_i = self.pre_ids[syn_i]
            if self.pre.spike[pre_i]:
                self.g[syn_i] += self.weight
        # p2: output
        for post_i in range(self.post.num):
            start, end = self.post_slice[post_i]
            for syn_i in range(start, end):
                self.post.input[post_i] += self.g[syn_i] * (self.E - self.post.V[post_i])
[20]:
t_syn_vec4 = run_net(neu_model=LIF, syn_model=SynVec4)
Compilation used 2.5473 s.
Start running ...
Run 10.0% used 0.190 s.
Run 20.0% used 0.372 s.
Run 30.0% used 0.552 s.
Run 40.0% used 0.744 s.
Run 50.0% used 0.934 s.
Run 60.0% used 1.158 s.
Run 70.0% used 1.351 s.
Run 80.0% used 1.538 s.
Run 90.0% used 1.718 s.
Run 100.0% used 1.915 s.
Simulation is done in 1.915 s.

../_images/tutorials_efficient_synaptic_computation_64_1.png

Similarly, a connection mapping pre_slice can also be implemented, in which for each pre-synaptic neuron \(i\), start, end = pre_slice[i] retrieves the start/end position of the connected synapse states.

(figure: the pre_slice structure)
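
A sketch of the "update" step with pre_slice (toy numbers: two pre neurons, three synapses sorted by pre id, one conductance per post neuron as in SynVec5 below; not part of the network code):

pre_slice = np.array([[0, 2], [2, 3]])   # pre neuron 0 owns synapses 0-1, pre neuron 1 owns synapse 2
post_ids  = np.array([1, 2, 0])
g = np.zeros(3)                          # one value per post neuron
weight = 0.6
for pre_i in [0]:                        # suppose only pre neuron 0 spiked
    start, end = pre_slice[pre_i]
    for post_i in post_ids[start:end]:
        g[post_i] += weight
# g -> [0., 0.6, 0.6]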

Moreover, the pre2post updating logic (in the SynVec3 class) can also be replaced by pre_slice and post_ids:

[21]:
class SynVec5(bp.TwoEndConn):
    target_backend = ['numpy', 'numba', 'numba-cuda']

    @staticmethod
    def dev_g(g, t, tau):
        dg = - g / tau
        return dg

    def __init__(self, pre, post, conn, tau, weight, E, **kwargs):
        # parameters
        self.tau = tau
        self.weight = weight
        self.E = E

        # connections
        self.conn = conn(pre.size, post.size)
        self.pre_slice, self.post_ids = self.conn.requires('pre_slice', 'post_ids')

        # variables
        self.g = bp.ops.zeros(post.num)

        # initialize
        self.int_g = bp.odeint(self.dev_g)
        super(SynVec5, self).__init__(pre=pre, post=post, **kwargs)

    def update(self, _t):
        self.g = self.int_g(self.g, _t, self.tau)
        # p1: update
        for pre_i in range(self.pre.num):
            if self.pre.spike[pre_i]:
                start, end = self.pre_slice[pre_i]
                for post_i in self.post_ids[start: end]:
                    self.g[post_i] += self.weight
        # p2: output
        self.post.input += self.g * (self.E - self.post.V)
[22]:
t_syn_vec5 = run_net(neu_model=LIF, syn_model=SynVec5)
Compilation used 2.9655 s.
Start running ...
Run 10.0% used 0.000 s.
Run 20.0% used 0.010 s.
Run 30.0% used 0.020 s.
Run 40.0% used 0.020 s.
Run 50.0% used 0.030 s.
Run 60.0% used 0.040 s.
Run 70.0% used 0.040 s.
Run 80.0% used 0.050 s.
Run 90.0% used 0.060 s.
Run 100.0% used 0.060 s.
Simulation is done in 0.060 s.

../_images/tutorials_efficient_synaptic_computation_69_1.png

Speed comparison

In this tutorial, we have introduced nine different synaptic connection structures:

  1. conn_mat : The connection matrix with the shape of (pre_num, post_num).

  2. pre_ids: The connected pre-synaptic neuron indexes, a vector of length syn_num.

  3. post_ids: The connected post-synaptic neuron indexes, a vector of length syn_num.

  4. pre2syn: A list (with the length of pre_num) contains the synaptic indexes connected by each pre-synaptic neuron. pre2syn[i] denotes the synapse ids connected by the pre-synaptic neuron \(i\).

  5. post2syn: A list (with the length of post_num) contains the synaptic indexes connected by each post-synaptic neuron. post2syn[j] denotes the synapse ids connected by the post-synaptic neuron \(j\).

  6. pre2post: A list (with the length of pre_num) contains the post-synaptic indexes connected by each pre-synaptic neuron. pre2post[i] retrieves the post neurons connected by the pre neuron \(i\).

  7. post2pre: A list (with the length of post_num) contains the pre-synaptic indexes connected by each post-synaptic neuron. post2pre[j] retrieves the pre neurons connected by the post neuron \(j\).

  8. pre_slice: A two-dimensional array with shape (pre_num, 2) that stores the start and end positions in the synapse state array for each pre-synaptic neuron \(i\).

  9. post_slice: A two-dimensional array with shape (post_num, 2) that stores the start and end positions in the synapse state array for each post-synaptic neuron \(j\).

We illustrated their efficiency with a sparse, randomly connected E/I balanced network (COBA) [1]. Their speeds are summarized in the following comparison figure:

[23]:
names = ['mat 1',    'mat 2',    'vec 1',    'vec 2',    'vec 3',    'vec 4',    'vec 5']
times = [t_syn_mat1, t_syn_mat2, t_syn_vec1, t_syn_vec2, t_syn_vec3, t_syn_vec4, t_syn_vec5]
xs = list(range(len(times)))
[27]:
import matplotlib.pyplot as plt

def autolabel(rects):
    """Attach a text label above each bar in *rects*, displaying its height."""
    for rect in rects:
        height = rect.get_height()
        ax.annotate(f'{height:.3f}',
                    xy=(rect.get_x() + rect.get_width() / 2, height),
                    xytext=(0, 0.5),  # 0.5 points vertical offset
                    textcoords="offset points",
                    ha='center', va='bottom')

fig, gs = bp.visualize.get_figure(1, 1, 4, 5)

ax = fig.add_subplot(gs[0, 0])
rects = ax.bar(xs, times)
ax.set_xticks(xs)
ax.set_xticklabels(names)
ax.set_yscale('log')
plt.ylabel('Running Time [s]')
autolabel(rects)
../_images/tutorials_efficient_synaptic_computation_74_0.png

However, the speed comparison presented here does not mean that the vector-based connection is always better than the matrix-based connection. The vector-based synaptic model is well suited to JIT compilers such as Numba, whereas the matrix-based synaptic model runs best on array- or tensor-oriented backends such as NumPy, PyTorch, and TensorFlow, and is highly suitable for dense connectivity, such as all-to-all connections.


References:

[1] Vogels, T. P., & Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25(46), 10786–10795.
