Building Network Models#

@Xiaoyu Chen @Chaoming Wang

In previous sections, we have illustrated how to define neuron models with brainpy.dyn.NeuGroup and synapse models with brainpy.dyn.TwoEndConn. This section introduces brainpy.dyn.Network, the base class for building network models.

In essence, brainpy.dyn.Network is a container whose function is to compose individual elements into a whole. It is a subclass of a more general class, brainpy.dyn.Container.
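To build intuition for this container pattern, here is a minimal pure-Python sketch (illustrative only, not BrainPy's actual implementation) of a parent object that automatically gathers the child elements assigned to its attributes, similar in spirit to what brainpy.dyn.Container does:

```python
class Element:
    """Stand-in for a dynamical element (a neuron group or a synapse)."""
    def __init__(self, name):
        self.name = name

class Container:
    def nodes(self):
        # Collect every attribute that is itself an Element.
        return {v.name: v for v in self.__dict__.values()
                if isinstance(v, Element)}

class Net(Container):
    def __init__(self):
        self.E = Element('E')
        self.I = Element('I')

net = Net()
print(sorted(net.nodes()))  # the container found both children: ['E', 'I']
```

The real brainpy.dyn.Container does considerably more bookkeeping (unique naming, nested containers, variable collection), but the attribute-gathering idea is the same.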

Below, we take an excitation-inhibition (E-I) balanced network model as an example to illustrate how to compose the LIF neurons and exponential synapses defined in previous tutorials into a network.

import brainpy as bp

# bp.math.set_platform('cpu')


Excitation-Inhibition (E-I) Balanced Network#

The E-I balanced network was first proposed to explain the irregular firing patterns of cortical neurons and was later confirmed by experimental data. The network [1] we are going to implement consists of excitatory (E) neurons and inhibitory (I) neurons with a ratio of about 4:1. The biggest difference between excitatory and inhibitory synapses is the reversal potential: the reversal potential of inhibitory synapses is much lower than that of excitatory synapses. Besides, the time constant of inhibitory synapses is longer than that of excitatory synapses, which indicates that inhibitory synapses have slower dynamics.

[1] Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., et al. (2007), Simulation of networks of spiking neurons: a review of tools and strategies., J. Comput. Neurosci., 23, 3, 349–98.

# BrainPy has some built-in canonical neuron and synapse models

LIF = bp.neurons.LIF
Exponential = bp.synapses.Exponential

Two ways to define network models#

There are several ways to define a Network model.

1. Defining a network as a class#

The first way to define a network model is as follows.

class EINet(bp.Network):
  def __init__(self, num_exc, num_inh, method='exp_auto', **kwargs):
    super(EINet, self).__init__(**kwargs)

    # neurons
    pars = dict(V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.)
    E = LIF(num_exc, **pars, method=method)
    I = LIF(num_inh, **pars, method=method)
    E.V.value = bp.math.random.randn(num_exc) * 2 - 55.
    I.V.value = bp.math.random.randn(num_inh) * 2 - 55.

    # synapses
    w_e = 0.6  # excitatory synaptic weight
    w_i = 6.7  # inhibitory synaptic weight
    E_pars = dict(output=bp.synouts.COBA(E=0.), g_max=w_e, tau=5.)
    I_pars = dict(output=bp.synouts.COBA(E=-80.), g_max=w_i, tau=10.)
    # Neurons connect to each other randomly with a connection probability of 2%
    self.E2E = Exponential(E, E, bp.conn.FixedProb(prob=0.02), **E_pars, method=method)
    self.E2I = Exponential(E, I, bp.conn.FixedProb(prob=0.02), **E_pars, method=method)
    self.I2E = Exponential(I, E, bp.conn.FixedProb(prob=0.02), **I_pars, method=method)
    self.I2I = Exponential(I, I, bp.conn.FixedProb(prob=0.02), **I_pars, method=method)

    self.E = E
    self.I = I
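The comment above says that neurons connect randomly with a probability of 2%. As a hedged illustration of what such a fixed-probability connector produces (plain NumPy, not BrainPy's bp.conn.FixedProb itself), each pre-post pair can be connected independently with probability `prob`:

```python
import numpy as np

def fixed_prob_conn(num_pre, num_post, prob, seed=0):
    """Boolean connection matrix: True where a synapse exists.
    Each of the num_pre * num_post pairs is connected with probability prob."""
    rng = np.random.default_rng(seed)
    return rng.random((num_pre, num_post)) < prob

conn = fixed_prob_conn(3200, 800, prob=0.02)
# On average, each postsynaptic neuron receives ~0.02 * 3200 = 64 inputs.
print(conn.shape, conn.sum(axis=0).mean())
```

BrainPy's connectors return connectivity in several formats (matrices, index pairs) and are far more memory-efficient for large networks, but the underlying statistics are as above.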

In an instance of brainpy.dyn.Network, all elements assigned to self. can be gathered automatically by the .nodes() function.

EINet(8, 2).nodes(level=-1).subset(bp.DynamicalSystem)
{'EINet0': EINet(),
 'Exponential0': Exponential(name=Exponential0, mode=NonBatchingMode, 
             pre=LIF(name=LIF0, mode=NonBatchingMode, size=(8,)), 
             post=LIF(name=LIF0, mode=NonBatchingMode, size=(8,))),
 'Exponential1': Exponential(name=Exponential1, mode=NonBatchingMode, 
             pre=LIF(name=LIF0, mode=NonBatchingMode, size=(8,)), 
             post=LIF(name=LIF1, mode=NonBatchingMode, size=(2,))),
 'Exponential2': Exponential(name=Exponential2, mode=NonBatchingMode, 
             pre=LIF(name=LIF1, mode=NonBatchingMode, size=(2,)), 
             post=LIF(name=LIF0, mode=NonBatchingMode, size=(8,))),
 'Exponential3': Exponential(name=Exponential3, mode=NonBatchingMode, 
             pre=LIF(name=LIF1, mode=NonBatchingMode, size=(2,)), 
             post=LIF(name=LIF1, mode=NonBatchingMode, size=(2,))),
 'LIF0': LIF(name=LIF0, mode=NonBatchingMode, size=(8,)),
 'LIF1': LIF(name=LIF1, mode=NonBatchingMode, size=(2,))}

Note that in the above EINet, we do not define an update() function. This is because any subclass of brainpy.dyn.Network has a default update function, which automatically gathers the elements defined in the network and sequentially runs the update function of each element.
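The default behavior can be sketched in a few lines of plain Python (an assumed simplification, not BrainPy's source code): walk the collected elements and call each one's update once per time step.

```python
class Element:
    """Stand-in for a network element that records when it is updated."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def update(self):
        self.log.append(self.name)

class Net:
    def __init__(self):
        self.log = []
        self.syn = Element('syn', self.log)
        self.neu = Element('neu', self.log)
    def update(self):
        # Default behavior: run every child element once, in order.
        for elem in (self.syn, self.neu):
            elem.update()

net = Net()
net.update()
print(net.log)  # each element updated exactly once, in order
```

When this default ordering is not what you want, you can override update() yourself, as shown in the "Customizing update function" section later in this tutorial.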

Let’s try to simulate our defined EINet model.

net = EINet(3200, 800, method='exp_auto')  # "method": the numerical integrator method

runner = bp.DSRunner(net,
                         monitors=['E.spike', 'I.spike'],
                         inputs=[('E.input', 20.), ('I.input', 20.)])
t = runner.run(100.)  # simulate 100 ms; returns None in this BrainPy version
print(f'Used time {t} s')

# visualization
bp.visualize.raster_plot(runner.mon.ts, runner.mon['E.spike'],
                         title='Spikes of Excitatory Neurons', show=True)
bp.visualize.raster_plot(runner.mon.ts, runner.mon['I.spike'],
                         title='Spikes of Inhibitory Neurons', show=True)
Used time None s
[Figure: raster plots of the spikes of excitatory and inhibitory neurons]

2. Instantiating a network directly#

Another way to instantiate a network model is to pass the elements directly into the constructor of brainpy.Network, which receives both *args and **kwargs arguments.

# neurons
pars = dict(V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.)
E = LIF(3200, **pars)
I = LIF(800, **pars)
E.V.value = bp.math.random.randn(E.num) * 2 - 55.
I.V.value = bp.math.random.randn(I.num) * 2 - 55.

# synapses
E_pars = dict(output=bp.synouts.COBA(E=0.), g_max=0.6, tau=5.)
I_pars = dict(output=bp.synouts.COBA(E=-80.), g_max=6.7, tau=10.)
E2E = Exponential(E, E, bp.conn.FixedProb(prob=0.02), **E_pars)
E2I = Exponential(E, I, bp.conn.FixedProb(prob=0.02), **E_pars)
I2E = Exponential(I, E, bp.conn.FixedProb(prob=0.02), **I_pars)
I2I = Exponential(I, I, bp.conn.FixedProb(prob=0.02), **I_pars)

# Network
net2 = bp.Network(E2E, E2I, I2E, I2I, exc_group=E, inh_group=I)

All elements passed as **kwargs arguments can be accessed by the provided keys. These keys also matter for the following dynamics simulation (e.g., in monitors and inputs).

net2.exc_group
LIF(name=LIF4, mode=NonBatchingMode, size=(3200,))

net2.inh_group
LIF(name=LIF5, mode=NonBatchingMode, size=(800,))
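A hedged, minimal sketch of how a container might register *args and **kwargs children (the names below are illustrative, not BrainPy's internals): positional elements get auto-generated names, while keyword elements keep the user-provided key, which is why they can be referenced later by that key.

```python
class Container:
    def __init__(self, *args, **kwargs):
        self.children = {}
        for i, elem in enumerate(args):
            # auto-name positional elements by type and index
            self.children[f'{type(elem).__name__}{i}'] = elem
        for key, elem in kwargs.items():
            self.children[key] = elem   # user-provided key wins
            setattr(self, key, elem)    # also accessible as an attribute

class Group:  # stand-in for a neuron group
    pass

E, I = Group(), Group()
net = Container(object(), exc_group=E, inh_group=I)
print(net.exc_group is E, 'object0' in net.children)
```

This is why the monitors and inputs below use 'exc_group.spike' rather than an auto-generated name like 'LIF4.spike'.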

After construction, the simulation goes the same way:

runner = bp.DSRunner(net2,
                         monitors=['exc_group.spike', 'inh_group.spike'],
                         inputs=[('exc_group.input', 20.), ('inh_group.input', 20.)])
t = runner.run(100.)  # simulate 100 ms; returns None in this BrainPy version
print(f'Used time {t} s')

# visualization
bp.visualize.raster_plot(runner.mon.ts, runner.mon['exc_group.spike'],
                         title='Spikes of Excitatory Neurons', show=True)
bp.visualize.raster_plot(runner.mon.ts, runner.mon['inh_group.spike'],
                         title='Spikes of Inhibitory Neurons', show=True)
Used time None s
[Figure: raster plots of the spikes of excitatory and inhibitory neurons]

Customizing update function#

If you want to control the updating logic of a network, you can override the update(tdi) function and customize it yourself.

For the above E/I balanced network model, we can define its update function as:

class EINetV2(bp.Network):
  def __init__(self, num_exc, num_inh, method='exp_auto', **kwargs):
    super(EINetV2, self).__init__(**kwargs)

    # neurons
    self.N = LIF(num_exc + num_inh,
                 V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.,
                 method=method, V_initializer=bp.init.Normal(-55., 2.))

    # synapses
    self.Esyn = bp.synapses.Exponential(self.N[:num_exc], self.N,
                                        bp.conn.FixedProb(prob=0.02),
                                        g_max=0.6, tau=5.,
                                        output=bp.synouts.COBA(E=0.),
                                        method=method)
    self.Isyn = bp.synapses.Exponential(self.N[num_exc:], self.N,
                                        bp.conn.FixedProb(prob=0.02),
                                        g_max=6.7, tau=10.,
                                        output=bp.synouts.COBA(E=-80.),
                                        method=method)

  def update(self, tdi):
    self.Esyn(tdi)  # gather excitatory synaptic inputs
    self.Isyn(tdi)  # gather inhibitory synaptic inputs
    self.N(tdi)     # update neuronal states
    self.update_local_delays()  # IMPORTANT: refresh delays with the new states

In the above, we define one population and create E (excitatory) and I (inhibitory) projections within it. In the update function, we first update the synapse models by calling self.Esyn(tdi) and self.Isyn(tdi), which ensures that all synaptic inputs are gathered onto the neurons before the neurons themselves are updated. After updating the synapses, we update the neuronal states by calling self.N(tdi). Finally, it is worth noting that we need to update all delay variables used in this network through self.update_local_delays(). This is because delay variables depend on neuronal states: once the neuronal states have been updated, the delays must be refreshed with these new values.
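To see why the delay update must come after the state update, here is a hedged sketch (illustrative only, not BrainPy's code): a delay is essentially a rolling buffer of past states, so each step must push the new state into the buffer after it has been computed.

```python
from collections import deque

class DelayedState:
    def __init__(self, init, delay_steps):
        self.value = init
        # buffer holds the last `delay_steps` values
        self.buffer = deque([init] * delay_steps, maxlen=delay_steps)

    def delayed(self):
        return self.buffer[0]  # oldest entry = the delayed value

    def update_state(self, new_value):
        self.value = new_value

    def update_delay(self):
        # must be called AFTER update_state, so the NEW value enters the buffer
        self.buffer.append(self.value)

v = DelayedState(init=0.0, delay_steps=2)
for step_value in (1.0, 2.0, 3.0):
    v.update_state(step_value)
    v.update_delay()
print(v.delayed())  # -> 2.0, the value from two steps ago
```

If update_delay() were called before update_state(), the buffer would lag by one extra step and delayed reads would be off by one, which is exactly the kind of bug that self.update_local_delays() at the end of update(tdi) prevents.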

net = EINetV2(3200, 800)
runner = bp.DSRunner(net, monitors={'spikes': net.N.spike}, inputs=[(net.N.input, 20.)])
runner.run(100.)  # simulate 100 ms
bp.visualize.raster_plot(runner.mon.ts, runner.mon['spikes'], show=True)