Create Custom Layers

To implement a custom layer in BrainPy, you write a Python class that subclasses brainpy.simulation.layers.Module and implements at least one method: update(). This method computes the output of the module given its input.

import brainpy as bp
bp.set_platform('cpu')

import brainpy.simulation.layers as nn
import brainpy.math.jax as bm
bp.math.use_backend('jax')

The following is an example implementation of a layer that multiplies its input by 2:

class DoubleLayer(nn.Module):
    def update(self, x):
        return 2 * x

This is all that’s required to implement a functioning custom module class in BrainPy.
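To check that the layer behaves as expected, you can instantiate it and call update() directly. The following is a minimal sketch; it assumes bm.array mirrors numpy.array in the brainpy.math.jax namespace:

layer = DoubleLayer()
x = bm.array([1., 2., 3.])   # bm.array is assumed to mirror numpy.array
print(layer.update(x))       # expected output: [2. 4. 6.]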

A layer with parameters

If the layer has parameters, these should be initialized in the constructor. In BrainPy, we recommend marking parameters as brainpy.math.TrainVar.

To show how this can be used, here is a layer that multiplies its input by a matrix W (much like a typical fully connected layer in a neural network would). This matrix is a parameter of the layer. The shape of the matrix will be (num_input, num_hidden), where num_input is the number of input features and num_hidden has to be specified when the layer is created.

class DotLayer(nn.Module):
    def __init__(self, num_input, num_hidden, W=bp.initialize.Normal(), **kwargs):
        super(DotLayer, self).__init__(**kwargs)
        self.num_input = num_input
        self.num_hidden = num_hidden
        # create the weight matrix and mark it as a trainable variable
        self.W = bm.TrainVar(W([num_input, num_hidden]))

    def update(self, x):
        return bm.dot(x, self.W)

A few things are worth noting here: when overriding the constructor, we need to call the superclass constructor on the first line. This is important to ensure the layer functions properly. Note that we pass **kwargs: although this is not strictly necessary, it enables some additional features, such as giving the layer a name:

l_dot = DotLayer(10, 50, name='my_dot_layer')
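A forward pass through this layer works just like before. The following sketch assumes bm.random.random mirrors numpy.random.random; the shapes are only illustrative:

x = bm.random.random((32, 10))   # a dummy batch: 32 samples, 10 input features
y = l_dot.update(x)              # result has shape (32, 50)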

A layer with multiple behaviors

Some layers can have multiple behaviors. For example, a layer implementing dropout should be able to be switched on or off. During training, we want it to apply dropout noise to its input and scale up the remaining values, but during evaluation we don’t want it to do anything.

For this purpose, the update() method takes optional keyword arguments (kwargs). When update() is called to compute the output of a network, all specified keyword arguments are passed to the update() methods of all layers in the network.

class Dropout(nn.Module):
    def __init__(self, prob, seed=None, **kwargs):
        super(Dropout, self).__init__(**kwargs)
        self.prob = prob  # probability of keeping a unit active
        self.rng = bm.random.RandomState(seed=seed)

    def update(self, x, **kwargs):
        if kwargs.get('train', True):
            # during training, keep each unit with probability `prob`
            # and rescale the survivors so the expected value is unchanged
            keep_mask = self.rng.bernoulli(self.prob, x.shape)
            return bm.where(keep_mask, x / self.prob, 0.)
        else:
            # during evaluation, pass the input through unchanged
            return x
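Here prob is the keep probability. A short usage sketch (assuming bm.ones mirrors numpy.ones) shows how the train keyword switches the behavior:

dropout = Dropout(prob=0.8, seed=42)
x = bm.ones((2, 5))

y_train = dropout.update(x, train=True)    # roughly 20% of entries zeroed, the rest scaled by 1/0.8
y_test = dropout.update(x, train=False)    # returned unchanged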