Unified Operations

BrainPy targets multiple backends. To switch flexibly between them, BrainPy must solve the problem of unifying the operations that differ from one backend to another.

[1]:
import brainpy as bp

Internally, BrainPy relies on a small set of required operations for numerical solvers, dynamics simulation, and data construction.

[2]:
# necessary operations for numerical solvers

bp.ops.OPS_FOR_SOLVER
[2]:
['normal', 'sum', 'exp', 'shape']
[3]:
# necessary operations for neurodynamics simulation

bp.ops.OPS_FOR_SIMULATION
[3]:
['as_tensor', 'zeros', 'ones', 'arange', 'concatenate', 'where', 'reshape']
[4]:
# necessary data types

bp.ops.OPS_OF_DTYPE
[4]:
['bool', 'int', 'int32', 'int64', 'float', 'float32', 'float64']
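
As a quick check, every operation in these lists is exposed as a function under bp.ops once a backend is selected. A minimal sketch (assuming the NumPy backend):

bp.backend.set('numpy')

# 'exp' and 'as_tensor' come from the required-operation lists above
bp.ops.exp(bp.ops.as_tensor([0., 1.]))  # -> array([1., 2.71828183])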

However, if you want to unify more commonly used operations, you can call brainpy.ops.set_buffer(backend, **operations) to register them in the operation buffer.

For example, if users want to implement unified clip and sqrt operations across different backends, they can define them like this:

[5]:
# NumPy

import numpy as np

bp.ops.set_buffer('numpy', clip=np.clip, sqrt=np.sqrt)
[6]:
# PyTorch

try:
    import torch

    bp.ops.set_buffer('pytorch', clip=torch.clamp, sqrt=torch.sqrt)

except ModuleNotFoundError:
    pass
[7]:
# TensorFlow

try:
    import tensorflow as tf

    bp.ops.set_buffer('tensorflow', clip=tf.clip_by_value, sqrt=tf.math.sqrt)

except ModuleNotFoundError:
    pass
[8]:
# Numba

try:
    import numba as nb

    @nb.njit
    def nb_clip(x, x_min, x_max):
        # element-wise clip built from ufuncs that Numba supports in nopython mode
        x = np.maximum(x, x_min)
        x = np.minimum(x, x_max)
        return x

    bp.ops.set_buffer('numba', clip=nb_clip, sqrt=np.sqrt)
    bp.ops.set_buffer('numba-parallel', clip=nb_clip, sqrt=np.sqrt)

except ModuleNotFoundError:
    pass
[9]:
# Numba-CUDA

try:
    import math
    from numba import cuda

    @cuda.jit(device=True)
    def cuda_clip(x, x_min, x_max):
        # scalar clip usable inside CUDA kernels
        if x < x_min: return x_min
        elif x > x_max: return x_max
        else: return x

    bp.ops.set_buffer('numba-cuda', clip=cuda_clip, sqrt=math.sqrt)

except ModuleNotFoundError:
    pass

After the buffers are set, users can use these unified operations to define models, which will then run natively on every buffered backend.

[10]:
def test(arr):
    return bp.ops.sqrt(bp.ops.clip(arr, 0., 1.))
[11]:
bp.backend.set('numpy')

test(bp.ops.as_tensor([-1, 0.5, 2.]))
[11]:
array([0.        , 0.70710678, 1.        ])
[12]:
bp.backend.set('pytorch')

test(bp.ops.as_tensor([-1, 0.5, 2.]))
[12]:
tensor([0.0000, 0.7071, 1.0000])
[13]:
bp.backend.set('tensorflow')

test(bp.ops.as_tensor([-1, 0.5, 2.]))
[13]:
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([0.        , 0.70710677, 1.        ], dtype=float32)>
[14]:
bp.backend.set('numba')

test(bp.ops.as_tensor([-1, 0.5, 2.]))
[14]:
array([0.        , 0.70710678, 1.        ])
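
As a final sketch, the same unified operations can sit inside a model's update logic. Here bounded_decay is a hypothetical helper (not part of BrainPy) built only from the operations buffered above:

def bounded_decay(v, dt=0.1):
    # clamp the state into [0, 1], then take one square-root decay step
    v = bp.ops.clip(v, 0., 1.)
    return v - bp.ops.sqrt(v) * dt

bp.backend.set('numpy')
bounded_decay(bp.ops.as_tensor([0.2, 0.8, 1.5]))

Because bounded_decay only touches bp.ops, switching the backend (e.g., bp.backend.set('pytorch')) requires no change to the function itself.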
