# PReLU

class brainpy.dnn.PReLU(num_parameters=1, init=0.25, dtype=None)

Applies the element-wise function:

$\text{PReLU}(x) = \max(0,x) + a * \min(0,x)$

or

$\begin{split}\text{PReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ ax, & \text{ otherwise } \end{cases}\end{split}$

Here $a$ is a learnable parameter. When called without arguments, `bp.dnn.PReLU()` uses a single parameter $a$ across all input channels. When called as `bp.dnn.PReLU(nChannels)`, a separate $a$ is learned for each input channel.
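The element-wise definition above can be sketched in plain NumPy (this is an illustrative sketch of the formula, not the brainpy implementation, and `a` is fixed rather than learnable here):

```python
import numpy as np

def prelu(x, a=0.25):
    """PReLU(x) = max(0, x) + a * min(0, x), applied element-wise."""
    x = np.asarray(x, dtype=float)
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x))  # negative entries are scaled by a = 0.25
```

With the default `a = 0.25`, the negative inputs `-2.0` and `-0.5` map to `-0.5` and `-0.125`, while non-negative inputs pass through unchanged.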

Note

For good performance, weight decay should not be applied when learning $a$.

Note

The channel dimension is the second dimension of the input. When the input has fewer than 2 dimensions, there is no channel dimension and the number of channels is 1.
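The per-channel case can be illustrated with a NumPy sketch (again an assumption-labeled illustration, not the brainpy implementation): a slope vector of shape `(channels,)` is broadcast against the second axis of the input.

```python
import numpy as np

def prelu_per_channel(x, a):
    """Apply PReLU with one learnable slope per channel.

    x: array of shape (batch, channels, ...); a: array of shape (channels,).
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    # Reshape a to (1, channels, 1, ..., 1) so it aligns with axis 1.
    shape = [1] * x.ndim
    shape[1] = a.size
    a = a.reshape(shape)
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)

x = -np.ones((2, 3, 4))          # batch=2, channels=3, width=4, all entries -1
a = np.array([0.1, 0.2, 0.3])    # one slope per channel
out = prelu_per_channel(x, a)
print(out[0, :, 0])              # each channel scaled by its own slope
```

Since every input entry is `-1`, each output entry equals minus the slope of its channel, showing that the slope varies along axis 1 only.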

Parameters:
• num_parameters (int) – the number of $a$ parameters to learn. Although it takes an int as input, only two values are legitimate: 1, or the number of channels of the input. Default: 1

• init (float) – the initial value of $a$. Default: 0.25

Shape:
• Input: $(*)$, where $*$ means any number of dimensions.

• Output: $(*)$, same shape as the input.

weight

the learnable weights, of shape (num_parameters,).

Type:

Tensor

Examples:

>>> import brainpy as bp
>>> import brainpy.math as bm
>>> m = bp.dnn.PReLU()
>>> input = bm.random.randn(2)
>>> output = m(input)

update(input)

The function to specify the updating rule.

Return type:

TypeVar(ArrayType, Array, Variable, TrainVar, ndarray)