Activation Functions

This module provides commonly used activation functions.

Activation functions are a critical part of neural network design. The activation used in the hidden layers controls how well the model learns from the training data, while the activation used in the output layer determines the type of predictions the model can make.
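
As an illustration of that split, the toy forward pass below puts a rectifier on the hidden layer and a softmax on the output layer. It is only a sketch in jax.numpy; the parameter layout and the choice of ReLU plus softmax are assumptions for the example, not part of this module.

```python
import jax.numpy as jnp

def forward(params, x):
    """Tiny two-layer MLP: ReLU hidden layer, softmax output layer."""
    w1, b1, w2, b2 = params
    hidden = jnp.maximum(0.0, x @ w1 + b1)   # hidden layer: ReLU shapes what the model can learn
    logits = hidden @ w2 + b2
    logits = logits - jnp.max(logits)        # output layer: softmax turns logits into probabilities
    return jnp.exp(logits) / jnp.sum(jnp.exp(logits))
```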

celu(x[, alpha])

Continuously-differentiable exponential linear unit activation.

elu(x[, alpha])

Exponential linear unit activation function.
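
For reference, both formulas can be written directly in jax.numpy. This is a sketch of the standard definitions, not this module's implementation, and the default alpha=1.0 is assumed for illustration.

```python
import jax.numpy as jnp

def elu(x, alpha=1.0):
    # identity for positive inputs, alpha * (exp(x) - 1) otherwise
    return jnp.where(x > 0, x, alpha * jnp.expm1(x))

def celu(x, alpha=1.0):
    # like ELU, but dividing by alpha inside the exponential keeps the
    # function continuously differentiable for any alpha
    return jnp.maximum(x, 0) + jnp.minimum(0, alpha * jnp.expm1(x / alpha))
```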

gelu(x[, approximate])

Gaussian error linear unit activation function.
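
The approximate argument presumably switches between the exact Gaussian-CDF form and the common tanh approximation. Both published formulas are sketched below; the function names are illustrative only.

```python
import jax.numpy as jnp
from jax.scipy.special import erf

def gelu_exact(x):
    # x * Phi(x), where Phi is the standard normal CDF
    return 0.5 * x * (1.0 + erf(x / jnp.sqrt(2.0)))

def gelu_tanh(x):
    # the tanh approximation from the GELU paper
    return 0.5 * x * (1.0 + jnp.tanh(jnp.sqrt(2.0 / jnp.pi) * (x + 0.044715 * x**3)))
```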

glu(x[, axis])

Gated linear unit activation function.
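
GLU splits the chosen axis in half and uses one half to gate the other. A minimal sketch, assuming the axis has even length:

```python
import jax.numpy as jnp

def glu(x, axis=-1):
    # first half carries the values, second half gates them through a sigmoid
    a, b = jnp.split(x, 2, axis=axis)
    return a * (1.0 / (1.0 + jnp.exp(-b)))
```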

hard_tanh(x)

Hard tanh activation function.

hard_sigmoid(x)

Hard Sigmoid activation function.
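
Both hard variants replace the smooth curve with a piecewise-linear one that is cheap to evaluate. A sketch of the usual definitions:

```python
import jax.numpy as jnp

def hard_tanh(x):
    # clip to [-1, 1] instead of computing tanh
    return jnp.clip(x, -1.0, 1.0)

def hard_sigmoid(x):
    # relu6(x + 3) / 6, a piecewise-linear stand-in for the sigmoid
    return jnp.clip(x + 3.0, 0.0, 6.0) / 6.0
```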

hard_silu(x)

Hard SiLU activation function.

hard_swish(x)

Hard SiLU activation function (hard_swish is an alias of hard_silu).
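
Hard SiLU multiplies the input by the hard sigmoid above; sketched here in one line for illustration:

```python
import jax.numpy as jnp

def hard_silu(x):
    # x * hard_sigmoid(x); hard_swish computes the same thing
    return x * jnp.clip(x + 3.0, 0.0, 6.0) / 6.0
```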

leaky_relu(x[, negative_slope])

Leaky rectified linear unit activation function.
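
A sketch of the usual definition; the default negative_slope of 0.01 is an assumption for the example:

```python
import jax.numpy as jnp

def leaky_relu(x, negative_slope=0.01):
    # identity for non-negative inputs, a small linear slope otherwise
    return jnp.where(x >= 0, x, negative_slope * x)
```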

log_sigmoid(x)

Log-sigmoid activation function.

log_softmax(x[, axis])

Log-Softmax function.
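
The log-space variants matter mostly for numerical stability. The sketch below writes them with logaddexp and logsumexp so large logits cannot overflow; it is illustrative, not this module's code.

```python
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def log_sigmoid(x):
    # log(sigmoid(x)) rewritten as -log(1 + exp(-x))
    return -jnp.logaddexp(0.0, -x)

def log_softmax(x, axis=-1):
    # subtracting logsumexp keeps every entry finite
    return x - logsumexp(x, axis=axis, keepdims=True)
```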

one_hot(x, num_classes, *[, dtype, axis])

One-hot encodes the given indices.
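
One-hot encoding can be sketched by comparing each index against an arange; the axis and dtype handling of the real function is simplified here:

```python
import jax.numpy as jnp

def one_hot(indices, num_classes, dtype=jnp.float32):
    # row i is all zeros except for a 1 in column indices[i]
    return (jnp.asarray(indices)[..., None] == jnp.arange(num_classes)).astype(dtype)

# one_hot(jnp.array([0, 2, 1]), 3) ->
# [[1., 0., 0.],
#  [0., 0., 1.],
#  [0., 1., 0.]]
```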

normalize(x[, axis, mean, variance, epsilon])

Normalizes an array by subtracting mean and dividing by sqrt(var).
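
A sketch of the formula; statistics are computed over axis when not supplied, and the default epsilon below is assumed for illustration:

```python
import jax.numpy as jnp

def normalize(x, axis=-1, mean=None, variance=None, epsilon=1e-5):
    # subtract the mean and divide by sqrt(var + eps) along the chosen axis
    if mean is None:
        mean = jnp.mean(x, axis=axis, keepdims=True)
    if variance is None:
        variance = jnp.var(x, axis=axis, keepdims=True)
    return (x - mean) / jnp.sqrt(variance + epsilon)
```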

relu(x)

Rectified linear unit activation function.

relu6(x)

Rectified Linear Unit 6 activation function.
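
The two rectifiers are simple enough to state directly in jax.numpy:

```python
import jax.numpy as jnp

def relu(x):
    # zero for negative inputs, identity otherwise
    return jnp.maximum(x, 0)

def relu6(x):
    # ReLU additionally capped at 6
    return jnp.clip(x, 0, 6)
```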

sigmoid(x)

Sigmoid activation function.

soft_sign(x)

Soft-sign activation function.
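
Sketches of both formulas; writing the sigmoid via logaddexp is a stability choice made for this example, not necessarily what the module does:

```python
import jax.numpy as jnp

def sigmoid(x):
    # 1 / (1 + exp(-x)), expressed as exp(-log(1 + exp(-x)))
    return jnp.exp(-jnp.logaddexp(0.0, -x))

def soft_sign(x):
    # x / (1 + |x|): saturates like tanh, but only polynomially fast
    return x / (1.0 + jnp.abs(x))
```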

softmax(x[, axis])

Softmax function.

softplus(x)

Softplus activation function.
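
Both can be written compactly; the max-shift in the softmax and the logaddexp form of softplus are standard tricks to avoid overflow:

```python
import jax.numpy as jnp

def softmax(x, axis=-1):
    # shift by the max so exp never overflows; result sums to 1 along `axis`
    shifted = x - jnp.max(x, axis=axis, keepdims=True)
    e = jnp.exp(shifted)
    return e / jnp.sum(e, axis=axis, keepdims=True)

def softplus(x):
    # log(1 + exp(x)), a smooth approximation to ReLU
    return jnp.logaddexp(0.0, x)
```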

silu(x)

SiLU activation function.

swish(x)

SiLU activation function (swish is an alias of silu).
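
SiLU scales the input by its own sigmoid; a one-line sketch of that definition:

```python
import jax.numpy as jnp

def silu(x):
    # x * sigmoid(x); swish is the same function under another name
    return x / (1.0 + jnp.exp(-x))
```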

selu(x)

Scaled exponential linear unit activation.
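
SELU is ELU with fixed scale and alpha constants chosen so that activations self-normalize (Klambauer et al., 2017); a sketch:

```python
import jax.numpy as jnp

def selu(x):
    # constants from the SELU paper
    alpha = 1.6732632423543772
    scale = 1.0507009873554805
    return scale * jnp.where(x > 0, x, alpha * jnp.expm1(x))
```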

identity(x)

Identity function.

tanh(x)

Hyperbolic tangent activation function.
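
For completeness, the last two entries in a couple of lines:

```python
import jax.numpy as jnp

def identity(x):
    # passes inputs through unchanged
    return x

def tanh(x):
    # hyperbolic tangent, squashes inputs into (-1, 1)
    return jnp.tanh(x)
```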