brainpy.optimizers.RMSProp#

class brainpy.optimizers.RMSProp(lr, train_vars=None, epsilon=1e-06, rho=0.9, name=None)[source]#

Optimizer that implements the RMSprop algorithm.

RMSprop [5] and Adadelta were both developed independently around the same time, stemming from the need to resolve Adagrad's radically diminishing learning rates.

The gist of RMSprop is to:

  • Maintain a moving (discounted) average of the square of gradients

  • Divide the gradient by the root of this average

\[
\begin{split}
c_t &= \rho c_{t-1} + (1 - \rho) g^2 \\
p_t &= \frac{\eta}{\sqrt{c_t + \epsilon}} \, g
\end{split}
\]
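The update rule above can be sketched in plain NumPy as follows. This is an illustrative sketch of the math, not the class's actual implementation:

```python
import numpy as np

def rmsprop_step(param, grad, cache, lr=0.01, rho=0.9, epsilon=1e-6):
    """One RMSProp step; returns the updated parameter and cache."""
    # c_t = rho * c_{t-1} + (1 - rho) * g^2
    cache = rho * cache + (1 - rho) * grad ** 2
    # p_t = eta / sqrt(c_t + eps) * g
    param = param - lr * grad / np.sqrt(cache + epsilon)
    return param, cache
```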

The centered version additionally maintains a moving average of the gradients, and uses that average to estimate the variance.
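A common formulation of the centered variant (note that this class's signature does not expose a centered option; the equations below are the standard formulation, given for reference) is:

\[
\begin{split}
m_t &= \rho m_{t-1} + (1 - \rho) g \\
c_t &= \rho c_{t-1} + (1 - \rho) g^2 \\
p_t &= \frac{\eta}{\sqrt{c_t - m_t^2 + \epsilon}} \, g
\end{split}
\]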

References

[5] Tieleman, T. and Hinton, G. (2012). Lecture 6.5 - rmsprop, in: Neural Networks for Machine Learning. Coursera. http://www.youtube.com/watch?v=O3sxAc4hxZU (formula @5:20)

__init__(lr, train_vars=None, epsilon=1e-06, rho=0.9, name=None)[source]#
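A minimal usage sketch, assuming a single trainable variable. The variable name `w` and the gradient values are hypothetical; in practice, `grads` would come from a gradient transform applied to your loss, keyed consistently with `train_vars`:

```python
import brainpy as bp
import brainpy.math as bm

# A hypothetical trainable variable to be optimized.
w = bm.TrainVar(bm.zeros(10))

opt = bp.optimizers.RMSProp(lr=0.01, train_vars={'w': w},
                            epsilon=1e-06, rho=0.9)

# Gradients must map the same keys to arrays of matching shape
# (made-up values here, for illustration only).
grads = {'w': bm.ones(10)}
opt.update(grads)  # applies one RMSProp step to `w` in place
```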

Methods

__init__(lr[, train_vars, epsilon, rho, name])

check_grads(grads)

load_states(filename[, verbose])

Load the model states.

nodes([method, level, include_self])

Collect all children nodes.

register_implicit_nodes(nodes)

register_implicit_vars(variables)

register_vars([train_vars])

save_states(filename[, variables])

Save the model states.

train_vars([method, level, include_self])

The shortcut for retrieving all trainable variables.

unique_name([name, type_])

Get the unique name for this object.

update(grads)

vars([method, level, include_self])

Collect all variables in this node and the children nodes.

Attributes

name