# Long short-term memory

The Long Short-Term Memory (LSTM) cell can process data sequentially and keep its hidden state through time.

Long short-term memory (LSTM) units are units of a recurrent neural network (RNN). An RNN composed of LSTM units is often called an LSTM network. A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell.

LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the exploding and vanishing gradient problems that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications.

## History

LSTM was proposed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber[1] and improved in 2000 by Felix Gers' team.[2]

Among other successes, LSTM achieved record results in natural language text compression,[3] unsegmented connected handwriting recognition[4] and won the ICDAR handwriting competition (2009). LSTM networks were a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset (2013).[5]

As of 2016, major technology companies including Google, Apple, and Microsoft were using LSTM as fundamental components in new products.[6] For example, Google used LSTM for speech recognition on smartphones,[7][8] for the smart assistant Allo[9] and for Google Translate.[10][11] Apple uses LSTM for the "QuickType" function on the iPhone[12][13] and for Siri.[14] Amazon uses LSTM for Amazon Alexa.[15]

In 2017 Microsoft reported reaching 95.1% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".[16]

## Architectures

There are several architectures of LSTM units. A common architecture is composed of a memory cell, an input gate, an output gate and a forget gate.

An LSTM cell takes an input and can store it for an arbitrary period of time. Holding the stored value unchanged is equivalent to applying the identity function ($f(x) = x$) to the cell state at each step. Because the derivative of the identity function is constant, the gradient along this path does not vanish when an LSTM network is trained with backpropagation through time.
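This can be made precise with a short calculation, sketched here under the simplifying assumption that the gate activations are held fixed (notation as in the forget-gate equations below): the Jacobian of the cell state along the memory path over $n$ steps is a product of forget-gate activations,

$$
\frac{\partial c_{t}}{\partial c_{t-n}} \approx \prod_{k=0}^{n-1} \operatorname{diag}(f_{t-k}),
$$

which stays close to the identity as long as the forget gates stay close to one, whereas the analogous product for a standard RNN contains $n$ factors of a weight matrix and therefore shrinks or grows exponentially.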

The activation function of the LSTM gates is often the logistic function. Intuitively, the input gate controls the extent to which a new value flows into the cell, the forget gate controls the extent to which a value remains in the cell and the output gate controls the extent to which the value in the cell is used to compute the output activation of the LSTM unit.

There are connections into and out of the LSTM gates, a few of which are recurrent. The weights of these connections, which need to be learned during training, determine how the gates operate.

## Variants

In the equations below, the lowercase variables represent vectors. The matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ is either the input gate $i$, the output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated.

### LSTM with a forget gate

The compact forms of the equations for the forward pass of an LSTM unit with a forget gate are:[1][2]

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}
$$

where the initial values are $c_0 = 0$ and $h_0 = 0$, and the operator $\circ$ denotes the Hadamard product (element-wise product). The subscript $t$ indexes the time step.

#### Variables

• $x_t \in \mathbb{R}^d$: input vector to the LSTM unit
• $f_t \in \mathbb{R}^h$: forget gate's activation vector
• $i_t \in \mathbb{R}^h$: input gate's activation vector
• $o_t \in \mathbb{R}^h$: output gate's activation vector
• $h_t \in \mathbb{R}^h$: output vector of the LSTM unit
• $c_t \in \mathbb{R}^h$: cell state vector
• $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^h$: weight matrices and bias vector parameters which need to be learned during training

where the superscripts $d$ and $h$ refer to the number of input features and the number of hidden units, respectively.

#### Activation functions

• $\sigma_g$: sigmoid function.
• $\sigma_c$: hyperbolic tangent function.
• $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM papers suggest, the identity function $\sigma_h(x) = x$.[17][18]
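To make the equations concrete, here is a minimal NumPy sketch of one forward step (an illustrative implementation of the notation above, not code from the cited papers; the `lstm_step` name and the `params` dictionary layout are hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One forward step of an LSTM unit with a forget gate.

    x_t: input vector of size d; h_prev, c_prev: vectors of size h.
    params holds W_q (h x d), U_q (h x h) and b_q (h,) for each
    q in {f, i, o, c}, matching the equations above.
    """
    f_t = sigmoid(params["W_f"] @ x_t + params["U_f"] @ h_prev + params["b_f"])
    i_t = sigmoid(params["W_i"] @ x_t + params["U_i"] @ h_prev + params["b_i"])
    o_t = sigmoid(params["W_o"] @ x_t + params["U_o"] @ h_prev + params["b_o"])
    c_tilde = np.tanh(params["W_c"] @ x_t + params["U_c"] @ h_prev + params["b_c"])
    c_t = f_t * c_prev + i_t * c_tilde   # * is the Hadamard product
    h_t = o_t * np.tanh(c_t)             # here sigma_h = tanh
    return h_t, c_t

# Usage: initialize c_0 = h_0 = 0 and scan over a sequence.
d, h = 3, 5
rng = np.random.default_rng(0)
params = {}
for q in "fioc":
    params[f"W_{q}"] = rng.standard_normal((h, d)) * 0.1
    params[f"U_{q}"] = rng.standard_normal((h, h)) * 0.1
    params[f"b_{q}"] = np.zeros(h)
h_t, c_t = np.zeros(h), np.zeros(h)
for x_t in rng.standard_normal((10, d)):   # a sequence of 10 input vectors
    h_t, c_t = lstm_step(x_t, h_t, c_t, params)
print(h_t)
```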

### Peephole LSTM

A peephole LSTM unit has input (i.e. $i$), output (i.e. $o$), and forget (i.e. $f$) gates. Each of these gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, each computes an activation (using an activation function) of a weighted sum. $i_t$, $o_t$ and $f_t$ represent the activations of, respectively, the input, output and forget gates at time step $t$. In the usual diagram of such a unit, the three exit arrows from the memory cell $c$ to the three gates $i$, $o$ and $f$ represent the peephole connections. These peephole connections denote the contributions of the activation of the memory cell $c$ at time step $t-1$, i.e. the contribution of $c_{t-1}$ (and not $c_t$, as the picture may suggest). In other words, the gates $i$, $o$ and $f$ calculate their activations at time step $t$ (i.e., respectively, $i_t$, $o_t$ and $f_t$) also considering the activation of the memory cell $c$ at time step $t-1$, i.e. $c_{t-1}$. The single left-to-right arrow exiting the memory cell is not a peephole connection and denotes $c_t$. The small circles containing a $\times$ symbol represent an element-wise multiplication of their inputs. The large circles containing an S-like curve represent the application of a differentiable function (such as the sigmoid function) to a weighted sum. There are many other kinds of LSTMs as well.[19]

This describes an LSTM unit with peephole connections (i.e. a peephole LSTM).[17][18] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[20] In this variant, $h_{t-1}$ is not used; $c_{t-1}$ is used instead in most places:

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c x_t + U_c c_{t-1} + b_c) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}
$$
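Continuing the earlier sketch, a hypothetical `peephole_lstm_step` differs only in the recurrent term: the gates read the previous cell state $c_{t-1}$ rather than the previous output $h_{t-1}$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, c_prev, params):
    """One forward step of a peephole LSTM (same hypothetical `params`
    layout as the earlier sketch). Note h_prev is not an input at all:
    every recurrent term uses c_prev instead."""
    f_t = sigmoid(params["W_f"] @ x_t + params["U_f"] @ c_prev + params["b_f"])
    i_t = sigmoid(params["W_i"] @ x_t + params["U_i"] @ c_prev + params["b_i"])
    o_t = sigmoid(params["W_o"] @ x_t + params["U_o"] @ c_prev + params["b_o"])
    c_tilde = np.tanh(params["W_c"] @ x_t + params["U_c"] @ c_prev + params["b_c"])
    c_t = f_t * c_prev + i_t * c_tilde
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```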

### Peephole convolutional LSTM

Peephole convolutional LSTM.[21] The $*$ denotes the convolution operator:

$$
\begin{aligned}
f_t &= \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \circ c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \circ c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \circ c_{t-1} + b_o) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}
$$
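As a rough illustration of these equations, the following sketch implements one step for single-channel 2-D feature maps (hypothetical function and parameter names; real convolutional LSTMs operate on multi-channel tensors via a framework's convolution layers):

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_peephole_lstm_step(x_t, h_prev, c_prev, params):
    """One step of a peephole convolutional LSTM on single-channel 2-D maps.
    W_q and U_q are small 2-D kernels; V_q and b_q are per-pixel maps."""
    def gate(q):
        return sigmoid(convolve2d(x_t, params[f"W_{q}"], mode="same")
                       + convolve2d(h_prev, params[f"U_{q}"], mode="same")
                       + params[f"V_{q}"] * c_prev      # Hadamard peephole term
                       + params[f"b_{q}"])
    f_t, i_t, o_t = gate("f"), gate("i"), gate("o")
    # The cell update has no peephole term, matching the equations above.
    c_tilde = np.tanh(convolve2d(x_t, params["W_c"], mode="same")
                      + convolve2d(h_prev, params["U_c"], mode="same")
                      + params["b_c"])
    c_t = f_t * c_prev + i_t * c_tilde
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```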

## Training

To minimize LSTM's total error on a set of training sequences, iterative gradient descent such as backpropagation through time can be used to change each weight in proportion to the derivative of the error with respect to it. A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to $\lim_{n \to \infty} W^n = 0$ if the spectral radius of $W$ is smaller than 1.[22][23] With LSTM units, however, when error values are back-propagated from the output, the error remains in the unit's memory. This "error carousel" continuously feeds error back to each of the gates until they learn to cut off the value. Thus, regular backpropagation is effective at training an LSTM unit to remember values for long durations.
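A quick numerical illustration of this limit (not from the article): repeatedly multiplying by a matrix whose spectral radius is below 1 drives any error vector to zero exponentially fast in the time lag.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # rescale so the spectral radius is 0.9

v = rng.standard_normal(4)  # stands in for an error signal being back-propagated
for n in (1, 10, 100):
    # ||W^n v|| decays roughly like 0.9**n, i.e. exponentially in n
    print(n, np.linalg.norm(np.linalg.matrix_power(W, n) @ v))
```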

LSTM can also be trained by a combination of artificial evolution for weights to the hidden units, and pseudo-inverse or support vector machines for weights to the output units.[24] In reinforcement learning applications, LSTM can be trained by policy gradient methods, evolution strategies or genetic algorithms.

### CTC score function

Many applications use stacks of LSTM RNNs[25] and train them by connectionist temporal classification (CTC)[26] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
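In practice, such a setup can be sketched with a deep-learning framework; the following hypothetical example uses PyTorch's `nn.LSTM` and `nn.CTCLoss` (all sizes are made up for illustration):

```python
import torch
import torch.nn as nn

T, N, F, C = 50, 4, 13, 28   # time steps, batch, input features, labels (label 0 = CTC blank)

lstm = nn.LSTM(input_size=F, hidden_size=64, num_layers=2)  # a stack of LSTM layers
proj = nn.Linear(64, C)                                     # per-frame label scores
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, F)                       # input sequences, shape (T, N, F)
targets = torch.randint(1, C, (N, 10))         # label sequences (blanks excluded)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

out, _ = lstm(x)                               # (T, N, 64)
log_probs = proj(out).log_softmax(dim=-1)      # CTC expects per-frame log-probabilities
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                # gradients for all LSTM weights
print(loss.item())
```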

## Applications

Applications of LSTM include those mentioned above:

• unsegmented, connected handwriting recognition
• speech recognition
• machine translation
• natural language text compression
• time series prediction

LSTM has Turing completeness in the sense that, given enough network units and an appropriate weight matrix (which may be viewed as its program), it can compute any result that a conventional computer can compute.