Gated Recurrent Unit
The GRU (Gated Recurrent Unit) is a refinement of the standard RNN (Recurrent Neural Network), introduced by Kyunghyun Cho et al. in 2014. A GRU uses less memory and trains faster than an LSTM, but an LSTM tends to be more accurate on datasets with longer sequences.
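The memory and speed advantage comes from the GRU's simpler cell: it has only two gates (update and reset) and a single hidden state, compared with the LSTM's three gates and separate cell state. Below is a minimal NumPy sketch of one GRU step, using the standard GRU equations; the weight names (`Wz`, `Uz`, etc.) and the toy dimensions are illustrative assumptions, not from the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step: two gates (update z, reset r) vs. the LSTM's three."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde             # new hidden state

# Toy usage: run a 5-step sequence through one cell with random weights.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
params = [rng.standard_normal(shape) for shape in
          [(n_hid, n_in), (n_hid, n_hid), (n_hid,)] * 3]
h = np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):
    h = gru_cell(x, h, params)
```

Note that the new hidden state is a convex combination of the previous state and the tanh-bounded candidate, so activations stay in (-1, 1); the GRU carries no separate cell state, which is where the parameter and memory savings over the LSTM come from.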