
A. Abirami

S. David Samuel Azariya

Abstract

Recurrent neural networks (RNNs) have proven highly effective for sequence-based tasks in recent years. However, long-term dependencies and the well-known vanishing and exploding gradient problems make training RNNs uniquely difficult. Backpropagation Through Time (BPTT), the method historically used to optimize these networks, shows its limits in more complex settings. This research presents a novel modification of BPTT, Fractional-Order BPTT (FO-BPTT). FO-BPTT improves stability and convergence by addressing some of the inherent constraints of ordinary BPTT and drawing on the mathematical framework of fractional-order calculus. Extensive experiments on several datasets show that FO-BPTT outperforms its conventional counterpart on several benchmarks. Furthermore, our results point to a significant role for the fractional order in shaping learning dynamics, opening new avenues for hyperparameter optimization. This study not only paves the way for improved RNN training but also suggests that fractional-order calculus may be useful in other neural network paradigms.
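
As an illustrative aside, one common way to bring fractional-order calculus into gradient-based training is to replace the plain sum of per-timestep gradients with a Grünwald–Letnikov-weighted combination, so that older timesteps contribute with decaying weights governed by the fractional order. The sketch below is only a minimal illustration of that general idea under this assumption; it is not the authors' FO-BPTT algorithm, and the function names (`gl_weights`, `fractional_bptt_step`) are hypothetical.

```python
import numpy as np

def gl_weights(alpha, K):
    """Grünwald–Letnikov weights w_k = (-1)^k * C(alpha, k), via the standard
    recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(K)
    w[0] = 1.0
    for k in range(1, K):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def fractional_bptt_step(param, per_step_grads, alpha=0.9, lr=1e-2):
    """One illustrative parameter update (assumption, not the paper's method):
    the per-timestep gradients accumulated during truncated BPTT are combined
    with fractional-order memory weights instead of a plain sum, so earlier
    timesteps are discounted according to the fractional order alpha."""
    grads = np.asarray(per_step_grads)          # shape: (T, *param.shape), most recent first
    w = gl_weights(alpha, len(grads))           # one weight per step back in time
    frac_grad = np.tensordot(w, grads, axes=1)  # weighted combination over time
    return param - lr * frac_grad

# Toy usage: a scalar parameter and a short gradient history.
theta = 0.5
history = [0.2, 0.15, 0.1, 0.05]
theta = fractional_bptt_step(theta, history, alpha=0.8)
```

With alpha = 1 the weights reduce to (1, -1, 0, 0, ...), recovering an ordinary first-order difference, which is why the fractional order can be viewed as an extra knob on the memory of the update.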
