N-BEATS — The First Interpretable Deep Learning Model That Worked for Time Series Forecasting

An easy-to-understand deep dive into how N-BEATS works and how you can use it.

Jonte Dancker
Towards Data Science


Architecture of N-BEATS (Image taken from Oreshkin et al.).

Time series forecasting has long been an area in which Deep Learning, including Transformers, did not outperform other models.

Looking at the Makridakis M-competitions, the winning solutions always relied on statistical models. Up to and including the M4 competition, winning solutions were either purely statistical or hybrids of ML and statistical models. Pure ML approaches barely surpassed the competition baseline.

This changed with a paper published by Oreshkin et al. in 2020. The authors proposed N-BEATS, a promising pure Deep Learning approach. The model beat the winning solution of the M4 competition, making it the first pure Deep Learning approach to outperform well-established statistical approaches.

N-BEATS stands for Neural Basis Expansion Analysis for Interpretable Time Series Forecasting.
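The "basis expansion" in the name can be illustrated with a small sketch: a block in the network outputs coefficients (theta) that are combined with fixed basis vectors to produce the forecast. The polynomial (trend) basis below is one of the interpretable bases described in the N-BEATS paper; the horizon, degree, and theta values here are made up for illustration.

```python
import numpy as np

H = 6       # forecast horizon (hypothetical)
degree = 2  # polynomial degree of the trend basis

# Normalized time steps over the forecast horizon, in [0, 1)
t = np.arange(H) / H

# Polynomial basis matrix: rows are 1, t, t^2
T = np.vstack([t ** p for p in range(degree + 1)])  # shape (3, H)

# Coefficients a block's fully connected layers would output (made up here)
theta = np.array([10.0, 2.0, -1.0])

# Basis expansion: the forecast is a linear combination of the basis rows
forecast = theta @ T  # shape (H,)
print(forecast)
```

Because the basis is fixed and low-degree, the coefficients in theta are directly readable as level, slope, and curvature of the predicted trend, which is what makes this stack of the model interpretable.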

In this article, I will go through the architecture behind N-BEATS. But do not be afraid: the deep dive will be easy to understand. I will also show you how we can make the Deep Learning approach interpretable. However, it is not enough to only understand how N-BEATS works. Thus, I will show you how…
