Data-Centric Benchmarking of Neural Network Architectures for the Univariate Time Series Forecasting Task
Philipp Schlieper,
Mischa Dombrowski,
An Nguyen,
Dario Zanca and
Bjoern Eskofier
Additional contact information
Philipp Schlieper: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University, 91052 Erlangen, Germany
Mischa Dombrowski: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University, 91052 Erlangen, Germany
An Nguyen: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University, 91052 Erlangen, Germany
Dario Zanca: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University, 91052 Erlangen, Germany
Bjoern Eskofier: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University, 91052 Erlangen, Germany
Forecasting, 2024, vol. 6, issue 3, 1-30
Abstract:
Time series forecasting has witnessed a rapid proliferation of novel neural network approaches in recent times. However, benchmarking results are generally inconsistent, and it is difficult to determine in which cases one approach fits better than another. Therefore, we propose adopting a data-centric perspective for benchmarking neural network architectures on time series forecasting by generating ad hoc synthetic datasets. In particular, we combine sinusoidal functions to synthesize univariate time series data for multi-input-multi-output prediction tasks. We compare the most popular architectures for time series, namely long short-term memory (LSTM) networks, convolutional neural networks (CNNs), and transformers, and directly connect their performance with different controlled data characteristics, such as the sequence length, noise and frequency levels, and delay length. Our findings suggest that transformers are the best architecture for dealing with different delay lengths. In contrast, for different noise and frequency levels and different sequence lengths, LSTM is the best-performing architecture by a significant margin. Based on our insights, we derive recommendations that allow machine learning (ML) practitioners to decide which architecture to apply, given the dataset’s characteristics.
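The abstract describes synthesizing univariate series as combinations of sinusoidal functions with controlled noise, frequency, and delay characteristics, sliced into multi-input-multi-output windows. The sketch below illustrates one plausible way to set up such a generator in Python; the function names, parameter ranges, and windowing scheme are illustrative assumptions and do not reproduce the paper's exact procedure.

```python
import numpy as np

def synthesize_series(n_steps=1000, n_components=3, noise_std=0.1,
                      max_freq=0.05, seed=0):
    """Generate one synthetic univariate series as a sum of random sinusoids
    with a controlled noise level and frequency range (illustrative only)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps)
    series = np.zeros(n_steps)
    for _ in range(n_components):
        freq = rng.uniform(0.001, max_freq)        # controlled frequency range
        amp = rng.uniform(0.5, 1.5)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        series += amp * np.sin(2.0 * np.pi * freq * t + phase)
    series += rng.normal(0.0, noise_std, size=n_steps)  # controlled noise level
    return series

def make_mimo_windows(series, input_len=64, output_len=16, delay=0):
    """Slice the series into multi-input-multi-output (MIMO) windows where
    the target window starts `delay` steps after the input window ends."""
    X, y = [], []
    last_start = len(series) - input_len - delay - output_len
    for start in range(last_start + 1):
        X.append(series[start:start + input_len])
        target_start = start + input_len + delay
        y.append(series[target_start:target_start + output_len])
    return np.stack(X), np.stack(y)

# Example: a noisy, low-frequency series with a 10-step input-target delay.
series = synthesize_series(noise_std=0.2, max_freq=0.02)
X, y = make_mimo_windows(series, input_len=64, output_len=16, delay=10)
print(X.shape, y.shape)  # (911, 64) (911, 16)
```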
Keywords: deep learning; time series; neural networks; model selection; data synthesis; univariate forecasting
JEL-codes: A1 B4 C0 C1 C2 C3 C4 C5 C8 M0 Q2 Q3 Q4
Date: 2024
Downloads:
https://www.mdpi.com/2571-9394/6/3/37/pdf (application/pdf)
https://www.mdpi.com/2571-9394/6/3/37/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jforec:v:6:y:2024:i:3:p:37-747:d:1464453