A Temporal Linear Network for Time Series Forecasting
Rémi Genet and
Hugo Inzirillo
Additional contact information
Rémi Genet: DRM - Dauphine Recherches en Management - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique
Hugo Inzirillo: CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - X - École polytechnique - IP Paris - Institut Polytechnique de Paris - ENSAE Paris - École Nationale de la Statistique et de l'Administration Économique - IP Paris - Institut Polytechnique de Paris - CNRS - Centre National de la Recherche Scientifique
Working Papers from HAL
Abstract:
Recent research has challenged the necessity of complex deep learning architectures for time series forecasting, demonstrating that simple linear models can often outperform sophisticated approaches. Building upon this insight, we introduce a novel architecture, the Temporal Linear Net (TLN), that extends the capabilities of linear models while maintaining interpretability and computational efficiency. TLN is designed to effectively capture both temporal and feature-wise dependencies in multivariate time series data. Our approach is a variant of TSMixer that maintains strict linearity throughout its architecture. TLN removes activation functions, introduces specialized kernel initializations, and incorporates dilated convolutions to handle various time scales, while preserving the linear nature of the model. Unlike transformer-based models that may lose temporal information due to their permutation-invariant nature, TLN explicitly preserves and leverages the temporal structure of the input data. A key innovation of TLN is its ability to compute an equivalent linear model, offering a level of interpretability not found in more complex architectures such as TSMixer. This feature allows for seamless conversion between the full TLN model and its linear equivalent, facilitating both training flexibility and inference optimization. Importantly, we demonstrate that TLN outperforms standard linear regression models. This superior performance is attributed to the unique training structure of the TLN and the inherent relationship of weights established during model construction, which cannot be found in conventional regression approaches. Our findings suggest that TLN strikes a balance between the simplicity of linear models and the expressiveness needed for complex time series forecasting tasks. It offers improved interpretability compared to TSMixer while demonstrating greater resilience in multivariate cases than basic linear models and standard linear regression.
This work contributes to the ongoing discussion about the trade-offs between model complexity and performance in time series analysis, and opens new possibilities for developing efficient and interpretable forecasting models that leverage the strengths of neural network architectures and linear models.
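The abstract's central claim, that a strictly linear stack of dilated convolutions (no activations, no bias) can always be collapsed into a single equivalent linear map, can be illustrated with a minimal sketch. The code below is not the authors' implementation; the layer shapes, dilation schedule, and probing-by-basis-vectors approach are illustrative assumptions. It builds a small linear stack of dilated causal convolutions and recovers the equivalent matrix by pushing unit basis inputs through the network.

```python
import numpy as np

rng = np.random.default_rng(0)


def dilated_causal_conv(x, w, dilation):
    """Causal 1D convolution with dilation and no activation.

    x: (T, C_in) input series, w: (K, C_in, C_out) kernel.
    Output at time t only uses inputs at t, t-d, t-2d, ...
    """
    T, _ = x.shape
    K, _, c_out = w.shape
    out = np.zeros((T, c_out))
    for t in range(T):
        for k in range(K):
            s = t - k * dilation
            if s >= 0:
                out[t] += x[s] @ w[k]
    return out


class LinearStack:
    """Strictly linear stack: dilated convs, no activations or bias.

    Dilation doubles per layer (an assumption, mirroring common
    dilated-conv designs) so deeper layers see longer time scales.
    """

    def __init__(self, channels=(3, 8, 8, 1), kernel=2):
        self.layers = []
        for i, (cin, cout) in enumerate(zip(channels[:-1], channels[1:])):
            w = rng.normal(scale=0.3, size=(kernel, cin, cout))
            self.layers.append((w, 2 ** i))

    def forward(self, x):
        for w, d in self.layers:
            x = dilated_causal_conv(x, w, d)
        return x

    def equivalent_linear(self, T, c_in):
        """Because every layer is linear, the whole network is one
        matrix A with y.ravel() == A @ x.ravel(); probe it column by
        column with unit basis inputs."""
        cols = []
        for j in range(T * c_in):
            e = np.zeros(T * c_in)
            e[j] = 1.0
            cols.append(self.forward(e.reshape(T, c_in)).ravel())
        return np.stack(cols, axis=1)


T, C = 16, 3
net = LinearStack(channels=(C, 8, 8, 1))
x = rng.normal(size=(T, C))
A = net.equivalent_linear(T, C)

# The collapsed linear model reproduces the full network exactly.
assert np.allclose(net.forward(x).ravel(), A @ x.ravel())
```

The matrix `A` is what makes such a model interpretable: each entry states directly how much one input observation contributes to one forecast, something a network with nonlinear activations cannot offer in closed form.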
Keywords: Machine Learning (search for similar items in EconPapers)
Date: 2025-02-01
Note: View the original document on HAL open archive server: https://hal.science/hal-04923919v1
Downloads: (external link)
https://hal.science/hal-04923919v1/document (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:hal:wpaper:hal-04923919
More papers in Working Papers from HAL
Bibliographic data for series maintained by CCSD.