Regularised gradient boosting for financial time-series modelling

Alexandros Agapitos, Anthony Brabazon and Michael O’Neill
Additional contact information
Alexandros Agapitos: University College Dublin
Anthony Brabazon: University College Dublin
Michael O’Neill: University College Dublin

Computational Management Science, 2017, vol. 14, issue 3, No 4, 367-391

Abstract: Gradient Boosting (GB) learns an additive expansion of simple basis-models. It does so by iteratively fitting an elementary model to the negative gradient of a loss function with respect to the expansion’s value at each training data-point. For the squared-error loss function, the negative gradient is simply the ordinary residual at a given training data-point. Studies have demonstrated that running GB for hundreds of iterations can lead to overfitting, and several authors have shown that when noise is added to the training data, generalisation degrades even with relatively few basis-models. Regularisation is realised by shrinking every newly-added basis-model in the expansion. This paper demonstrates that GB with shrinkage-based regularisation is still prone to overfitting on noisy datasets. We use a transformation based on a sigmoidal function to reduce the influence of extreme values in the residuals of a GB iteration without removing them from the training set. This extension is built on top of shrinkage-based regularisation. Simulations on synthetic, noisy data show that the proposed method slows down overfitting and reduces the generalisation error of regularised GB. The proposed method is then applied to the inherently noisy domain of financial time-series modelling. Results suggest that for the majority of datasets the method generalises better than standard regularised GB, as well as a range of other time-series modelling methods.
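As an illustration only (not the authors' code), the sketch below shows shrinkage-regularised gradient boosting under squared-error loss, with a sigmoidal transformation damping extreme residuals before each basis-model is fitted. The tanh-based transform, the regression-stump basis-models, and all hyperparameter values are assumptions made for this sketch; the paper's exact sigmoidal function and basis-models may differ.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def damp(residuals, scale=1.0):
    # Assumed sigmoidal transform: squashes each residual into
    # (-scale, +scale), bounding the influence of extreme values
    # without removing the corresponding points from training.
    return scale * np.tanh(residuals / scale)

def fit_gb(X, y, n_iter=100, nu=0.1, scale=1.0):
    # Stagewise additive modelling: under squared-error loss the
    # negative gradient at each data-point is the ordinary residual.
    F = np.full(len(y), y.mean())          # initial constant model
    basis_models = []
    for _ in range(n_iter):
        target = damp(y - F, scale)        # damped residuals
        h = DecisionTreeRegressor(max_depth=2).fit(X, target)
        F += nu * h.predict(X)             # nu = shrinkage factor
        basis_models.append(h)
    return y.mean(), basis_models

def predict_gb(f0, basis_models, X, nu=0.1):
    return f0 + nu * sum(h.predict(X) for h in basis_models)

In the paper's financial setting, X would typically hold lagged observations of a series and y the next-period value; smaller scale values damp outlying residuals more aggressively, trading bias for robustness to noise.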

Keywords: Boosting algorithms; Gradient boosting; Stagewise additive modelling; Regularisation; Financial time-series modelling; Financial forecasting; Feedforward neural networks; Noisy data; Ensemble learning
Date: 2017

Downloads:
http://link.springer.com/10.1007/s10287-017-0280-y Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:comgts:v:14:y:2017:i:3:d:10.1007_s10287-017-0280-y

Ordering information: This journal article can be ordered from
http://www.springer. ... ch/journal/10287/PS2

DOI: 10.1007/s10287-017-0280-y

Computational Management Science is currently edited by Ruediger Schultz

More articles in Computational Management Science from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:spr:comgts:v:14:y:2017:i:3:d:10.1007_s10287-017-0280-y