
A new class of information criteria for improved prediction in the presence of training/validation data heterogeneity

Javier E. Flores, Joseph E. Cavanaugh and Andrew A. Neath
Additional contact information
Javier E. Flores: Pacific Northwest National Laboratory
Joseph E. Cavanaugh: University of Iowa
Andrew A. Neath: Southern Illinois University Edwardsville

Computational Statistics, 2025, vol. 40, issue 5, No 5, 2389-2423

Abstract: Information criteria provide a cogent approach for identifying models that provide an optimal balance between the competing objectives of goodness-of-fit and parsimony. Models that better conform to a dataset are often more complex, yet such models are plagued by greater variability in estimation and prediction. Conversely, overly simplistic models reduce variability at the cost of increases in bias. Asymptotically efficient criteria are those that, for large samples, select the fitted candidate model whose predictors minimize the mean squared prediction error, optimizing between prediction bias and variability. In the context of prediction, asymptotically efficient criteria are thus a preferred tool for model selection, with the Akaike information criterion (AIC) being among the most widely used. However, asymptotic efficiency relies upon the assumption of a panel of validation data generated independently from, but identically to, the set of training data. We argue that assuming identically distributed training and validation data is misaligned with the premise of prediction and often violated in practice. This is most apparent in a regression context, where assuming training/validation data homogeneity requires identical panels of regressors. We therefore develop a new class of predictive information criteria (PIC) that do not assume training/validation data homogeneity and are shown to generalize AIC to the more practically relevant setting of training/validation data heterogeneity. The analytic properties and predictive performance of these new criteria are explored within the traditional regression framework. We consider both simulated and real-data settings. Software for implementing these methods is provided in the R package, picR, available through CRAN.
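To make the selection paradigm concrete, the following is a minimal illustrative sketch of AIC-based model selection in Gaussian linear regression — not the authors' PIC methodology or their picR implementation. It uses the standard result that, for an OLS fit with k design columns, AIC equals n·log(RSS/n) + 2k up to an additive constant, and selects the candidate with the smallest score.

```python
# Hedged sketch: AIC model selection among polynomial regression candidates.
# This illustrates the classical criterion the paper generalizes, not PIC itself.
import numpy as np

def aic_linear(y, X):
    """AIC (up to an additive constant) for an OLS fit of y on X:
    n * log(RSS / n) + 2k, with k the number of design columns."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)  # data generated from a degree-1 model

# Candidate designs: polynomials of degree 0 (intercept only) through 3.
candidates = {p: np.column_stack([x ** d for d in range(p + 1)])
              for p in range(4)}
scores = {p: aic_linear(y, X) for p, X in candidates.items()}
best = min(scores, key=scores.get)
print(best)
```

Higher-degree candidates fit the training sample at least as well (lower RSS), but the 2k penalty trades that gain against added estimation variability, mirroring the bias/variability trade-off described above. Note this sketch implicitly assumes homogeneous training and validation regressors — exactly the assumption the paper's PIC class relaxes.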

Keywords: Akaike information criterion; Asymptotic efficiency; Bias/variability trade-off; Kullback discrepancy; Kullback–Leibler information; Model selection; Predictive modeling
Date: 2025

Downloads: (external link)
http://link.springer.com/10.1007/s00180-024-01559-1 Abstract (text/html)
Access to the full text of the articles in this series is restricted.



Persistent link: https://EconPapers.repec.org/RePEc:spr:compst:v:40:y:2025:i:5:d:10.1007_s00180-024-01559-1

Ordering information: This journal article can be ordered from
http://www.springer.com/statistics/journal/180/PS2

DOI: 10.1007/s00180-024-01559-1


Computational Statistics is currently edited by Wataru Sakamoto, Ricardo Cao and Jürgen Symanzik

More articles in Computational Statistics from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Handle: RePEc:spr:compst:v:40:y:2025:i:5:d:10.1007_s00180-024-01559-1