EconPapers

Off-Training-Set Error for the Gibbs and the Bayes Optimal Generalizers

Tal Grossman, Emanuel Knill and David Wolpert

Working Papers from Santa Fe Institute

Abstract: In this paper we analyze the average off-training-set behavior of the Bayes-optimal and Gibbs learning algorithms. We do this by exploiting the concept of refinement, which concerns the relationship between probability distributions. For non-uniform sampling distributions the expected off-training-set error for both learning algorithms can increase with training set size. However, we show in this paper that for uniform sampling and either algorithm, the expected error is a non-increasing function of training set size. For uniform sampling distributions, we also characterize the priors for which the expected error of the Bayes-optimal algorithm stays constant. In addition, we show that when the target function is fixed, expected off-training-set error can increase with training set size if and only if the expected error averaged over all targets decreases with training set size. Our results hold for arbitrary noise and arbitrary loss functions.
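The abstract's claim that, for uniform sampling, certain priors make the Bayes-optimal expected off-training-set error constant in training set size can be checked numerically on a toy problem. The sketch below is an illustrative special case only (noise-free targets, zero-one loss, a four-point input space): it enumerates all boolean target functions, conditions the prior on the observed data, and computes the exact expected off-training-set error of the posterior-majority (Bayes-optimal) predictor. All names and the problem setup are hypothetical, not from the paper.

```python
import itertools

X = range(4)  # hypothetical 4-point input space
FUNCS = list(itertools.product([0, 1], repeat=4))  # all 16 boolean targets

def posterior(train, prior):
    """Prior restricted to functions consistent with the (noise-free) data."""
    w = [p if all(f[x] == y for x, y in train) else 0.0
         for f, p in zip(FUNCS, prior)]
    z = sum(w)
    return [wi / z for wi in w]

def bayes_ots_error(m, prior):
    """Exact expected off-training-set zero-one error of the Bayes-optimal
    (posterior-majority) predictor, averaged over targets drawn from the
    prior and over length-m i.i.d. uniform training inputs."""
    total, weight = 0.0, 0.0
    for f, pf in zip(FUNCS, prior):                  # true target function
        for xs in itertools.product(X, repeat=m):    # training inputs
            ots = [x for x in X if x not in xs]      # off-training-set points
            if not ots:
                continue  # training set covers X; no OTS error defined
            post = posterior([(x, f[x]) for x in xs], prior)
            err = 0.0
            for x in ots:
                p1 = sum(q for g, q in zip(FUNCS, post) if g[x] == 1)
                pred = 1 if p1 > 0.5 else 0
                err += pred != f[x]
            w = pf / 4 ** m
            total += w * err / len(ots)
            weight += w
    return total / weight

uni = [1.0 / 16] * 16  # uniform prior over all targets
print(bayes_ots_error(1, uni), bayes_ots_error(2, uni))  # both 0.5
```

Under the uniform prior the expected error comes out to exactly 0.5 for every training set size, consistent with the paper's statement that for uniform sampling some priors yield a constant Bayes-optimal expected error; a Gibbs learner would instead sample a single function from `posterior` and predict with it.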

Date: 1995-02


Persistent link: https://EconPapers.repec.org/RePEc:wop:safiwp:95-02-023



Handle: RePEc:wop:safiwp:95-02-023