
On the Use of Evidence in Neural Networks

David H. Wolpert

Working Papers from Santa Fe Institute

Abstract: The Bayesian "evidence" approximation, which is closely related to generalized maximum likelihood, has recently been employed to determine the noise and weight-penalty terms for training neural nets. This paper shows that it is far simpler to perform the exact calculation than it is to set up the evidence approximation. Moreover, unlike that approximation, the exact result does not have to be re-calculated for every new data set. Nor does it require running complex numerical computer code (the exact result is closed form). In addition, it turns out that for neural nets, the evidence procedure's MAP estimate is in toto approximation error. Another advantage of the exact analysis is that it does not lead to incorrect intuition, like the claim that one can "evaluate different priors in light of the data." This paper ends by discussing sufficiency conditions for the evidence approximation to hold, along with the implications of those conditions. Although couched in terms of neural nets, the analysis of this paper holds for any Bayesian interpolation problem.
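The contrast the abstract draws can be made concrete with a minimal sketch. The toy model below is a hypothetical illustration, not an example from the paper: a single weight w with prior N(0, 1/alpha) and data y_i = w + noise with known noise variance. The "evidence" procedure fixes the hyperparameter alpha by maximizing p(D | alpha) and uses the MAP weight at that single alpha; the exact calculation instead averages the posterior over alpha under a hyperprior (here, flat on a grid).

```python
import math

def log_evidence(ys, alpha, sigma2):
    """Closed-form log p(D | alpha) for this conjugate Gaussian model."""
    n, s = len(ys), sum(ys)
    prec = alpha + n / sigma2          # posterior precision of w
    mean = (s / sigma2) / prec         # posterior mean of w
    ll = (-0.5 * n * math.log(2 * math.pi * sigma2)
          - sum(y * y for y in ys) / (2 * sigma2))
    return ll + 0.5 * math.log(alpha / prec) + 0.5 * prec * mean ** 2

def posterior_mean(ys, alpha, sigma2):
    """Posterior (= MAP, since the posterior is Gaussian) weight at fixed alpha."""
    n, s = len(ys), sum(ys)
    return (s / sigma2) / (alpha + n / sigma2)

# Made-up data and noise variance for illustration only.
ys, sigma2 = [1.2, 0.8, 1.1, 0.9], 0.25
alphas = [0.01 * 2 ** k for k in range(20)]   # grid over the hyperparameter

# Evidence approximation: one "best" alpha, re-optimized for every data set.
a_star = max(alphas, key=lambda a: log_evidence(ys, a, sigma2))
w_evidence = posterior_mean(ys, a_star, sigma2)

# Exact calculation: marginalize alpha (flat hyperprior on the grid).
logs = [log_evidence(ys, a, sigma2) for a in alphas]
top = max(logs)                                # subtract max for stability
wts = [math.exp(l - top) for l in logs]
w_exact = sum(wt * posterior_mean(ys, a, sigma2)
              for wt, a in zip(wts, alphas)) / sum(wts)

print(f"evidence-approx MAP weight: {w_evidence:.4f}")
print(f"exact marginalized weight:  {w_exact:.4f}")
```

In this conjugate model both quantities are cheap, but only the evidence route requires a per-dataset optimization over alpha; the marginalized answer also differs from the single-alpha MAP estimate, illustrating the approximation error the abstract refers to.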

Date: 1993-02
Citations: View citations in EconPapers (1)

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:wop:safiwp:93-02-007

Access Statistics for this paper

More papers in Working Papers from Santa Fe Institute Contact information at EDIRC.
Bibliographic data for series maintained by Thomas Krichel.

 
Page updated 2025-03-22
Handle: RePEc:wop:safiwp:93-02-007