Performances in supervised learning

Marco A.P. Idiart

Physica A: Statistical Mechanics and its Applications, 2000, vol. 285, issue 3, 566-578

Abstract: The performance of supervised learning algorithms is discussed in terms of computational effort. We show using numerical simulations that several off-line algorithms, implemented as iterated on-line algorithms and designed to reach at least the border of the version space, reproduce the 0.5/α behavior of the generalization error attributed to maximal stability algorithms. However, if the cost of attaining a given generalization level is measured by quantities related to computation, such as the number of example presentations or the number of synaptic corrections, the performance of these algorithms falls below that of most on-line strategies. We also show that mixed strategies for presenting the training set do not improve learning.
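The cost measures discussed in the abstract (example presentations and synaptic corrections) can be illustrated with a minimal teacher-student perceptron simulation. This is an illustrative sketch only, not the algorithms studied in the paper; the dimension N, training-set size P, and sweep limit are arbitrary assumptions:

```python
import math
import random

random.seed(0)

N = 50   # input dimension (assumed for illustration)
P = 500  # number of training examples, so alpha = P/N = 10

def sign(x):
    return 1 if x >= 0 else -1

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A random "teacher" perceptron defines the target rule
teacher = [random.gauss(0, 1) for _ in range(N)]

# Random Gaussian examples labelled by the teacher
examples = [[random.gauss(0, 1) for _ in range(N)] for _ in range(P)]
labels = [sign(dot(teacher, x)) for x in examples]

# On-line perceptron learning: sweep through the training set,
# updating the student's weights only on misclassified examples,
# while counting presentations and synaptic corrections.
student = [0.0] * N
presentations = 0
corrections = 0
for sweep in range(20):
    errors = 0
    for x, y in zip(examples, labels):
        presentations += 1
        if sign(dot(student, x)) != y:
            corrections += 1
            errors += 1
            for i in range(N):
                student[i] += y * x[i]
    if errors == 0:  # training set learned perfectly
        break

# For spherical random inputs, the generalization error is
# arccos(R)/pi, where R is the teacher-student overlap.
overlap = dot(student, teacher) / math.sqrt(
    dot(student, student) * dot(teacher, teacher))
eg = math.acos(overlap) / math.pi
print(f"presentations={presentations} corrections={corrections} eg={eg:.3f}")
```

Tracking `presentations` and `corrections` separately from the final generalization error `eg` is exactly the kind of computational-cost accounting the abstract compares across algorithms.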

Date: 2000
References: View complete reference list from CitEc

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0378437100002910
Full text for ScienceDirect subscribers only. The journal offers the option of making the article available online on ScienceDirect for a fee of $3,000.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:eee:phsmap:v:285:y:2000:i:3:p:566-578

DOI: 10.1016/S0378-4371(00)00291-0

Access Statistics for this article

Physica A: Statistical Mechanics and its Applications is currently edited by K. A. Dawson, J. O. Indekeu, H. E. Stanley and C. Tsallis

More articles in Physica A: Statistical Mechanics and its Applications from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Page updated 2025-03-19
Handle: RePEc:eee:phsmap:v:285:y:2000:i:3:p:566-578