Biased learning in Boolean perceptrons
Osame Kinouchi and
Nestor Caticha
Physica A: Statistical Mechanics and its Applications, 1992, vol. 185, issue 1, 411-416
Abstract:
The generalization ability of Hebbian Boolean perceptrons can be improved by a kind of feedback mechanism in which the student network judges the difficulty of a new example from its previous experience. It is shown that by giving a higher weight to the hard examples, both generalization and learning abilities can be increased. Analytical as well as numerical results are presented both for examples drawn at random and for examples selected in an intelligent way.
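The mechanism described in the abstract can be illustrated with a toy simulation: a student perceptron learns a teacher's Boolean rule by Hebbian updates, weighting each example by how hard the student finds it. The weighting function `weight`, the "hard if aligned field h ≤ 0" criterion, and the parameter `lam` below are illustrative assumptions, not the specific rule analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100   # input dimension
P = 2000  # number of training examples

# Teacher and student perceptrons; Boolean output is sign(w . xi)
teacher = rng.standard_normal(N)
student = np.zeros(N)

def weight(h, lam=2.0):
    # Illustrative difficulty bias (assumption, not the paper's rule):
    # examples with non-positive aligned field h, i.e. examples the
    # current student misclassifies, get a larger update weight.
    return 1.0 + lam * (h <= 0)

for _ in range(P):
    xi = np.where(rng.random(N) < 0.5, 1.0, -1.0)  # random Boolean input
    sigma = np.sign(teacher @ xi)                  # teacher's label
    h = sigma * (student @ xi) / np.sqrt(N)        # student's aligned field
    # Biased Hebbian update: plain Hebb term scaled by the difficulty weight
    student += weight(h) * sigma * xi / np.sqrt(N)

# Generalization quality: overlap between student and teacher directions
R = (student @ teacher) / (np.linalg.norm(student) * np.linalg.norm(teacher))
print(float(R))
```

With these parameters the student–teacher overlap R approaches 1 as the number of examples grows; setting `lam=0` recovers plain Hebbian learning for comparison.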
Date: 1992
Downloads: http://www.sciencedirect.com/science/article/pii/0378437192904826
Full text for ScienceDirect subscribers only. The journal offers the option of making the article available online on ScienceDirect for a fee of $3,000.
Persistent link: https://EconPapers.repec.org/RePEc:eee:phsmap:v:185:y:1992:i:1:p:411-416
DOI: 10.1016/0378-4371(92)90482-6
Physica A: Statistical Mechanics and its Applications is currently edited by K. A. Dawson, J. O. Indekeu, H. E. Stanley and C. Tsallis
More articles in Physica A: Statistical Mechanics and its Applications from Elsevier