EconPapers

Emergent scale invariance in neural networks

M.I. Katsnelson, V. Vanchurin and T. Westerhout

Physica A: Statistical Mechanics and its Applications, 2023, vol. 610, issue C

Abstract: We demonstrate, both analytically and numerically, that the learning dynamics of neural networks are generically attracted towards a scale-invariant state. The effect can be modeled with quartic interactions between non-trainable variables (e.g. states of neurons) and trainable variables (e.g. weight matrix). Non-trainable variables are rapidly driven towards stochastic equilibrium, while trainable variables are slowly driven towards a learning equilibrium described by a scale-invariant distribution over a wide range of scales.
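The abstract's central claim — that trainable variables settle into a scale-invariant (power-law) distribution over a wide range of scales — can in principle be checked numerically by estimating the tail exponent of the empirical weight distribution. As a minimal illustrative sketch (not the authors' code; the function names, parameters, and use of synthetic Pareto data are our own assumptions), the Hill estimator gives a stable exponent across tail fractions exactly when the distribution is scale invariant:

```python
import random
import math

random.seed(42)

def pareto_sample(alpha, n):
    """Draw n samples from a Pareto law P(w) ~ w^{-(alpha+1)}, w >= 1,
    via inverse-CDF sampling: w = u^{-1/alpha} for u ~ Uniform(0, 1)."""
    return [random.random() ** (-1.0 / alpha) for _ in range(n)]

def hill_estimator(data, k):
    """Hill estimate of the tail exponent alpha from the k largest samples."""
    tail = sorted(data)[-k:]
    w_min = tail[0]
    return k / sum(math.log(w / w_min) for w in tail)

# Synthetic stand-in for a trained network's weight magnitudes.
data = pareto_sample(alpha=2.0, n=50_000)

# A scale-invariant distribution "looks the same" at every scale, so the
# estimated exponent should barely change between the top 5% and top 10%.
print(round(hill_estimator(data, 2_500), 2))
print(round(hill_estimator(data, 5_000), 2))
```

For a distribution that is only approximately a power law (e.g. log-normal), the Hill estimate drifts systematically as k grows, which is one practical way to distinguish genuine scale invariance from a heavy but curved tail.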

Keywords: Statistical physics; Scale invariance; Deep learning; Neural networks; Saddle-point approximation; Theory of learning
Date: 2023

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0378437122009591
Full text for ScienceDirect subscribers only. The journal offers the option of making the article openly available on ScienceDirect for a fee of $3,000.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:eee:phsmap:v:610:y:2023:i:c:s0378437122009591

DOI: 10.1016/j.physa.2022.128401


Physica A: Statistical Mechanics and its Applications is currently edited by K. A. Dawson, J. O. Indekeu, H. E. Stanley and C. Tsallis

More articles in Physica A: Statistical Mechanics and its Applications from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Page updated 2025-03-19
Handle: RePEc:eee:phsmap:v:610:y:2023:i:c:s0378437122009591