The Continuous Formulation of Shallow Neural Networks as Wasserstein-Type Gradient Flows

Xavier Fernández-Real and Alessio Figalli
Additional contact information
Xavier Fernández-Real: EPFL
Alessio Figalli: ETH Zürich, Department of Mathematics

A chapter in Analysis at Large, 2022, pp 29-57 from Springer

Abstract: It has recently been observed that the training of a single-hidden-layer artificial neural network can be reinterpreted as a Wasserstein gradient flow, in the weights, of the error functional. In the limit, as the number of parameters tends to infinity, this gives rise to a family of parabolic equations. This survey discusses this relation, focusing on the associated theoretical aspects of interest to the mathematical community, and provides a list of interesting open problems.
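The particle picture behind the abstract can be sketched in code. Below is a minimal, hypothetical illustration (not from the chapter): a single-hidden-layer ReLU network in mean-field scaling, f(x) = (1/N) Σᵢ aᵢ σ(wᵢx + bᵢ), trained by plain gradient descent on a toy regression task. Each neuron plays the role of a particle, and gradient descent on the parameters (with the step size rescaled by N) is the finite-particle discretization of the Wasserstein gradient flow on the empirical measure of the weights. The target function, learning rate, and sizes are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # number of hidden neurons ("particles")

# Toy regression target: y = sin(pi x) on [-1, 1].
X = np.linspace(-1.0, 1.0, 64)
Y = np.sin(np.pi * X)
M = X.size

# Each neuron i is a particle theta_i = (a_i, w_i, b_i) sampled from an
# initial measure; here, standard Gaussians (an illustrative choice).
a = rng.normal(size=N)
w = rng.normal(size=N)
b = rng.normal(size=N)

def forward(X):
    """Mean-field network: f(x) = (1/N) * sum_i a_i * relu(w_i x + b_i)."""
    pre = np.outer(w, X) + b[:, None]   # pre[i, j] = w_i * x_j + b_i
    return (a @ np.maximum(pre, 0.0)) / N

lr = 0.5
for step in range(2000):
    pre = np.outer(w, X) + b[:, None]
    act = np.maximum(pre, 0.0)          # relu activations
    err = (a @ act) / N - Y             # residual, dF/dpred per data point

    # Gradients of the error functional F = (1/2M) * sum_j err_j^2
    # with respect to each particle's parameters.
    dact = (pre > 0).astype(float)      # relu derivative
    grad_a = (act @ err) / (N * M)
    grad_w = ((a[:, None] * dact * X[None, :]) @ err) / (N * M)
    grad_b = ((a[:, None] * dact) @ err) / (N * M)

    # Mean-field time scaling: multiplying the step by N makes each particle
    # move at O(1) speed, matching the Wasserstein gradient-flow limit.
    a -= lr * N * grad_a
    w -= lr * N * grad_w
    b -= lr * N * grad_b

mse = np.mean((forward(X) - Y) ** 2)
```

As N grows, the empirical measure (1/N) Σᵢ δ_{(aᵢ,wᵢ,bᵢ)} converges to a measure evolving by the parabolic PDE family mentioned in the abstract; the finite-N loop above is its simplest discretization.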

Keywords: Machine learning; Continuous formulation; Gradient flow; Wasserstein distance; MSC: 35Q49; 49Q22; 68T07
Date: 2022

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-3-031-05331-3_3

Ordering information: This item can be ordered from
http://www.springer.com/9783031053313

DOI: 10.1007/978-3-031-05331-3_3

More chapters in Springer Books from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-11-30
Handle: RePEc:spr:sprchp:978-3-031-05331-3_3