
Local Lipschitz Bounds of Deep Neural Networks

Calypso Herrera, Florian Krach and Josef Teichmann

Papers from arXiv.org

Abstract: The Lipschitz constant is an important quantity arising in the convergence analysis of gradient-based optimization methods, yet it is generally unclear how to estimate it for a complex model, which makes the problem relevant to non-convex optimization more broadly. The main result provides a local upper bound on the Lipschitz constants of a multi-layer feed-forward neural network and of its gradient. Moreover, lower bounds are established, which are used to show that global upper bounds for these Lipschitz constants cannot exist. In contrast to previous works, we compute the Lipschitz constants with respect to the network parameters and not with respect to the inputs. These constants are needed for the theoretical description of many step-size schedulers of gradient-based optimization schemes and for their convergence analysis. The underlying idea is simple yet effective. The results are extended to a generalization of neural networks, continuously deep neural networks, which are described by controlled ODEs.
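
As a purely illustrative companion to the abstract, the following Python sketch empirically probes the local Lipschitz constant of a small feed-forward network with respect to its parameters, for a fixed input, by sampling parameter pairs inside a small ball and recording the largest difference quotient. Everything here (the architecture, the tanh activation, the sampling radius, and names such as forward, sizes, theta0) is an assumption made for illustration only; this is an empirical lower estimate, not the paper's analytical bound or code.

    # Minimal sketch (assumptions, not the paper's method): empirically probe
    # the local Lipschitz constant of a small tanh feed-forward network with
    # respect to its parameters theta, for one fixed input x.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(theta, x, sizes):
        """Evaluate a feed-forward network whose weights and biases are
        packed into the flat vector `theta`; layer widths are in `sizes`."""
        h, offset = x, 0
        for i, (n_in, n_out) in enumerate(zip(sizes[:-1], sizes[1:])):
            W = theta[offset:offset + n_in * n_out].reshape(n_out, n_in)
            offset += n_in * n_out
            b = theta[offset:offset + n_out]
            offset += n_out
            h = W @ h + b
            if i < len(sizes) - 2:      # no activation on the output layer
                h = np.tanh(h)
        return h

    sizes = [3, 8, 8, 1]                # illustrative layer widths
    n_params = sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))
    theta0 = rng.normal(scale=0.5, size=n_params)   # centre of the local ball
    x = rng.normal(size=sizes[0])       # fixed input

    # Sample parameter pairs in a ball of the given radius around theta0 and
    # keep the largest ratio ||f_t1(x) - f_t2(x)|| / ||t1 - t2||; this gives
    # an empirical lower estimate of the local Lipschitz constant of the map
    # theta -> f_theta(x) on that ball.
    radius, best = 0.1, 0.0
    for _ in range(2000):
        t1 = theta0 + radius * rng.uniform(-1, 1, n_params)
        t2 = theta0 + radius * rng.uniform(-1, 1, n_params)
        num = np.linalg.norm(forward(t1, x, sizes) - forward(t2, x, sizes))
        den = np.linalg.norm(t1 - t2)
        if den > 0:
            best = max(best, num / den)

    print(f"empirical local Lipschitz estimate (w.r.t. parameters): {best:.4f}")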

Date: 2020-04, Revised 2023-02
New Economics Papers: this item is included in nep-big and nep-cmp
Citations: View citations in EconPapers (3)

Downloads: http://arxiv.org/pdf/2004.13135 (latest version, application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2004.13135


More papers in Papers from arXiv.org
Bibliographic data for this series is maintained by arXiv administrators.

Handle: RePEc:arx:papers:2004.13135