EconPapers    
Volume Algorithm: Training Neural Networks and Solving Difficult Linear Programs

Mourad Baiou, Francisco Barahona and Joao Goncalves
Additional contact information
Mourad Baiou: Université Clermont Auvergne, CNRS, Clermont Auvergne INP
Francisco Barahona: IBM T. J. Watson Research Center
Joao Goncalves: IBM T. J. Watson Research Center

Chapter 12 in Optimization Essentials, 2024, pp. 355-392, from Springer

Abstract: Subgradient methods have a long history of applications in optimization. They are known for producing fast approximate solutions in cases where more exact methods are prohibitively expensive. The Volume algorithm (VA) is a subgradient method developed to deal with large-scale linear programs that are too difficult for traditional algorithms. Its main advantage is that it produces approximate primal solutions as well as dual solutions, and it has shown fast convergence in practice. Here we review this method and present experiments in three different areas, which demonstrate its good performance and wide applicability. First, we deal with training neural networks, and show that replacing the widely used Stochastic Gradient Descent method with the VA leads to very good results. Second, we study bus routing in a medium-size city, tackling a linear programming relaxation that is too large to be treated with traditional linear programming algorithms; here the VA gives tight lower bounds, and the approximate primal solution is used as the starting point for several heuristics that produce the final routing. Finally, we turn our attention to the p-median problem for restructuring semistructured data. We apply the VA to the linear programming relaxation and decompose it in a simple way so that it can be treated in parallel, combining a parallel implementation with a heuristic that starts from the approximate primal solution and derives an integer solution. We show experiments with large instances coming from semistructured databases; for the majority of these cases the gap from optimality was less than 1%.
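The distinguishing feature of the VA is that, alongside the usual subgradient update of the dual multipliers, it maintains a convex combination of the Lagrangian subproblem solutions as an approximate primal solution. A minimal Python sketch of this idea, for a toy LP min c·x subject to Ax ≥ b, 0 ≤ x ≤ 1, with the constraints Ax ≥ b relaxed by multipliers u ≥ 0 (the parameters alpha and lam, the fixed upper bound, and the simplified serious-step rule are illustrative assumptions, not the chapter's exact implementation):

```python
import numpy as np

def volume_algorithm(c, A, b, iters=200, alpha=0.1, lam=0.1, ub=None):
    """Sketch of the Volume algorithm for min c.x s.t. Ax >= b, 0 <= x <= 1,
    relaxing Ax >= b with multipliers u >= 0. Illustrative only."""
    m, n = A.shape

    def subproblem(u):
        # Lagrangian: u.b + min over the box of (c - A^T u).x,
        # solved by setting x_j = 1 exactly when its reduced cost is negative.
        red = c - A.T @ u
        x = (red < 0).astype(float)
        return x, u @ b + red @ x

    u = np.zeros(m)
    x_bar, best_L = subproblem(u)        # primal estimate and dual bound
    u_best = u.copy()
    if ub is None:
        ub = float(c @ np.ones(n))       # any known feasible upper bound

    for _ in range(iters):
        v = b - A @ x_bar                # subgradient at the primal estimate
        if np.linalg.norm(v) < 1e-12:
            break                        # x_bar satisfies the relaxed rows
        s = lam * (ub - best_L) / (v @ v)          # Polyak-style step size
        u = np.maximum(0.0, u_best + s * v)        # step from the best dual point
        x_t, L = subproblem(u)
        x_bar = alpha * x_t + (1 - alpha) * x_bar  # volume-style primal average
        if L > best_L:                   # "serious" step: dual bound improved
            best_L, u_best = L, u.copy()
    return x_bar, u_best, best_L
```

By weak duality, best_L is always a valid lower bound on the LP optimum, while x_bar drifts toward feasibility in the relaxed constraints; the actual VA adjusts alpha and lam adaptively (green/yellow/red iterations), which this sketch omits.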

Keywords: Subgradient method; Volume algorithm; Training neural networks; Bus routing; p-median; Semistructured data; 68T07; 90C25
Date: 2024

Persistent link: https://EconPapers.repec.org/RePEc:spr:isochp:978-981-99-5491-9_12

Ordering information: This item can be ordered from
http://www.springer.com/9789819954919

DOI: 10.1007/978-981-99-5491-9_12

More chapters in International Series in Operations Research & Management Science from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-04-01
Handle: RePEc:spr:isochp:978-981-99-5491-9_12