Optimizing over an Ensemble of Trained Neural Networks

Keliang Wang (keliang.wang@uconn.edu), Leonardo Lozano (leolozano@uc.edu), Carlos Cardonha (carlos.cardonha@uconn.edu) and David Bergman (david.bergman@uconn.edu)
Additional contact information
Keliang Wang: Operations and Information Management, University of Connecticut, Storrs, Connecticut 06269
Leonardo Lozano: Operations, Business Analytics & Information Systems, University of Cincinnati, Cincinnati, Ohio 45221
Carlos Cardonha: Operations and Information Management, University of Connecticut, Storrs, Connecticut 06269
David Bergman: Operations and Information Management, University of Connecticut, Storrs, Connecticut 06269

INFORMS Journal on Computing, 2023, vol. 35, issue 3, 652-674

Abstract: We study optimization problems in which the objective function is modeled by feedforward neural networks with rectified linear unit (ReLU) activation. Recent literature has explored the use of a single neural network to model uncertain or complex elements of an objective function; however, ensembles of neural networks are known to produce more stable predictions and to generalize better than single networks, which motivates using ensembles rather than single networks in decision-making pipelines. We study how to incorporate a neural network ensemble as the objective function of an optimization model and explore computational approaches for the resulting problem. We present a mixed-integer linear program based on popular existing big-M formulations for optimizing over a single neural network. We develop a two-phase approach that combines preprocessing procedures, which tighten bounds on critical neurons in the networks, with a Lagrangian relaxation-based branch-and-bound algorithm. Experimental evaluations of our solution methods suggest that using ensembles of neural networks yields more stable and higher-quality solutions than single neural networks and that our optimization algorithm outperforms (an adaptation of) a state-of-the-art approach in terms of computational time and optimality gaps.
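
For readers unfamiliar with the big-M formulation the abstract refers to, the sketch below illustrates the standard mixed-integer linear encoding of a single ReLU neuron together with an averaged ensemble objective. It is an illustrative sketch only, not the authors' formulation: the solver API (Gurobi's gurobipy), the helper add_relu, and all weights and bounds are assumptions, with pre-activation bounds obtained by crude interval arithmetic — exactly the kind of bounds the paper's preprocessing phase is designed to tighten.

```python
# Illustrative sketch (not the paper's exact model): big-M MILP encoding of
# h = max(0, w'x + b) for one ReLU neuron, given bounds L <= w'x + b <= U.
# Assumes gurobipy; all names, weights, and bounds below are hypothetical.
import gurobipy as gp
from gurobipy import GRB

def add_relu(model, x, w, b, L, U, name=""):
    """Add h = max(0, sum_i w[i]*x[i] + b) to the model and return h."""
    pre = gp.quicksum(w[i] * x[i] for i in range(len(w))) + b
    h = model.addVar(lb=0.0, ub=max(U, 0.0), name=f"h{name}")
    z = model.addVar(vtype=GRB.BINARY, name=f"z{name}")  # 1 iff neuron active
    model.addConstr(h >= pre)                # h at least the pre-activation
    model.addConstr(h <= pre - L * (1 - z))  # z = 1 forces h = pre
    model.addConstr(h <= U * z)              # z = 0 forces h = 0
    return h

# Toy ensemble: maximize the average output of K = 3 single-neuron "networks"
# over a box-constrained input x in [-1, 1]^2.
m = gp.Model("ensemble_bigM")
x = [m.addVar(lb=-1.0, ub=1.0, name=f"x{i}") for i in range(2)]
nets = [([1.0, -2.0], 0.5), ([-1.0, 1.0], 0.0), ([2.0, 1.0], -0.3)]
outs = []
for k, (w, b) in enumerate(nets):
    span = sum(abs(wi) for wi in w)          # crude interval bounds on w'x
    outs.append(add_relu(m, x, w, b, b - span, b + span, name=f"_{k}"))
m.setObjective(gp.quicksum(outs) * (1.0 / len(nets)), GRB.MAXIMIZE)
m.optimize()
```

Tighter values of L and U give a tighter linear relaxation of each neuron, which is why the paper pairs bound-tightening preprocessing with its branch-and-bound scheme.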

Keywords: mixed-integer linear programming; neural networks; preprocessing techniques; Benders decomposition
Date: 2023

Downloads: http://dx.doi.org/10.1287/ijoc.2023.1285 (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:inm:orijoc:v:35:y:2023:i:3:p:652-674


 
Handle: RePEc:inm:orijoc:v:35:y:2023:i:3:p:652-674