Discrete simulation optimization for tuning machine learning method hyperparameters
Varun Ramamohan,
Shobhit Singhal,
Aditya Raj Gupta and
Nomesh Bhojkumar Bolia
Journal of Simulation, 2024, vol. 18, issue 5, 745-765
Abstract:
An important aspect of machine learning (ML) involves controlling the learning process of the ML method in question to maximize its performance. Hyperparameter tuning (HPT) involves selecting suitable values for the ML method parameters that control this learning process. Given that HPT can be conceptualized as a black-box optimization problem subject to stochasticity, simulation optimization (SO) methods appear well suited to this purpose. We therefore conceptualize HPT as a discrete SO problem and demonstrate the use of the Kim and Nelson (KN) ranking and selection method and the stochastic ruler (SR) and adaptive hyperbox (AH) random search methods for HPT. We also construct the theoretical basis for applying the KN method. We demonstrate the application of the KN and SR methods to a wide variety of machine learning models, including deep neural network models. We then successfully benchmark the KN, SR, and AH methods against multiple state-of-the-art HPT methods.
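For readers unfamiliar with discrete SO methods of this kind, the Python sketch below illustrates how a stochastic ruler search might be applied to a toy HPT problem. This is a minimal, illustrative reconstruction, not the authors' implementation: the search space, the stand-in noisy loss, the neighbourhood structure, and the test-count schedule M_k are all assumptions made for demonstration.

import random

# Sketch of the stochastic ruler (SR) method applied to a toy
# hyperparameter-tuning problem (illustrative; not the paper's code).

# Assumed discrete search space: (learning_rate_index, depth_index).
LEARNING_RATES = [0.001, 0.01, 0.1, 1.0]
DEPTHS = [2, 4, 6, 8, 10]

def noisy_loss(x):
    """Hypothetical stand-in for a stochastic validation loss: one
    noisy replication of training a model with hyperparameters x."""
    lr, depth = LEARNING_RATES[x[0]], DEPTHS[x[1]]
    true_loss = (lr - 0.1) ** 2 + 0.01 * (depth - 6) ** 2
    return true_loss + random.gauss(0, 0.05)

def neighbours(x):
    """Neighbourhood: points differing by one step in one coordinate."""
    cands = []
    for d, size in ((0, len(LEARNING_RATES)), (1, len(DEPTHS))):
        for step in (-1, 1):
            v = x[d] + step
            if 0 <= v < size:
                y = list(x)
                y[d] = v
                cands.append(tuple(y))
    return cands

def stochastic_ruler(x0, a, b, n_iters=200):
    """SR search: a candidate z replaces the incumbent only if every
    one of M_k loss samples falls below a 'ruler' draw Theta ~ U(a, b);
    M_k grows slowly with the iteration count k (assumed schedule)."""
    x = x0
    for k in range(n_iters):
        z = random.choice(neighbours(x))
        m_k = 1 + k // 50
        accepted = True
        for _ in range(m_k):
            if noisy_loss(z) > random.uniform(a, b):
                accepted = False  # z failed a test; keep the incumbent
                break
        if accepted:
            x = z
    return x

if __name__ == "__main__":
    best = stochastic_ruler(x0=(0, 0), a=0.0, b=1.5)
    print("Selected hyperparameters:",
          LEARNING_RATES[best[0]], DEPTHS[best[1]])

In a real HPT application, noisy_loss would perform one training-and-validation replication of the ML model at the given configuration, and the interval (a, b) must bound the attainable objective values.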
Date: 2024
Downloads: http://hdl.handle.net/10.1080/17477778.2023.2219401 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:taf:tjsmxx:v:18:y:2024:i:5:p:745-765
Ordering information: http://www.tandfonline.com/pricing/journal/tjsm20
DOI: 10.1080/17477778.2023.2219401