Technical Note—Trading Off Quick versus Slow Actions in Optimal Search

Steven M. Shechter, Farhad Ghassemi, Yasin Gocgun and Martin L. Puterman
Additional contact information
Steven M. Shechter: Sauder School of Business, University of British Columbia, Vancouver BC V6T 1Z2, Canada
Farhad Ghassemi: Amazon.com, Inc., Seattle, WA 98108
Yasin Gocgun: Department of Industrial Engineering, Istanbul Kemerburgaz University, 34217 Istanbul, Turkey
Martin L. Puterman: Sauder School of Business, University of British Columbia, Vancouver BC V6T 1Z2, Canada

Operations Research, 2015, vol. 63, issue 2, 353-362

Abstract: We consider the search for a target whose precise location is uncertain. The search region is divided into grid cells, and the searcher decides which cell to visit next and whether to search it quickly or slowly. A quick search of a cell containing the target may damage it, resulting in a failed search, or it may locate the target safely. If the target is not in the cell, the search continues over the remaining cells. If a slow search is performed on a cell, then the search ends in failure with a fixed probability regardless of whether or not the target is in that cell (e.g., because of enemy fire while performing the slow search). If the slow search survives this failure possibility, then the search ends in success if the target is in that cell; otherwise, the search continues over the remaining cells. We seek to minimize the probability of the search ending in failure and consider two types of rules for visiting cells: the unconstrained search, in which the searcher may visit cells in any order, and the constrained search, in which the searcher may only visit adjacent cells (e.g., up, down, left, or right of cells already visited). We prove that the optimal policy for the unconstrained search is to search quickly some initial set of cells with the lowest probabilities of containing the target before slowly searching the remaining cells in decreasing order of probabilities. For the special case in which a quick search on a cell containing the target damages it with certainty, the optimal policy is to search all cells slowly, in decreasing order of probabilities. We use the optimal solution of the unconstrained search in a branch-and-bound optimal solution algorithm for the constrained search. For larger instances, we evaluate heuristics and approximate dynamic programming approaches for finding good solutions.
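
The threshold structure of the optimal unconstrained policy can be made concrete with a small evaluation script. The Python sketch below is not the authors' algorithm; it simply scores policies of the form "quick-search the k lowest-probability cells, then slow-search the rest in decreasing order of probability" under the model sketched in the abstract. The parameter names alpha (probability a quick search damages a target that is present) and beta (probability a slow search ends the whole search in failure), as well as the example cell probabilities, are illustrative assumptions rather than notation from the paper.

def failure_probability(policy, alpha, beta):
    """Failure probability of a search policy.

    policy: list of (p, mode) pairs in visiting order, where p is the prior
            probability that the target is in that cell and mode is
            'quick' or 'slow'.
    alpha:  probability a quick search damages a target that is present.
    beta:   probability a slow search ends the whole search in failure
            (e.g., enemy fire), whether or not the target is present.
    """
    success = 0.0
    slow_so_far = 0  # slow searches survived before reaching the current cell
    for p, mode in policy:
        survive_prior = (1.0 - beta) ** slow_so_far
        if mode == 'quick':
            success += p * survive_prior * (1.0 - alpha)
        else:
            success += p * survive_prior * (1.0 - beta)
            slow_so_far += 1
    return 1.0 - success

def best_threshold_policy(probs, alpha, beta):
    """Evaluate policies of the threshold form: quick-search the k
    lowest-probability cells first, then slow-search the remaining cells
    in decreasing order of probability; return the best k found."""
    asc = sorted(probs)  # lowest probabilities first
    best = None
    for k in range(len(probs) + 1):
        policy = ([(p, 'quick') for p in asc[:k]] +
                  [(p, 'slow') for p in sorted(asc[k:], reverse=True)])
        f = failure_probability(policy, alpha, beta)
        if best is None or f < best[1]:
            best = (k, f)
    return best

if __name__ == '__main__':
    probs = [0.05, 0.10, 0.15, 0.30, 0.40]  # hypothetical cell probabilities
    k, f = best_threshold_policy(probs, alpha=0.3, beta=0.2)
    print(f"quick-search the {k} lowest-probability cells first; "
          f"failure probability = {f:.4f}")

With these illustrative numbers the best threshold is k = 4 (failure probability 0.26); setting alpha = 1.0 instead drives the best threshold to k = 0, matching the special case noted in the abstract in which every cell is searched slowly in decreasing order of probability.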

Keywords: optimal search; Markov decision process; approximate dynamic programming; branch-and-bound
Date: 2015
Citations: 2 (tracked in EconPapers)

Downloads: http://dx.doi.org/10.1287/opre.2015.1349 (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:63:y:2015:i:2:p:353-362

More articles in Operations Research from INFORMS. Contact information at EDIRC.
Bibliographic data for this series is maintained by Chris Asher.

 
Handle: RePEc:inm:oropre:v:63:y:2015:i:2:p:353-362