Partial-monotone adaptive submodular maximization

Shaojie Tang and Jing Yuan
Additional contact information
Shaojie Tang: University of Texas at Dallas
Jing Yuan: University of North Texas

Journal of Combinatorial Optimization, 2023, vol. 45, issue 1, No 38, 13 pages

Abstract: Many AI and machine learning problems require adaptively selecting a sequence of items, where each selected item may provide feedback that is valuable for making better selections in the future, with the goal of maximizing an adaptive submodular function. Most existing studies in this field focus on either the monotone case or the non-monotone case. Specifically, if the utility function is monotone and adaptive submodular, Golovin and Krause (J Artif Intell Res 42:427–486, 2011) developed a $$(1-1/e)$$ approximation solution subject to a cardinality constraint. For the cardinality-constrained non-monotone case, Tang (Theor Comput Sci 850:249–261, 2021) showed that a random greedy policy attains an approximation ratio of $$1/e$$. In this work, we generalize the above-mentioned results by studying the partial-monotone adaptive submodular maximization problem. To this end, we introduce the notion of an adaptive monotonicity ratio $$m\in [0,1]$$ to measure the degree of monotonicity of a function. Our main result shows that, subject to a cardinality constraint, if the utility function has an adaptive monotonicity ratio of $$m$$ and is adaptive submodular, then a random greedy policy attains an approximation ratio of $$m(1-1/e)+(1-m)(1/e)$$. Notably, this result recovers the aforementioned $$(1-1/e)$$ and $$1/e$$ approximation ratios when $$m = 1$$ and $$m = 0$$, respectively. We further extend our results to a knapsack constraint and develop a $$(m+1)/10$$ approximation solution for this more general case. One important implication of our results is that even for a non-monotone utility function, we can still attain an approximation ratio close to $$(1-1/e)$$ if the function is "close" to a monotone function. This leads to improved performance bounds for many machine learning applications whose utility functions are almost adaptive monotone.
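To make the random greedy policy concrete, here is a minimal Python sketch based only on the description in the abstract; it is not the authors' pseudocode. The callbacks expected_marginal_gain(item, partial_realization) and observe(item) are hypothetical placeholders for the application's utility model and feedback mechanism.

    import random

    def random_greedy_policy(items, k, expected_marginal_gain, observe):
        # Sketch of a random greedy policy for adaptive submodular
        # maximization under a cardinality constraint k. Both callbacks are
        # assumptions: expected_marginal_gain(v, realization) returns the
        # conditional expected marginal utility of item v given the feedback
        # observed so far; observe(v) returns the realized state of v.
        selected = []
        partial_realization = {}  # item -> observed state (adaptive feedback)
        for _ in range(k):
            remaining = [v for v in items if v not in partial_realization]
            if not remaining:
                break
            # Rank remaining items by conditional expected marginal gain.
            remaining.sort(
                key=lambda v: expected_marginal_gain(v, partial_realization),
                reverse=True,
            )
            # Candidate set: the (at most) k best items. Sampling uniformly
            # from this set, rather than always taking the single best item,
            # is what yields a guarantee in the non-monotone case.
            v = random.choice(remaining[:k])
            selected.append(v)
            partial_realization[v] = observe(v)
        return selected, partial_realization

As a numerical illustration of the ratio $$m(1-1/e)+(1-m)(1/e)$$: with $$m=0.9$$ the bound is roughly 0.9 × 0.632 + 0.1 × 0.368 ≈ 0.606, already close to the monotone guarantee $$1-1/e \approx 0.632$$, while $$m=0$$ gives back $$1/e \approx 0.368$$.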

Keywords: Approximation algorithm; Adaptive submodular maximization; Partial-monotonicity
Date: 2023

Downloads: http://link.springer.com/10.1007/s10878-022-00965-9 (abstract, text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:jcomop:v:45:y:2023:i:1:d:10.1007_s10878-022-00965-9

Ordering information: This journal article can be ordered from
https://www.springer.com/journal/10878

DOI: 10.1007/s10878-022-00965-9

Journal of Combinatorial Optimization is currently edited by My T. Thai.

Handle: RePEc:spr:jcomop:v:45:y:2023:i:1:d:10.1007_s10878-022-00965-9