EconPapers

Optimal Algorithms for Bandit Learning in Matching Markets

Tejas Pagare, Agniv Bandyopadhyay and Sandeep Juneja

Papers from arXiv.org

Abstract: We study the problem of pure exploration in matching markets under uncertain preferences, where the goal is to identify a stable matching with confidence parameter $\delta$ and minimal sample complexity. Agents learn their preferences from stochastic rewards whose expected values encode the underlying preference order. This setting arises in labor-market platforms such as Upwork, where firms and freelancers must be matched quickly and stably, without prior knowledge and despite noisy observations. We consider markets with a unique stable matching and establish information-theoretic lower bounds on the sample complexity of (1) one-sided learning, where one side of the market knows its true preferences, and (2) two-sided learning, where both sides are uncertain. We propose a computationally efficient algorithm and prove that, asymptotically as $\delta \to 0$, it matches the lower bound up to a constant factor for one-sided learning. Using insights from the lower bound, we extend the algorithm to the two-sided setting and provide experimental results showing that its sample complexity closely matches the lower bound. Finally, using a system of ODEs, we characterize the idealized fluid path that our algorithm tracks.
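The listing carries no pseudocode, so the sketch below is only a naive reference point for the one-sided setting described in the abstract, not the paper's optimal algorithm: the proposing side explores uniformly with a Hoeffding-style stopping rule calibrated to the confidence parameter delta, then runs Gale-Shapley deferred acceptance on its empirical preference orders against the receiving side's known preferences. The function names, the unit-variance Gaussian reward model, and the specific confidence radius are all illustrative assumptions.

import numpy as np

def gale_shapley(prop_prefs, recv_prefs):
    # Classical deferred acceptance. prop_prefs[i] lists proposer i's
    # receivers best-first; recv_prefs[j] lists receiver j's proposers
    # best-first. Returns a dict mapping proposer -> matched receiver.
    n = len(prop_prefs)
    rank = [{p: r for r, p in enumerate(recv_prefs[j])} for j in range(n)]
    next_idx = [0] * n             # next receiver each proposer will try
    holder = [None] * n            # receiver j -> proposer currently held
    free = list(range(n))
    while free:
        i = free.pop()
        j = prop_prefs[i][next_idx[i]]
        next_idx[i] += 1
        if holder[j] is None:
            holder[j] = i
        elif rank[j][i] < rank[j][holder[j]]:
            free.append(holder[j])  # j trades up; old partner is free again
            holder[j] = i
        else:
            free.append(i)          # j rejects i
    return {holder[j]: j for j in range(n)}

def naive_one_sided_learner(mu, recv_prefs, delta=0.05, seed=0):
    # Uniform exploration with a Hoeffding stopping rule: sample every
    # (proposer, receiver) pair once per round until, for each proposer,
    # all arms are pairwise separated at confidence level delta, then run
    # deferred acceptance on the empirical preference orders.
    rng = np.random.default_rng(seed)
    n, m = mu.shape
    counts = np.ones((n, m))
    sums = rng.normal(mu, 1.0)      # one initial unit-variance sample per pair
    while True:
        means = sums / counts
        # anytime confidence radius; union bound over the n*m arms and rounds
        rad = np.sqrt(np.log(4.0 * n * m * counts**2 / delta) / (2.0 * counts))
        lo, hi = means - rad, means + rad
        separated = all(
            lo[i, a] > hi[i, b] or lo[i, b] > hi[i, a]
            for i in range(n) for a in range(m) for b in range(a + 1, m)
        )
        if separated:
            break
        sums += rng.normal(mu, 1.0)
        counts += 1.0
    prop_prefs = [list(np.argsort(-means[i])) for i in range(n)]
    return gale_shapley(prop_prefs, recv_prefs), int(counts.sum())

For example, with mu = np.array([[0.9, 0.5], [0.4, 0.8]]) and recv_prefs = [[0, 1], [1, 0]], the learner returns the unique stable matching {0: 0, 1: 1} together with the total number of samples drawn. Uniform sampling of this kind is exactly what the paper's lower-bound-driven allocation would improve on.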

Date: 2025-09

Downloads: http://arxiv.org/pdf/2509.14466 (latest version, application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2509.14466

More papers in Papers from arXiv.org
Bibliographic data for this series is maintained by arXiv administrators.

 
Page updated 2025-10-04
Handle: RePEc:arx:papers:2509.14466