Optimal adaptive testing: informativeness and incentives
Rahul Deb and Colin Stewart
Theoretical Economics, 2018, vol. 13, issue 3
Abstract:
We introduce a learning framework in which a principal seeks to determine the ability of a strategic agent. The principal assigns a test consisting of a finite sequence of tasks. The test is adaptive: each task that is assigned can depend on the agent's past performance. The probability of success on a task is jointly determined by the agent's privately known ability and an unobserved effort level that he chooses to maximize the probability of passing the test. We identify a simple monotonicity condition under which the principal always employs the most (statistically) informative task in the optimal adaptive test. Conversely, whenever the condition is violated, we show that there are cases in which the principal strictly prefers to use less informative tasks.
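The adaptive structure described in the abstract can be illustrated with a minimal simulation sketch. All names, the functional form of the success probability, and the difficulty-adjustment rule below are illustrative assumptions for exposition, not taken from the paper; the only features drawn from the abstract are that tasks are assigned in sequence, each assignment may depend on past outcomes, and success depends jointly on ability and effort.

```python
import random

def success_prob(ability, effort, difficulty):
    # Illustrative functional form (assumed): higher ability and effort
    # raise the chance of success; harder tasks lower it.
    return min(1.0, ability * effort / difficulty)

def run_adaptive_test(ability, effort_policy, n_tasks, rng):
    """Assign n_tasks tasks one at a time. Each task's difficulty can
    depend on the full history of past outcomes -- the adaptivity in
    the abstract. effort_policy maps the history to the agent's
    (unobserved) effort choice."""
    history = []
    for _ in range(n_tasks):
        # Simple adaptive rule (assumed): difficulty rises with the
        # number of past successes.
        difficulty = 1.0 + sum(history)
        effort = effort_policy(history)
        passed = rng.random() < success_prob(ability, effort, difficulty)
        history.append(1 if passed else 0)
    return history

rng = random.Random(0)
outcomes = run_adaptive_test(ability=0.9,
                             effort_policy=lambda h: 1.0,
                             n_tasks=5, rng=rng)
print(outcomes)  # a list of five 0/1 pass indicators
```

In the paper the agent chooses effort strategically to maximize the probability of passing; the constant `effort_policy` here is a placeholder for that optimization, which is the object of the paper's analysis rather than of this sketch.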
Keywords: Adaptive testing; dynamic learning; ratcheting; testing experts
JEL codes: C44, D82, D83
Date: 2018-10-04
Citations: 3 (in EconPapers)
Downloads: http://econtheory.org/ojs/index.php/te/article/viewFile/20181233/21826/654 (application/pdf)
Related works:
Working Paper: Optimal Adaptive Testing: Informativeness and Incentives (2015) 
Persistent link: https://EconPapers.repec.org/RePEc:the:publsh:2914
Theoretical Economics is currently edited by Simon Board, Todd D. Sarver, Juuso Toikka, Rakesh Vohra, Pierre-Olivier Weill
More articles in Theoretical Economics from Econometric Society
Bibliographic data for this series is maintained by Martin J. Osborne.