
Generalized Bayes Quantification Learning under Dataset Shift

Jacob Fiksel, Abhirup Datta, Agbessi Amouzou and Scott Zeger

Journal of the American Statistical Association, 2022, vol. 117, issue 540, 2163-2181

Abstract: Quantification learning is the task of prevalence estimation for a test population using predictions from a classifier trained on a different population. Quantification methods assume that the sensitivities and specificities of the classifier are either perfect or transportable from the training to the test population. These assumptions are inappropriate in the presence of dataset shift, when the misclassification rates in the training population are not representative of those for the test population. Quantification under dataset shift has been addressed only for single-class (categorical) predictions, and only assuming perfect knowledge of the true labels on a small subset of the test population. We propose generalized Bayes quantification learning (GBQL), which uses the entire compositional predictions from probabilistic classifiers and allows for uncertainty in true class labels for the limited labeled test data. Instead of positing a full model, we use a model-free Bayesian estimating equation approach to compositional data with Kullback–Leibler loss functions based only on a first-moment assumption. The idea will be useful in Bayesian compositional data analysis in general, as it is robust to different generating mechanisms for compositional data and allows 0's and 1's in the compositional outputs, thereby including categorical outputs as a special case. We show how our method yields existing quantification approaches as special cases. We also discuss an extension to an ensemble GBQL that uses predictions from multiple classifiers, yielding inference robust to the inclusion of a poor classifier. We outline a fast and efficient Gibbs sampler using a rounding and coarsening approximation to the loss functions. We establish posterior consistency, asymptotic normality, and valid coverage of interval estimates from GBQL, which to our knowledge are the first theoretical results for a quantification approach in the presence of local labeled data. We also establish a finite-sample posterior concentration rate. Empirical performance of GBQL is demonstrated through simulations and analysis of real data with evident dataset shift. Supplementary materials for this article are available online.
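To make the quantification setting concrete, here is a minimal sketch of the core idea the abstract describes: using a small labeled subset of the test population to estimate the classifier's misclassification rates, then correcting the raw predicted counts to obtain prevalence estimates. This is a hypothetical illustration of classical misclassification-corrected quantification, not the authors' GBQL implementation; all function names are invented for this sketch.

```python
import numpy as np

def estimate_misclassification(true_labels, pred_labels, n_classes):
    """Estimate M[j, k] = P(predicted class k | true class j) from a
    small labeled test subset, with add-one smoothing so that rows are
    well-defined even for classes rarely seen in the labeled data."""
    M = np.ones((n_classes, n_classes))  # Laplace smoothing counts
    for t, p in zip(true_labels, pred_labels):
        M[t, p] += 1.0
    return M / M.sum(axis=1, keepdims=True)

def quantify(pred_labels_unlabeled, M, n_classes):
    """Solve q = p @ M for the prevalence vector p, where q is the
    observed distribution of predictions on the unlabeled test data.
    The unconstrained least-squares solution is clipped and
    renormalized to lie on the probability simplex."""
    q = np.bincount(pred_labels_unlabeled, minlength=n_classes)
    q = q / len(pred_labels_unlabeled)
    p, *_ = np.linalg.lstsq(M.T, q, rcond=None)
    p = np.clip(p, 0.0, None)
    return p / p.sum()
```

Because the misclassification matrix is estimated on the test population itself, this correction remains valid under dataset shift, where training-population error rates would not transport. GBQL generalizes this idea to full compositional (probabilistic) predictions and to uncertainty in the labeled subset's true labels.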

Date: 2022

Downloads: (external link)
http://hdl.handle.net/10.1080/01621459.2021.1909599 (text/html)
Access to full text is restricted to subscribers.



Persistent link: https://EconPapers.repec.org/RePEc:taf:jnlasa:v:117:y:2022:i:540:p:2163-2181

Ordering information: This journal article can be ordered from
http://www.tandfonline.com/pricing/journal/UASA20

DOI: 10.1080/01621459.2021.1909599


Journal of the American Statistical Association is currently edited by Xuming He, Jun Liu, Joseph Ibrahim and Alyson Wilson

More articles in Journal of the American Statistical Association from Taylor & Francis Journals
Bibliographic data for series maintained by Chris Longhurst.

 
Page updated 2025-03-20
Handle: RePEc:taf:jnlasa:v:117:y:2022:i:540:p:2163-2181