How good is good? Probabilistic benchmarks and nanofinance+
Rolando Gonzales Martínez
Papers from arXiv.org
Abstract:
Benchmarks are standards that make it possible to identify opportunities for improvement among comparable units. This study proposes a two-step methodology for calculating probabilistic benchmarks in noisy data sets: (i) double-hyperbolic undersampling filters the noise in key performance indicators (KPIs), and (ii) a relevance vector machine estimates probabilistic benchmarks from the denoised KPIs. The usefulness of the methods is illustrated with an application to a nanofinance+ database. The results indicate that, in the case of nano-finance groups, higher discrimination power is obtained with variables that capture the macroeconomic environment of the country where a group operates. The estimates also show that groups operating in rural regions have different probabilistic benchmarks than groups in urban and peri-urban areas.
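The two-step pipeline described in the abstract can be sketched in miniature. The sketch below does not reproduce the paper's actual methods: a hyperbolic-tangent shrinkage stands in for double-hyperbolic undersampling, and plain logistic regression stands in for the relevance vector machine; all data, names, and parameters are illustrative assumptions.

```python
import math
import random

def tanh_denoise(values, scale=2.0):
    """Hypothetical stand-in for double-hyperbolic undersampling:
    shrink extreme KPI observations toward the median with a tanh
    transform, damping noisy outliers while preserving order."""
    med = sorted(values)[len(values) // 2]
    return [med + scale * math.tanh((v - med) / scale) for v in values]

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Logistic regression by gradient descent: a simple probabilistic
    classifier used here as a stand-in for the relevance vector machine."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict_proba(w, b, x):
    """Probability that a unit with denoised KPI x exceeds the benchmark."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Toy KPI data: noisy indicator with occasional large shocks;
# label = whether the unit performed above an arbitrary benchmark.
random.seed(0)
kpi = [random.gauss(0.0, 1.0) + (3.0 if i % 7 == 0 else 0.0) for i in range(40)]
labels = [1 if k > 0.5 else 0 for k in kpi]

# Step (i): denoise KPIs; step (ii): fit the probabilistic model.
denoised = tanh_denoise(kpi)
w, b = fit_logistic(denoised, labels)
```

After fitting, `predict_proba(w, b, x)` maps a denoised KPI value to a probability of exceeding the benchmark, which is the kind of probabilistic benchmark output the abstract describes.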
Date: 2021-03
New Economics Papers: this item is included in nep-cwa
Downloads: http://arxiv.org/pdf/2103.01669 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2103.01669