Measuring discourse by algorithm
Aviv Caspi and Edward H. Stiglitz
International Review of Law and Economics, 2020, vol. 62, issue C
Abstract:
Scholars increasingly use machine learning techniques such as Latent Dirichlet Allocation (LDA) to reduce the dimensionality of textual data and to study discourse in collective bodies. However, measures of discourse based on algorithmic results typically have no intuitive meaning or obvious relationship to humanly observed discourse. Such measures must be carefully validated before being relied on and interpreted. We examine several common measures of discourse based on algorithmic results and propose a number of ways to validate them in the setting of Federal Open Market Committee meetings. We also suggest that validation techniques may be used as a principled approach to model selection and parameterization.
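As an illustration of the kind of pipeline the abstract describes, the sketch below fits LDA to a toy corpus and reads off each document's topic distribution as one candidate discourse measure. It is a minimal example assuming gensim's LdaModel; the toy corpus, the number of topics, and the use of topic shares as the measure are placeholder assumptions for illustration, not the authors' implementation.

# Minimal, illustrative LDA sketch (not the authors' method).
# Assumes gensim is installed; corpus and parameters are toy placeholders.
from gensim import corpora
from gensim.models import LdaModel

# Toy stand-in for preprocessed meeting transcripts (one token list per statement).
documents = [
    ["inflation", "target", "rate", "policy"],
    ["employment", "labor", "market", "growth"],
    ["inflation", "expectations", "rate", "hike"],
]

dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

# Fit LDA; the number of topics is exactly the kind of parameter the paper
# suggests choosing through validation rather than by convention.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               random_state=0, passes=10)

# One possible "measure of discourse": each statement's topic distribution,
# which could then be compared against humanly observed discourse to validate it.
for i, bow in enumerate(corpus):
    topic_dist = lda.get_document_topics(bow, minimum_probability=0.0)
    print(f"statement {i}: {topic_dist}")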
Date: 2020
Downloads: http://www.sciencedirect.com/science/article/pii/S0144818819302571 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:irlaec:v:62:y:2020:i:c:s0144818819302571
DOI: 10.1016/j.irle.2019.105863
International Review of Law and Economics is currently edited by C. Ott, A. W. Katz and H-B. Schäfer