
Inference in high-dimensional regression models without the exact or $L^p$ sparsity

Jooyoung Cha, Harold D. Chiang and Yuya Sasaki

Papers from arXiv.org

Abstract: This paper proposes a new method of inference in high-dimensional regression models and high-dimensional IV regression models. Estimation is based on a combined use of the orthogonal greedy algorithm, the high-dimensional Akaike information criterion, and double/debiased machine learning. The method of inference for any low-dimensional subvector of the high-dimensional parameters is based on root-$N$ asymptotic normality, which is shown to hold without requiring the exact sparsity condition or the $L^p$ sparsity condition. Simulation studies demonstrate superior finite-sample performance of the proposed method over methods based on the LASSO or random forests, especially under less sparse models. We illustrate an application to production analysis with a panel of Chilean firms.
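As a rough illustration of the first ingredient named in the abstract, the sketch below implements a plain orthogonal greedy algorithm (orthogonal matching pursuit) for variable selection in Python, with a generic AIC-type stopping rule standing in for the paper's high-dimensional AIC. The function name, the simple criterion, and the toy data are assumptions made for illustration only; this is not the authors' implementation and omits the double/debiased machine learning step.

# Minimal sketch, not the authors' code: orthogonal greedy selection with an
# AIC-style stopping rule (a stand-in for the paper's HDAIC).
import numpy as np

def orthogonal_greedy(X, y, max_terms=20):
    """Greedily add the regressor most correlated with the current residual,
    refitting by least squares on the selected set after each addition."""
    n, p = X.shape
    selected, paths = [], []
    residual = y.copy()
    for _ in range(min(max_terms, p)):
        # pick the column with the largest absolute correlation with the residual
        scores = np.abs(X.T @ residual)
        scores[selected] = -np.inf          # never reselect a chosen column
        j = int(np.argmax(scores))
        selected.append(j)
        # refit OLS on all selected columns (the "orthogonal" step)
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ beta
        # AIC-style criterion: n*log(RSS/n) + 2*k
        rss = float(residual @ residual)
        criterion = n * np.log(rss / n) + 2 * len(selected)
        paths.append((list(selected), beta.copy(), criterion))
    # keep the model with the smallest criterion value along the greedy path
    return min(paths, key=lambda t: t[2])

# toy example: two relevant regressors among many irrelevant ones
rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.standard_normal(n)
support, coef, crit = orthogonal_greedy(X, y)
print(support, np.round(coef, 2))

In the paper's setting, a selection step of this kind would be combined with the double/debiased machine learning framework to obtain root-$N$ inference on a low-dimensional subvector of the parameters.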

Date: 2021-08, Revised 2022-12
New Economics Papers: this item is included in nep-big, nep-ecm, nep-isf and nep-ore

Downloads: (external link)
http://arxiv.org/pdf/2108.09520 Latest version (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2108.09520

Handle: RePEc:arx:papers:2108.09520