Ill-Conditioned Orthogonal Scores in Double Machine Learning
Gabriel Saco
Papers from arXiv.org
Abstract:
Double Machine Learning is often justified by nuisance-rate conditions, yet finite-sample reliability also depends on the conditioning of the orthogonal-score Jacobian, which is typically assumed rather than tracked. When residualized treatment variance is small, the Jacobian is ill-conditioned and small systematic nuisance errors can be amplified, so nominal confidence intervals may look precise yet systematically under-cover. Our main result is an exact identity for the cross-fitted PLR-DML estimator, requiring no Taylor approximation. From this identity, we derive a stochastic-order bound that separates oracle noise from a conditioning-amplified nuisance remainder and yields a sufficient condition for root-n inference. We further connect the amplification factor to semiparametric efficiency geometry via the Riesz representer and use a triangular-array framework to characterize regimes in which residual treatment variation weakens. These results motivate an out-of-fold diagnostic that summarizes the implied amplification scale. We do not propose universal thresholds; instead, we recommend reporting the diagnostic alongside cross-learner sensitivity summaries as a fragility assessment, illustrated in simulations and an empirical example.
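For context, the mechanism can be made concrete with the standard Neyman-orthogonal score for the partially linear regression (PLR) model, with nuisances ℓ(X) = E[Y|X] and m(X) = E[D|X]; this is the textbook form from the DML literature, not necessarily the paper's exact notation:

    \psi(W; \theta, \eta) = \bigl( Y - \ell(X) - \theta\,(D - m(X)) \bigr)\,(D - m(X)), \qquad \eta = (\ell, m),
    J = \partial_\theta\, \mathbb{E}[\psi(W; \theta, \eta)] = -\,\mathbb{E}\bigl[(D - m(X))^2\bigr].

The Jacobian J is the negative second moment of the residualized treatment, so solving the score equation scales nuisance errors by |J|^{-1} = 1 / E[(D - m(X))^2]: when residual treatment variation is small, this factor is large and the problem is ill-conditioned.

The paper's out-of-fold diagnostic is not reproduced here; the following is a minimal sketch under the assumption that the implied amplification scale is summarized by the inverse of the cross-fitted (out-of-fold) second moment of the treatment residual. The function name amplification_scale and the random-forest nuisance learner are illustrative choices, not the paper's.

    # Minimal sketch of an out-of-fold amplification diagnostic
    # (illustrative, not the paper's code).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict

    def amplification_scale(X, d, n_folds=5, seed=0):
        """Estimate 1 / E[(D - m_hat(X))^2] from cross-fitted residuals."""
        learner = RandomForestRegressor(n_estimators=200, random_state=seed)
        # Out-of-fold predictions: each d_i is predicted by a model fit without its fold.
        m_hat = cross_val_predict(learner, X, d, cv=n_folds)
        v_hat = d - m_hat                     # residualized treatment
        second_moment = np.mean(v_hat ** 2)   # plug-in for E[(D - m(X))^2]
        return 1.0 / second_moment            # large values flag ill-conditioning

In line with the abstract's recommendation, such a scale would be reported alongside cross-learner sensitivity summaries rather than compared against a universal threshold.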
Date: 2025-12, Revised 2026-01
New Economics Papers: this item is included in nep-cmp and nep-ecm
Downloads: http://arxiv.org/pdf/2512.07083 (latest version, PDF)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2512.07083