Large and Deep Factor Models
Bryan Kelly, Boris Kuznetsov, Semyon Malamud, Teng Andrea Xu and Yuan Zhang
Papers from arXiv.org
Abstract:
We show that a deep neural network (DNN) trained to construct a stochastic discount factor (SDF) admits a sharp additive decomposition that separates nonlinear characteristic discovery from the pricing rule that aggregates them. The economically relevant component of this decomposition is governed by a new object, the Portfolio Tangent Kernel (PTK), which captures the features learned by the network and induces an explicit linear factor pricing representation for the SDF. In population, the PTK-implied SDF converges to a ridge-regularized version of the true SDF, with the effective strength of regularization determined by the spectral complexity of the PTK. Using U.S. equity data, we show that the PTK representation delivers large and statistically significant performance gains, while its spectral complexity has risen sharply, by roughly a factor of six since the early 2000s, imposing increasingly tight limits on finite-sample pricing performance.
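The ridge-regularized SDF that the abstract refers to can be illustrated with a generic estimator. The sketch below is a minimal, hypothetical example (it is not the paper's PTK construction): given a panel of factor returns `F`, it computes ridge SDF coefficients `b` solving the penalized pricing problem, where the ridge penalty `z` plays the role of the effective regularization that, per the abstract, is determined by the PTK's spectral complexity.

```python
import numpy as np

def ridge_sdf_weights(F, z):
    """Ridge SDF coefficients b = (F'F/T + z I)^{-1} mean(F).

    F : (T, K) array of factor returns (here simulated, not real data).
    z : ridge penalty; z = 0 recovers the unregularized estimator.
    """
    T, K = F.shape
    mu = F.mean(axis=0)
    Sigma = F.T @ F / T  # second-moment matrix of the factors
    return np.linalg.solve(Sigma + z * np.eye(K), mu)

# Hypothetical factor panel: 500 months, 10 factors.
rng = np.random.default_rng(0)
F = rng.standard_normal((500, 10)) * 0.05 + 0.01

b_ols = ridge_sdf_weights(F, 0.0)    # unregularized coefficients
b_ridge = ridge_sdf_weights(F, 1.0)  # shrunk toward zero by the penalty

# SDF realizations M_t = 1 - b'(F_t - mu): a linear factor pricing rule.
m = 1.0 - (F - F.mean(axis=0)) @ b_ridge
```

The shrinkage is the key mechanism: a larger `z` (or, in the paper's setting, higher spectral complexity) pulls the estimated coefficients toward zero, trading bias for finite-sample stability.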
Date: 2024-01, Revised 2026-02
New Economics Papers: this item is included in nep-big and nep-cmp
Downloads: http://arxiv.org/pdf/2402.06635 (latest version, PDF)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2402.06635