Breaking the Dimensional Barrier: A Pontryagin-Guided Direct Policy Optimization for Continuous-Time Multi-Asset Portfolio Choice
Jeonggyu Huh, Jaegi Jeon, Hyeng Keun Koo and Byung Hwa Lim
Papers from arXiv.org
Abstract:
We introduce the Pontryagin-Guided Direct Policy Optimization (PG-DPO) framework for high-dimensional continuous-time portfolio choice. Our approach combines Pontryagin's Maximum Principle (PMP) with backpropagation through time (BPTT) to directly inform neural network-based policy learning, enabling accurate recovery of both myopic and intertemporal hedging demands, an aspect often missed by existing methods. Building on this, we develop the Projected PG-DPO (P-PGDPO) variant, which achieves near-optimal policies with substantially improved efficiency. P-PGDPO leverages rapidly stabilizing costate estimates from BPTT and analytically projects them onto PMP's first-order conditions, reducing training overhead while improving precision. Numerical experiments show that PG-DPO matches or exceeds the accuracy of Deep BSDE, while P-PGDPO delivers significantly higher precision and scalability. By explicitly incorporating time-to-maturity, our framework naturally applies to finite-horizon problems and captures horizon-dependent effects, with the long-horizon case emerging as a stationary special case.
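As a rough illustration of the training loop the abstract describes, the sketch below simulates wealth paths, backpropagates through time to train a policy network, and reads the initial costate off the same backward pass, which is the quantity the P-PGDPO projection would use. It is a minimal sketch, not the authors' implementation: it assumes a single risky asset with constant coefficients and CRRA terminal utility, and all names and parameter values (PolicyNet, mu, sigma, r, gamma, the network size) are illustrative choices, not taken from the paper.

```python
# Minimal, hypothetical PG-DPO-style sketch (not the authors' code).
# Assumptions: one risky asset, constant (mu, sigma, r), CRRA terminal
# utility u(x) = x^(1-gamma)/(1-gamma), Euler-Maruyama simulation.
import torch
import torch.nn as nn

torch.manual_seed(0)
mu, sigma, r, gamma = 0.08, 0.20, 0.02, 3.0   # market/preference parameters (assumed)
T, N, batch = 1.0, 50, 4096                   # horizon, time steps, Monte Carlo paths
dt = T / N

class PolicyNet(nn.Module):
    """Maps (time-to-maturity, wealth) to a risky-asset fraction pi."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
    def forward(self, tau, x):
        return self.net(torch.stack([tau, x], dim=-1)).squeeze(-1)

policy = PolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    x = torch.ones(batch, requires_grad=True)   # initial wealth X_0 = 1
    xt = x
    for n in range(N):
        tau = torch.full_like(xt, T - n * dt)    # time-to-maturity feature
        pi = policy(tau, xt)
        dW = torch.randn(batch) * dt**0.5
        # Euler-Maruyama step of the wealth SDE
        xt = xt * (1 + (r + pi * (mu - r)) * dt + pi * sigma * dW)
        xt = xt.clamp(min=1e-6)                  # keep wealth positive
    utility = xt.pow(1 - gamma) / (1 - gamma)
    loss = -utility.mean()
    opt.zero_grad()
    loss.backward()                              # BPTT through the whole path
    opt.step()

# Costate via BPTT: since loss = -(1/batch) * sum_i u_i, the per-path
# costate lambda_i = du_i/dX_{0,i} is recovered as -batch * x.grad[i].
lambda0 = (-batch * x.grad).mean()

# P-PGDPO-style projection (sketch): given costate estimates lambda = J_x
# and a wealth-derivative lambda_x, the PMP first-order condition gives the
# pointwise optimum pi* = -(lambda / (x * lambda_x)) * (mu - r) / sigma**2,
# which under CRRA reduces to the Merton ratio (mu - r) / (gamma * sigma**2).
```

Under these assumed parameters the Merton benchmark is (0.08 - 0.02) / (3 * 0.2**2) = 0.5, a convenient sanity check for the trained policy in this toy setting; the paper's multi-asset and intertemporal-hedging cases require the full costate machinery rather than this closed-form shortcut.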
Date: 2025-04, Revised 2025-09
New Economics Papers: this item is included in nep-cmp
Downloads: http://arxiv.org/pdf/2504.11116 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2504.11116