
Pontryagin-Guided Deep Learning for Large-Scale Constrained Dynamic Portfolio Choice

Jeonggyu Huh, Jaegi Jeon, Hyeng Keun Koo and Byung Hwa Lim

Papers from arXiv.org

Abstract: We present a Pontryagin-Guided Direct Policy Optimization (PG-DPO) method for constrained dynamic portfolio choice, incorporating consumption and multi-asset investment, that scales to thousands of risky assets. By combining neural-network controls with Pontryagin's Maximum Principle (PMP), it circumvents the curse of dimensionality that renders dynamic programming (DP) grids intractable beyond a handful of assets. Unlike value-based PDE or BSDE approaches, PG-DPO enforces PMP conditions at each gradient step, naturally accommodating no-short-selling or borrowing constraints and optional consumption bounds. A "one-shot" variant rapidly computes Pontryagin-optimal controls after a brief warm-up, leading to substantially higher accuracy than naive baselines. On modern GPUs, near-optimal solutions often emerge within just one or two minutes of training. Numerical experiments confirm that, for up to 1,000 assets, PG-DPO accurately recovers the known closed-form solution in the unconstrained case and remains tractable under constraints, far exceeding the longstanding DP-based limit of around seven assets.
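The closed-form benchmark the abstract refers to is the Merton solution. As an illustration only (this is not the paper's PG-DPO implementation, and the parameter values below are made up), the following sketch computes the unconstrained single-asset Merton weight and its box projection, which in the one-asset case corresponds to the no-short-selling/no-borrowing constraint:

```python
import numpy as np

def merton_weight(mu, r, sigma, gamma):
    """Unconstrained closed-form Merton weight: pi* = (mu - r) / (gamma * sigma**2).

    mu: expected risky-asset return, r: risk-free rate,
    sigma: volatility, gamma: CRRA risk-aversion coefficient.
    """
    return (mu - r) / (gamma * sigma ** 2)

def constrained_weight(mu, r, sigma, gamma):
    """Project onto [0, 1]: pi >= 0 (no short-selling), pi <= 1 (no borrowing)."""
    return float(np.clip(merton_weight(mu, r, sigma, gamma), 0.0, 1.0))

# Illustrative parameters (not taken from the paper):
print(merton_weight(0.08, 0.02, 0.2, 2.0))       # 0.75, inside [0, 1]
print(constrained_weight(0.08, 0.02, 0.2, 0.5))  # 3.0 unconstrained, clipped to 1.0
```

A solver such as PG-DPO can be validated against this benchmark: in the unconstrained case its learned policy should match `merton_weight`, and under box constraints (in one dimension) the clipped value.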

Date: 2025-01, Revised 2025-02

Downloads: http://arxiv.org/pdf/2501.12600 Latest version (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2501.12600



Page updated 2025-03-19
Handle: RePEc:arx:papers:2501.12600