EconPapers

Reinforcement Operator Learning (ROL): A hybrid DeepONet-guided reinforcement learning framework for stabilizing the Kuramoto–Sivashinsky equation

Nadim Ahmed, Md Ashraful Babu, Muhammad Sajjad Hossain, Md Fayz-Al-Asad, Md Awlad Hossain, Md Mortuza Ahmmed, M Mostafizur Rahman and Mufti Mahmud

PLOS ONE, 2026, vol. 21, issue 1, 1-25

Abstract: This study presents Reinforcement Operator Learning (ROL)—a hybrid control paradigm that marries Deep Operator Networks (DeepONet) for offline acquisition of a generalized control law with a Twin-Delayed Deep Deterministic Policy Gradient (TD3) residual for online adaptation. The framework is assessed on the one-dimensional Kuramoto–Sivashinsky equation, a benchmark for spatio-temporal chaos. Starting from an uncontrolled energy of 42.8, ROL drives the system to a steady-state energy of 0.40 ± 0.14, achieving a 99.1% reduction relative to a linear–quadratic regulator (LQR) and a 64.3% reduction compared with a pure TD3 agent. DeepONet attains a training loss of 7.8 × 10−6 after only 200 epochs, enabling the RL phase to reach its reward plateau 2.5× sooner and with 65% lower variance than the baseline. Spatio-temporal analysis confirms that ROL restricts state amplitudes to ±1.8—three-fold tighter than pure TD3 and an order of magnitude below LQR—while halving the energy in 0.19 simulation units (33% faster than pure TD3). These results demonstrate that combining operator learning with residual policy optimisation delivers state-of-the-art, sample-efficient stabilisation of chaotic partial differential equations and offers a scalable template for turbulence suppression, combustion control, and other high-dimensional nonlinear systems.
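The uncontrolled benchmark in the abstract can be reproduced approximately with a standard pseudo-spectral solver. The sketch below simulates the 1D Kuramoto–Sivashinsky equation u_t = −u u_x − u_xx − u_xxxx on a periodic domain and tracks a spatially averaged energy metric; the domain length, grid resolution, time step, semi-implicit Euler scheme, and energy definition are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical setup: domain length L, grid size N, and step dt are
# illustrative choices, not parameters reported in the paper.
N, L, dt = 128, 22.0, 0.01
x = L * np.arange(N) / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
lin = k**2 - k**4                              # symbol of -d_xx - d_xxxx

def ks_step(u):
    """Advance u_t = -u u_x - u_xx - u_xxxx by one time step.

    The nonlinear term is treated explicitly and the stiff linear
    term implicitly (semi-implicit Euler), in Fourier space.
    """
    u_hat = np.fft.fft(u)
    n_hat = -0.5j * k * np.fft.fft(u * u)      # -u u_x = -(u^2 / 2)_x
    u_hat = (u_hat + dt * n_hat) / (1.0 - dt * lin)
    return np.real(np.fft.ifft(u_hat))

# Smooth initial condition; the system evolves into spatio-temporal chaos.
u = np.cos(2.0 * np.pi * x / L) * (1.0 + np.sin(2.0 * np.pi * x / L))
for _ in range(2000):
    u = ks_step(u)
energy = float(np.mean(u**2))                  # spatially averaged energy
```

A controller in the paper's spirit would act through an additive forcing term inside `ks_step`, with the DeepONet supplying a base action and the TD3 residual correcting it online; this sketch only provides the uncontrolled dynamics such a controller would stabilize.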

Date: 2026

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0341161 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 41161&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0341161

DOI: 10.1371/journal.pone.0341161


More articles in PLOS ONE from Public Library of Science

 
Page updated 2026-02-01
Handle: RePEc:plo:pone00:0341161