Unified system intelligence: Learning energy strategies for optimizing operations, maintenance, and market outcomes

Dimitrios Pylorof and Humberto E. Garcia

Applied Energy, 2026, vol. 402, issue PB, No S0306261925016368

Abstract: In view of the multitude of efforts to realize technologically and economically viable, long-term sustainable energy technologies, we develop a Reinforcement Learning (RL) approach to intelligent energy market bidding and broader plant operations and maintenance (O&M). Our approach is cognizant of current and future performance, maintenance, and economic aspects of the supervised system and its operational environment. Regardless of their energy generation modality (e.g., fossil, nuclear), the relatively centralized systems that will complement distributed renewable energy in contemporary power grids will not only be subject to the complexity and typical operational, maintenance, and economic nuances of any complex production facility, but will also be based on comparatively new technologies that have yet to prove competitive in increasingly tight markets. The manner in which a production facility is run can have a profound effect on its operational and maintenance costs, while the multi-party bidding and market dynamics of contemporary energy markets largely dictate the operational envelopes and accrued costs for the entire facility. Our methodology establishes a long-horizon-aware intelligent feedback loop that bids strategically in day-ahead energy markets and supervises other O&M aspects (e.g., maintenance action selection and scheduling, slowdown or downtime considerations) in a way that maximizes plant profitability by increasing revenues while controlling and distributing maintenance costs. Our approach rests not only on RL techniques that periodically (re-)construct stochastic, strongly coupled bidding and plant supervision policies with receding reasoning horizons, but also on operationally oriented learning and inference RL workflows. In addition to establishing the interfaces and mechanics of our RL agent, we prototype key aspects of the underlying techno-economic environment and the relevant algorithmic and numerical tools and approximations that enable the sought-after reasoning. In contrast to isolated bidding algorithms that operate on statistically or offline computed marginal costs disconnected from actual, day-to-day operations in a particular operational environment, our techniques learn to address the coupled problem of bidding and plant supervision holistically, within the operational environment defined by the local energy grid and market, and in view of their probabilistic behavior and future evolution.
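The abstract describes, at a high level, a coupled day-ahead bidding and maintenance decision problem, but this page carries no implementation details. As a purely illustrative sketch, the following Python toy environment shows the kind of MDP interface such an RL agent might train against: each hour, the agent chooses a bid quantity and whether to take the plant offline for maintenance, and profit couples market revenue to degradation-driven maintenance cost. Every name, constant, and dynamic here (BiddingAndMaintenanceEnv, the degradation rate, the price model) is a hypothetical assumption for exposition, not taken from the paper.

import random
from dataclasses import dataclass

@dataclass
class PlantState:
    health: float  # component health in [0, 1]; degrades with loading
    hour: int      # hour within the simulated day-ahead horizon

class BiddingAndMaintenanceEnv:
    # Hypothetical constants; real values would come from the plant model.
    PRICE_MEAN, PRICE_STD = 40.0, 12.0   # $/MWh clearing-price model
    CAPACITY = 100.0                     # MW nameplate capacity
    MAINT_COST = 2_000.0                 # $ per maintenance hour
    FAILURE_PENALTY = 20_000.0           # $ if health reaches zero

    def reset(self):
        self.state = PlantState(health=1.0, hour=0)
        return self.state

    def step(self, bid_mw, do_maintenance):
        """Advance one hour; return (state, profit, done)."""
        reward = 0.0
        if do_maintenance:
            # Offline for maintenance: pay the cost, recover some health.
            reward -= self.MAINT_COST
            self.state.health = min(1.0, self.state.health + 0.3)
        else:
            # Stochastic clearing price; dispatch is capped by degraded capacity.
            price = random.gauss(self.PRICE_MEAN, self.PRICE_STD)
            dispatched = min(bid_mw, self.CAPACITY * self.state.health)
            reward += dispatched * price
            # Degradation grows with loading, coupling bids to future O&M cost.
            self.state.health -= 0.02 * (dispatched / self.CAPACITY)
            if self.state.health <= 0.0:
                self.state.health = 0.0
                reward -= self.FAILURE_PENALTY
        self.state.hour += 1
        done = self.state.hour >= 24 or self.state.health == 0.0
        return self.state, reward, done

if __name__ == "__main__":
    env = BiddingAndMaintenanceEnv()
    state, total, done = env.reset(), 0.0, False
    while not done:
        # Naive health-threshold heuristic standing in for a learned policy.
        maintain = state.health < 0.4
        bid = 0.0 if maintain else env.CAPACITY * state.health
        state, reward, done = env.step(bid, maintain)
        total += reward
    print(f"episode profit: ${total:,.0f}")

A learned policy would replace the naive health-threshold heuristic in the rollout; the point of the sketch is only that bidding and maintenance share a single reward stream, which is the coupling the paper's agent is said to exploit.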

Keywords: Energy markets; Intelligent bidding; Reinforcement learning; Digital twinning; Supervisory control; Health and maintenance management; Optimized predictive maintenance; Nuclear power plants
Date: 2026

Downloads:
http://www.sciencedirect.com/science/article/pii/S0306261925016368
Full text for ScienceDirect subscribers only

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:eee:appene:v:402:y:2026:i:pb:s0306261925016368

Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/bibliographic

DOI: 10.1016/j.apenergy.2025.126906


Applied Energy is currently edited by J. Yan

More articles in Applied Energy from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Handle: RePEc:eee:appene:v:402:y:2026:i:pb:s0306261925016368