EconPapers

Managing engineering systems with large state and action spaces through deep reinforcement learning

C.P. Andriotis and K.G. Papakonstantinou

Reliability Engineering and System Safety, 2019, vol. 191, issue C

Abstract: Decision-making for engineering systems management can be efficiently formulated using Markov Decision Processes (MDPs) or Partially Observable MDPs (POMDPs). Typical MDP/POMDP solution procedures utilize offline knowledge about the environment and provide detailed policies for relatively small systems with tractable state and action spaces. However, in large multi-component systems the dimensions of these spaces easily explode, as system states and actions scale exponentially with the number of components, whereas environment dynamics are difficult to describe explicitly for the entire system and may often be accessible only through computationally expensive numerical simulators. In this work, to address these issues, an integrated Deep Reinforcement Learning (DRL) framework is introduced. The Deep Centralized Multi-agent Actor Critic (DCMAC) is developed, an off-policy actor-critic DRL algorithm that directly probes the state/belief space of the underlying MDP/POMDP, providing efficient life-cycle policies for large multi-component systems operating in high-dimensional spaces. Apart from deep network approximators parametrizing complex functions with vast state spaces, DCMAC also adopts a factorized representation of the system actions, thus being able to designate individualized component- and subsystem-level decisions, while maintaining a centralized value function for the entire system. DCMAC compares well against Deep Q-Network and exact solutions, where applicable, and outperforms optimized baseline policies that incorporate time-based, condition-based, and periodic inspection and maintenance considerations.
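The factorized action representation described in the abstract can be illustrated with a minimal numpy sketch: a shared trunk reads the full system belief, one small output head per component emits that component's action distribution, and a single centralized critic scores the whole system state. This is only a structural sketch under assumed sizes (component count, actions per component, layer widths); it is not the paper's architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions, not values from the paper:
N_COMPONENTS = 5        # components in the engineering system
ACTIONS_PER_COMP = 4    # e.g. do-nothing / inspect / repair / replace
BELIEF_DIM = 3 * N_COMPONENTS   # belief features per component
HIDDEN = 32

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Shared trunk over the full system belief (centralized input).
W1 = rng.normal(scale=0.1, size=(BELIEF_DIM, HIDDEN))
# Factorized actor: one head per component, so the network emits
# sum_i |A_i| logits rather than prod_i |A_i| joint-action logits.
heads = [rng.normal(scale=0.1, size=(HIDDEN, ACTIONS_PER_COMP))
         for _ in range(N_COMPONENTS)]
# Centralized critic: a single scalar value for the entire system.
w_v = rng.normal(scale=0.1, size=(HIDDEN,))

def actor_critic(belief):
    h = np.tanh(belief @ W1)
    policies = [softmax(h @ Wh) for Wh in heads]  # per-component distributions
    value = float(h @ w_v)                        # centralized value estimate
    return policies, value

belief = rng.random(BELIEF_DIM)
policies, value = actor_critic(belief)

# Factorization shrinks the actor's output from exponential to linear:
joint_size = ACTIONS_PER_COMP ** N_COMPONENTS    # 4^5 = 1024 joint actions
factored_size = ACTIONS_PER_COMP * N_COMPONENTS  # 4*5 = 20 output units
```

Sampling one action per head yields an individualized component-level decision vector, while the shared critic keeps the value function centralized, which is the structural idea the abstract attributes to DCMAC.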

Keywords: Life-cycle cost optimization; Multi-component deteriorating systems; Inspection and maintenance; Dynamic programming; Partially observable Markov decision processes; Multi-agent stochastic control; Large discrete action spaces; Deep reinforcement learning
Date: 2019
References: available in EconPapers; complete reference list from CitEc
Citations: View citations in EconPapers (35)

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0951832018313309
Full text for ScienceDirect subscribers only

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:eee:reensy:v:191:y:2019:i:c:s0951832018313309

DOI: 10.1016/j.ress.2019.04.036


Reliability Engineering and System Safety is currently edited by Carlos Guedes Soares

More articles in Reliability Engineering and System Safety from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Page updated 2025-03-19
Handle: RePEc:eee:reensy:v:191:y:2019:i:c:s0951832018313309