EconPapers    
Economics at your fingertips  

A Machine Learning-Based Energy Management Agent for Fine Dust Concentration Control in Railway Stations

Kyung-Bin Kwon, Su-Min Hong, Jae-Haeng Heo, Hosung Jung and Jong-young Park ()
Additional contact information
Kyung-Bin Kwon: Department of Electrical and Computer Engineering, The University of Texas at Austin, 2501 Speedway, Austin, TX 78712, USA
Su-Min Hong: Raon Friends, 267 Simin-daero, Dongan-gu, Anyang-si 14054, Gyeonggi-do, Republic of Korea
Jae-Haeng Heo: Raon Friends, 267 Simin-daero, Dongan-gu, Anyang-si 14054, Gyeonggi-do, Republic of Korea
Hosung Jung: Korea Railroad Research Institute, 176 Cheoldobangmulgwan-ro, Uiwang-si 16105, Gyeonggi-do, Republic of Korea
Jong-young Park: Korea Railroad Research Institute, 176 Cheoldobangmulgwan-ro, Uiwang-si 16105, Gyeonggi-do, Republic of Korea

Sustainability, 2022, vol. 14, issue 23, 1-13

Abstract: This study developed a reinforcement learning-based energy management agent that efficiently manages the fine dust (particulate matter) concentration in a railway station by controlling facilities such as blowers and air conditioners. To this end, the control task was formulated as an optimization problem based on a Markov decision process, and a model predicting the fine dust concentration in the station was developed by training an artificial neural network (ANN) through supervised learning to serve as the state transition function. Building on this prediction model, the optimal policy for operating the blower and air conditioner in each state was obtained with an ANN trained by the Deep Q-Network (DQN) algorithm. In the case study, the prediction ANN and the DQN agent were trained on actual data from Nam-Gwangju Station and converged to the optimal policy. A comparison with the conventional method shows that the proposed method consumed less power while achieving a greater reduction in fine dust concentration. Moreover, increasing the ratio that weights the compensation for fine dust reduction led the learned agent to reduce the fine dust concentration further at the cost of higher power consumption by the blower and air conditioner.
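The control loop the abstract describes — a Q-network mapping the station state to blower/air-conditioner actions, trained on a reward that trades power cost against dust reduction — can be sketched minimally as below. This is an illustrative toy, not the paper's model: the state layout, action set, power/reduction constants, dust dynamics, and the reward weight RATIO are all assumptions made up for the sketch, and the network is a two-layer numpy net standing in for the paper's ANN.

```python
import numpy as np

# Toy DQN-style sketch (all names and dynamics are assumptions, not the
# paper's actual model). State: [indoor PM, outdoor PM], scaled to ~[0, 1].
# Actions: 0 = all off, 1 = blower on, 2 = blower + air conditioner on.
rng = np.random.default_rng(0)

N_ACTIONS = 3
POWER = np.array([0.0, 1.0, 2.5])       # assumed power draw per action
REDUCTION = np.array([0.0, 0.2, 0.35])  # assumed fractional PM reduction
RATIO = 5.0                             # weight on dust-reduction reward

def step(state, action):
    """Toy dynamics: indoor PM drifts toward outdoor PM, minus control effect."""
    indoor, outdoor = state
    indoor = indoor + 0.1 * (outdoor - indoor)   # infiltration from outside
    indoor *= 1.0 - REDUCTION[action]            # effect of blower / AC
    # Reward: compensation for dust removed minus power consumed
    reward = RATIO * REDUCTION[action] * indoor - POWER[action]
    return np.array([indoor, outdoor]), reward

# Two-layer Q-network: state (2) -> hidden ReLU (16) -> Q-values (3)
H = 16
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, N_ACTIONS)); b2 = np.zeros(N_ACTIONS)

def q_values(s):
    h = np.maximum(0.0, s @ W1 + b1)
    return h, h @ W2 + b2

GAMMA, LR, EPS = 0.95, 0.01, 0.1
state = np.array([0.8, 1.0])
for t in range(500):
    h, q = q_values(state)
    # Epsilon-greedy action selection
    a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(q))
    nxt, r = step(state, a)
    _, q_next = q_values(nxt)
    td = q[a] - (r + GAMMA * np.max(q_next))     # TD error for taken action
    # Gradient of 0.5 * td^2 through the two-layer net
    gq = np.zeros(N_ACTIONS); gq[a] = td
    gh = (W2 @ gq) * (h > 0)                     # backprop through ReLU
    W2 -= LR * np.outer(h, gq); b2 -= LR * gq
    W1 -= LR * np.outer(state, gh); b1 -= LR * gh
    state = nxt
```

The sketch omits pieces a practical DQN would include (replay buffer, target network, batch updates); it only shows the shape of the state-action-reward loop the abstract outlines.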

Keywords: Deep Q-network; energy management; particulate matter; reinforcement learning; supervised learning
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2022
References: View references in EconPapers View complete reference list from CitEc
Citations: View citations in EconPapers (2)

Downloads: (external link)
https://www.mdpi.com/2071-1050/14/23/15550/pdf (application/pdf)
https://www.mdpi.com/2071-1050/14/23/15550/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:14:y:2022:i:23:p:15550-:d:980949

Access Statistics for this article

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

Page updated 2025-03-19
Handle: RePEc:gam:jsusta:v:14:y:2022:i:23:p:15550-:d:980949