Vision-Based Reinforcement Learning Approach to Optimize Bucket Elevator Process for Solid Waste Utilization

Akshay Chavan (), Tobias Rosenhövel, Alexander Elbel, Benedikt Schmidt, Alfons Noe and Dominik Aufderheide ()
Additional contact information
Akshay Chavan: Industrial Measurement Section (iMe), Faculty of Electrical Engineering, South Westphalia University of Applied Sciences, 59494 Soest, Germany
Tobias Rosenhövel: Laboratory of Engineering Mechanics, South Westphalia University of Applied Sciences, 59494 Soest, Germany
Alexander Elbel: Automation Department, Di Matteo Group, 59269 Beckum, Germany
Benedikt Schmidt: Technical Projects Department, Di Matteo Group, 59269 Beckum, Germany
Alfons Noe: Laboratory of Engineering Mechanics, South Westphalia University of Applied Sciences, 59494 Soest, Germany
Dominik Aufderheide: Industrial Measurement Section (iMe), Faculty of Electrical Engineering, South Westphalia University of Applied Sciences, 59494 Soest, Germany

Sustainability, 2024, vol. 16, issue 8, 1-30

Abstract: An energy-intensive industry such as cement manufacturing requires a constant supply of large amounts of traditional fossil fuels, such as coal or gas, for the calcination process. One way to reduce this fuel demand is the use of solid waste, or Alternative Fuel Resources (AFRs), such as wood or paper. An advantage of using such waste is that its combustion byproduct, ash, can serve as an alternative raw material in the cement manufacturing process. However, for structural reasons, bucket elevator technology is the only feasible means of conveying the fuel vertically to feed the calciner in most cement plants. During the fuel feeding process, the inhomogeneous characteristics of AFRs cause the discharge parabolas of these materials to vary across the infeed sample. Hence, these trajectories must be observed and a method derived for their optimal discharge. The purpose of this study is therefore to develop an intelligent, high-performance bucket elevator system. To this end, a vision-based reinforcement learning algorithm is proposed to monitor and control the speed of the elevator depending on the material properties observed at the inlet. These inlet material properties include the type of material used in the simulation and the particle size distribution within the infeed sample. A relationship is established between the inlet material properties and the speed of the bucket elevator, and the best possible scenario is then deduced using a reward function. The reward function is formulated via a deep learning image segmentation algorithm, a novel approach. In test simulations conducted with a random-parameter setup, the optimum speed for a given infeed sample was predicted correctly. It can therefore be concluded that the goal of developing an intelligent bucket elevator system was achieved.
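
The abstract describes a reinforcement learning agent that maps observed infeed properties (material type and particle size distribution) to a bucket elevator speed and scores each choice with a segmentation-derived reward. The following Python sketch illustrates that idea only; it is not the authors' implementation. All names (MATERIALS, SIZE_CLASSES, SPEEDS, discharge_reward) are hypothetical, and the segmentation-based reward is replaced by a dummy placeholder so the sketch runs end to end.

# Minimal, hypothetical sketch: tabular, bandit-style learning of a speed
# recommendation per infeed condition. Not the paper's code.
import random
from collections import defaultdict

MATERIALS = ["wood", "paper"]                 # assumed AFR types mentioned in the abstract
SIZE_CLASSES = ["fine", "medium", "coarse"]   # assumed particle-size bins
SPEEDS = [1.2, 1.5, 1.8, 2.1]                 # illustrative candidate elevator speeds (m/s)

ALPHA, EPSILON = 0.1, 0.2                     # learning rate and exploration rate
q_table = defaultdict(float)                  # (state, speed) -> estimated reward

def discharge_reward(material, size_class, speed):
    """Stand-in for the segmentation-based reward: the paper scores the discharge
    parabola observed in simulation images; a dummy value keeps this sketch runnable."""
    return random.random()

def choose_speed(state):
    """Epsilon-greedy selection over the discrete speed set."""
    if random.random() < EPSILON:
        return random.choice(SPEEDS)
    return max(SPEEDS, key=lambda a: q_table[(state, a)])

for episode in range(5000):
    # One episode = one infeed sample; its observed properties form the state.
    state = (random.choice(MATERIALS), random.choice(SIZE_CLASSES))
    speed = choose_speed(state)
    reward = discharge_reward(*state, speed)
    # One-step (contextual-bandit style) update towards the observed reward.
    q_table[(state, speed)] += ALPHA * (reward - q_table[(state, speed)])

# Report the learned speed recommendation per infeed condition.
for m in MATERIALS:
    for s in SIZE_CLASSES:
        best = max(SPEEDS, key=lambda a: q_table[((m, s), a)])
        print(f"{m:5s} / {s:6s} -> suggested elevator speed {best} m/s")

In the study itself, the reward would instead be computed from the segmented discharge parabola in the DEM simulation images; the simple update above only illustrates how a state-to-speed relationship can be learned from such a reward signal.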

Keywords: alternative fuels; synthetic dataset; bucket elevators; image processing; deep learning; contour detection; reinforcement learning; DEM process simulation; intelligent systems (search for similar items in EconPapers)
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56 (search for similar items in EconPapers)
Date: 2024
References: View complete reference list from CitEc

Downloads: (external link)
https://www.mdpi.com/2071-1050/16/8/3452/pdf (application/pdf)
https://www.mdpi.com/2071-1050/16/8/3452/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:16:y:2024:i:8:p:3452-:d:1379566

Access Statistics for this article

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

 
Page updated 2025-03-19
Handle: RePEc:gam:jsusta:v:16:y:2024:i:8:p:3452-:d:1379566