Hybrid quantum-enhanced reinforcement learning for energy-efficient resource allocation in fog-edge computing
S. Sureka Nithila Princy and
Paulraj Ranjith Kumar
Additional contact information
S. Sureka Nithila Princy: P.S.R Engineering College
Paulraj Ranjith Kumar: P.S.R Engineering College
Journal of Combinatorial Optimization, 2025, vol. 50, issue 1, No 11, 36 pages
Abstract:
The proliferation of Internet of Things (IoT) devices has intensified the need for intelligent, adaptive, and energy-efficient resource management across mobile edge–fog–cloud infrastructures. Conventional optimization approaches often fail to manage the dynamic interplay among fluctuating workloads, energy constraints, and real-time scheduling. To address this, a Hybrid Quantum-Enhanced Reinforcement Learning (HQERL) framework is introduced, unifying quantum-inspired heuristics, swarm intelligence, and reinforcement learning into a co-adaptive scheduling system. HQERL employs a feedback-driven architecture to synchronize exploration, optimization, and policy refinement for enhanced task scheduling and resource control. The Maximum Likelihood Swarm Whale Optimization (MLSWO) module encodes dynamic task and system states using swarm intelligence guided by statistical likelihood, generating information-rich inputs for the learning controller. To prevent premature convergence and expand the scheduling search space, the Quantum Brainstorm Optimization (QBO) component incorporates probabilistic memory and collective learning to diversify scheduling solutions. These enhanced representations and exploratory strategies feed into the Proximal Policy Optimization (PPO) controller, which dynamically adapts resource allocation policies in real time based on system feedback, ensuring resilience to workload shifts. Furthermore, Dynamic Voltage Scaling (DVS) is integrated to improve energy efficiency by adjusting processor voltages and frequencies according to workload demands. This seamless coordination enables HQERL to balance task latency, resource use, and power consumption. Evaluation on the LSApp dataset reveals that HQERL yields a 15% energy-efficiency gain, a 12% makespan reduction, and a 23.3% boost in peak system utility, validating its effectiveness for sustainable IoT resource management.
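The DVS component described in the abstract rests on the standard CMOS dynamic-power relation P ≈ C·V²·f: a task run at a lower voltage/frequency pair that still meets its deadline consumes less energy. The sketch below illustrates only that general trade-off; the function names, voltage/frequency levels, and capacitance value are hypothetical and are not taken from the paper, whose actual DVS model is not reproduced here.

```python
# Illustrative DVS sketch (hypothetical parameters, not the paper's model).

def dvs_energy(cycles, capacitance, voltage, frequency):
    """Dynamic energy for a task: P = C * V^2 * f, run time t = cycles / f."""
    power = capacitance * voltage**2 * frequency
    time = cycles / frequency
    return power * time  # simplifies to C * V^2 * cycles

def scale_for_slack(cycles, deadline, levels):
    """Pick the lowest-voltage (V, f) level that still meets the deadline."""
    feasible = [(v, f) for v, f in levels if cycles / f <= deadline]
    if not feasible:
        raise ValueError("no frequency level meets the deadline")
    return min(feasible, key=lambda vf: vf[0])  # lowest voltage saves most energy

# Hypothetical processor levels: (volts, Hz).
levels = [(1.2, 2.0e9), (1.0, 1.5e9), (0.8, 1.0e9)]
task_cycles = 1.2e9

v, f = scale_for_slack(task_cycles, deadline=1.0, levels=levels)
e_scaled = dvs_energy(task_cycles, 1e-9, v, f)          # deadline-aware level
e_full = dvs_energy(task_cycles, 1e-9, *max(levels))    # always-max-speed level
```

With a 1 s deadline, the 0.8 V level is infeasible (1.2 s run time), so the 1.0 V level is chosen; the quadratic dependence on voltage is what makes even a modest voltage reduction pay off in energy.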
Keywords: Fog-edge computing; Distributed computing; Deep learning; Quantum computing; Deep reinforcement learning
Date: 2025
Downloads:
http://link.springer.com/10.1007/s10878-025-01336-w Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:jcomop:v:50:y:2025:i:1:d:10.1007_s10878-025-01336-w
Ordering information: This journal article can be ordered from
https://www.springer.com/journal/10878
DOI: 10.1007/s10878-025-01336-w
Journal of Combinatorial Optimization is currently edited by Thai, My T.
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.