
Task Offloading Based on LSTM Prediction and Deep Reinforcement Learning for Efficient Edge Computing in IoT

Youpeng Tu, Haiming Chen, Linjie Yan and Xinyan Zhou
Additional contact information
Youpeng Tu: Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo 315211, China
Haiming Chen: Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo 315211, China
Linjie Yan: Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo 315211, China
Xinyan Zhou: Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo 315211, China

Future Internet, 2022, vol. 14, issue 2, 1-19

Abstract: In IoT (Internet of Things) edge computing, task offloading can introduce additional transmission delay and transmission energy consumption. To reduce the resource cost of task offloading and improve the utilization of server resources, in this paper we model task offloading as a joint cost-minimization decision problem that integrates processing latency, processing energy consumption, and the throw (drop) rate of latency-sensitive tasks. We propose the Online Predictive Offloading (OPO) algorithm, based on Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) networks, to solve this offloading decision problem. In the training phase, the algorithm uses an LSTM to predict the load of the edge server in real time, which improves both the convergence accuracy and the convergence speed of the DRL algorithm during offloading. In the testing phase, the LSTM network predicts the characteristics of the next task, and the DRL decision model allocates computational resources for the task in advance, further reducing task response delay and enhancing the offloading performance of the system. Experimental evaluation shows that the algorithm reduces average latency by 6.25%, offloading cost by 25.6%, and task throw rate by 31.7%.
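The cost model sketched in the abstract (a weighted combination of latency, energy, and a penalty for dropped latency-sensitive tasks) can be illustrated with a minimal decision rule. The sketch below is an assumption-laden illustration, not the paper's exact formulation: the parameter names, weights, and the simple load-inflated service-time model all stand in for details not given here, and `predicted_load` plays the role of the LSTM's server-load prediction.

```python
# Illustrative sketch of a local-vs-offload cost comparison. All formulas
# and parameter values are hypothetical; in the paper's OPO algorithm a DRL
# agent learns this decision and an LSTM supplies the load prediction.

def offload_decision(task_cycles, deadline,
                     local_speed=1e9, server_speed=4e9,
                     predicted_load=0.5, tx_delay=0.05,
                     local_power=2.0, tx_power=1.0,
                     w_latency=1.0, w_energy=0.5, drop_penalty=10.0):
    """Return ('local' or 'offload', cost), minimizing a weighted cost."""
    def cost(latency, energy):
        c = w_latency * latency + w_energy * energy
        if latency > deadline:          # latency-sensitive task is dropped
            c += drop_penalty
        return c

    # Local execution: latency and energy follow from CPU cycles.
    t_local = task_cycles / local_speed
    c_local = cost(t_local, local_power * t_local)

    # Offloading: transmission delay plus service at a loaded server;
    # the predicted load inflates the effective service time.
    t_off = tx_delay + task_cycles / (server_speed * (1.0 - predicted_load))
    c_off = cost(t_off, tx_power * tx_delay)

    return ('local', c_local) if c_local <= c_off else ('offload', c_off)
```

Under this toy model, a lightly loaded server attracts offloading, while a heavily loaded one (as the LSTM would predict) pushes the task back to local execution, which is the intuition behind predicting load before deciding.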

Keywords: computational offloading; resource allocation; prediction; DRL; LSTM
JEL-codes: O3
Date: 2022
Citations: 1 (tracked in EconPapers)

Downloads: (external link)
https://www.mdpi.com/1999-5903/14/2/30/pdf (application/pdf)
https://www.mdpi.com/1999-5903/14/2/30/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:14:y:2022:i:2:p:30-:d:727612

Future Internet is currently edited by Ms. Grace You

More articles in Future Internet from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jftint:v:14:y:2022:i:2:p:30-:d:727612