Meta In-Context Learning: Harnessing Large Language Models for Electrical Data Classification

Mi Zhou, Fusheng Li, Fan Zhang, Junhao Zheng and Qianli Ma
Additional contact information
Mi Zhou: Electric Power Research Institute, China Southern Power Grid, Guangzhou 510663, China
Fusheng Li: Electric Power Research Institute, China Southern Power Grid, Guangzhou 510663, China
Fan Zhang: Electric Power Research Institute, China Southern Power Grid, Guangzhou 510663, China
Junhao Zheng: Guangdong Provincial Key Laboratory of Intelligent Measurement and Advanced Metering of Power Grid, Guangzhou 510663, China
Qianli Ma: Guangdong Provincial Key Laboratory of Intelligent Measurement and Advanced Metering of Power Grid, Guangzhou 510663, China

Energies, 2023, vol. 16, issue 18, 1-18

Abstract: The evolution of communication technology has driven the demand for intelligent power grids and for data analysis in power systems. However, obtaining and annotating electrical data from intelligent terminals is time-consuming and challenging. We propose Meta In-Context Learning (M-ICL), a new approach that harnesses large language models (LLMs) to classify time-series electrical data and largely alleviates the need for annotated data when adapting to new tasks. M-ICL consists of two stages: meta-training and meta-testing. In meta-training, the model is trained on a variety of tasks with adequate training data, with the aim of learning a mapping between electrical data and the embedding space of the LLM. In meta-testing, the trained model makes predictions on new tasks. By exploiting the in-context learning ability of LLMs, M-ICL adapts to new tasks effectively with only a few annotated instances (e.g., 1–5 training instances per class). Our contributions are the novel application of LLMs to electrical data classification and the introduction of M-ICL, which improves classification performance by leveraging the strong in-context learning ability of LLMs. Furthermore, extensive experiments on 13 real-world datasets show that M-ICL improves average accuracy across all datasets by 19.06%, 12.06%, and 6.63% when only one, two, or five training instances per class are available, respectively. In summary, M-ICL offers a promising solution to the challenges of electrical data classification.
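As the abstract describes it, M-ICL first learns (during meta-training) a mapping from electrical time series into the LLM's embedding space on data-rich tasks, and then (during meta-testing) classifies a new task from only a handful of in-context examples. The PyTorch sketch below is a hedged illustration of that two-stage structure under our own assumptions; the names (SeriesEncoder, build_icl_sequence, the single linear projection) are hypothetical and not taken from the paper.

    # Hypothetical sketch of the M-ICL structure described in the abstract;
    # not the authors' implementation.
    import torch
    import torch.nn as nn

    class SeriesEncoder(nn.Module):
        """What meta-training learns: a map from a raw electrical time
        series into the token-embedding space of a frozen LLM."""
        def __init__(self, series_len: int, llm_dim: int):
            super().__init__()
            self.proj = nn.Linear(series_len, llm_dim)  # assumed: one linear layer

        def forward(self, series: torch.Tensor) -> torch.Tensor:
            # (batch, series_len) -> (batch, 1, llm_dim):
            # each series becomes one pseudo-token for the LLM.
            return self.proj(series).unsqueeze(1)

    def build_icl_sequence(encoder: SeriesEncoder,
                           label_embed: nn.Embedding,
                           support_set,                 # list of (series, label_id)
                           query: torch.Tensor) -> torch.Tensor:
        """Meta-testing: interleave the few labeled support examples
        (e.g., 1-5 per class) with their labels, append the unlabeled
        query, and hand the sequence to the frozen LLM for prediction."""
        parts = []
        for series, label_id in support_set:
            parts.append(encoder(series))                     # (1, 1, llm_dim)
            parts.append(label_embed(label_id).unsqueeze(1))  # (1, 1, llm_dim)
        parts.append(encoder(query))    # query's label is left for the LLM
        return torch.cat(parts, dim=1)  # (1, 2k+1, llm_dim) in-context sequence

In such a setup, only the encoder (and possibly the label embeddings) would be optimized across the data-rich meta-training tasks while the LLM stays frozen, so adapting to a new task at meta-test time requires no gradient updates, only the few in-context instances.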

Keywords: electrical data; large language models; in-context learning
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2023

Downloads: (external link)
https://www.mdpi.com/1996-1073/16/18/6679/pdf (application/pdf)
https://www.mdpi.com/1996-1073/16/18/6679/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:16:y:2023:i:18:p:6679-:d:1242259

Energies is currently edited by Ms. Agatha Cao

More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jeners:v:16:y:2023:i:18:p:6679-:d:1242259