Large Language Models at Work in China's Labor Market

Qin Chen, Jinfeng Ge, Huaqing Xie, Xingcheng Xu and Yanqing Yang

Papers from arXiv.org

Abstract: This paper explores the potential impacts of large language models (LLMs) on the Chinese labor market. We analyze occupational exposure to LLM capabilities by incorporating human expertise and LLM classifications, following the methodology of Eloundou et al. (2023). The results indicate a positive correlation between occupational exposure and both wage levels and experience premiums at the occupation level. This suggests that higher-paying and experience-intensive jobs may face greater exposure risks from LLM-powered software. We then aggregate occupational exposure at the industry level to obtain industrial exposure scores. Both occupational and industrial exposure scores align with expert assessments. Our empirical analysis also demonstrates a distinct impact of LLMs, which deviates from the routinization hypothesis. We present a stylized theoretical framework to better understand this deviation from previous digital technologies. By incorporating entropy-based information theory into the task-based framework, we propose an AI learning theory that reveals a different pattern of LLM impacts compared to the routinization hypothesis.
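Illustration: the abstract describes aggregating occupation-level exposure scores to the industry level. A minimal sketch of that step is given below; the occupation names, exposure scores, and employment shares are invented for illustration, and the paper's exact weighting scheme is not specified on this page.

    # Minimal sketch of the industry-level aggregation described in the abstract.
    # All occupation names, exposure scores, and employment shares below are
    # invented for illustration; they are not taken from the paper.

    # Occupation-level LLM exposure scores (hypothetical values in [0, 1]).
    occupational_exposure = {
        "software_developer": 0.78,
        "accountant": 0.65,
        "machine_operator": 0.12,
    }

    # Hypothetical employment shares of each occupation within an industry
    # (shares sum to 1 within each industry).
    industry_employment_shares = {
        "information_technology": {"software_developer": 0.7, "accountant": 0.3},
        "manufacturing": {"machine_operator": 0.8, "accountant": 0.2},
    }

    def industry_exposure(shares, exposure):
        """Employment-weighted average of occupational exposure scores."""
        return sum(share * exposure[occ] for occ, share in shares.items())

    for industry, shares in industry_employment_shares.items():
        print(f"{industry}: {industry_exposure(shares, occupational_exposure):.3f}")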

Date: 2023-08, Revised 2025-05
New Economics Papers: this item is included in nep-ain, nep-cna and nep-tid

Published in China Economic Review, Volume 92 (2025), 102413

Downloads: http://arxiv.org/pdf/2308.08776 (latest version, application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2308.08776

Handle: RePEc:arx:papers:2308.08776