EconPapers    
Building an intelligent brain platform for small and medium-sized enterprises using ChatGLM and Multi-Agent Systems

Daohong Yuan

PLOS ONE, 2026, vol. 21, issue 3, 1-20

Abstract: Large language models (LLMs) have demonstrated strong capabilities in semantic understanding and text generation. However, their direct application in the segmented and specialized domains of small and medium-sized enterprises (SMEs) presents several challenges, including semantic overgeneralization, poor alignment with enterprise-specific knowledge, and insufficient domain expertise. To address these limitations, this study proposes an “Enterprise Intelligent Brain” platform tailored to the business needs of SMEs. The platform is built upon the Chat General Language Model (ChatGLM) and is enhanced through a multi-agent coordination mechanism and structured support from enterprise knowledge graphs. The study centers on improving the platform’s semantic adaptability and intelligent responsiveness in real-world enterprise scenarios. It begins by identifying the core semantic demands of typical SME operations—such as policy consultation, customer service, and business process execution—and constructs a triadic system architecture integrating semantic parsing, task scheduling, and knowledge support. Methodologically, the platform applies domain-specific fine-tuning to the ChatGLM model to enhance relevance and precision. It also incorporates a multi-agent task allocation framework and utilizes knowledge graph reasoning to improve contextual accuracy and domain knowledge integration. The effectiveness of the proposed system is evaluated on three public datasets: Baidu DuReader-Enterprise, the E-commerce Dialogue Dataset, and the Enterprise Knowledge Graph-Based Q&A Dataset. Experimental results confirm that the optimized system significantly outperforms the baseline model across multiple metrics. Notably, it achieved a task completion rate of up to 99.904%, an average response time as low as 0.858 seconds, a context retention score of up to 0.953, and a user satisfaction rating of up to 4.767. Additionally, the system demonstrated strong performance in knowledge invocation coverage and error recovery, indicating its robustness in complex and dynamic SME environments. This study therefore provides a practical and scalable framework for deploying LLMs in domain-specific SME contexts, offering both a technical solution and theoretical insights for developing enterprise-grade semantic intelligence platforms capable of supporting intelligent decision-making and service automation.
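The abstract's triadic pipeline (semantic parsing, multi-agent task scheduling, knowledge-graph support) can be illustrated with a minimal sketch. All class and field names below are hypothetical; the paper's actual implementation (ChatGLM fine-tuning, agent protocols, graph schema) is not described in this record.

```python
# Illustrative sketch only: parse -> dispatch to an agent -> consult a toy
# knowledge graph. None of these names come from the paper itself.
from dataclasses import dataclass

@dataclass
class Task:
    intent: str   # e.g. "policy_consultation" (assumed intent label)
    query: str

# Toy "knowledge graph": (subject, relation) -> object
KG = {
    ("SME", "tax_rebate_policy"): "Eligible SMEs may apply for a rebate.",
    ("SME", "support_hotline"): "Customer service: ext. 100",
}

class Agent:
    def __init__(self, name, intents):
        self.name, self.intents = name, intents

    def handle(self, task):
        # An agent answers by consulting the shared knowledge graph first,
        # falling back (here, just a message) when no entry exists.
        fact = KG.get(("SME", task.query))
        return f"[{self.name}] {fact or 'No KG entry; escalating to LLM.'}"

class Scheduler:
    """Routes each parsed task to the first agent claiming its intent."""
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, task):
        for agent in self.agents:
            if task.intent in agent.intents:
                return agent.handle(task)
        return "No agent available"

agents = [Agent("PolicyAgent", {"policy_consultation"}),
          Agent("ServiceAgent", {"customer_service"})]
sched = Scheduler(agents)
print(sched.dispatch(Task("policy_consultation", "tax_rebate_policy")))
```

The sketch shows only the routing skeleton; in the paper's system the scheduler would sit between the fine-tuned ChatGLM parser and knowledge-graph reasoning components.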

Date: 2026

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0340964 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 40964&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0340964

DOI: 10.1371/journal.pone.0340964


More articles in PLOS ONE from Public Library of Science

 
Page updated 2026-03-29
Handle: RePEc:plo:pone00:0340964