EconPapers    
MoA is All You Need: Building LLM Research Team using Mixture of Agents

Sandy Chen, Leqi Zeng, Abhinav Raghunathan, Flora Huang and Terrence C. Kim

Papers from arXiv.org

Abstract: Large Language Model (LLM) research in the financial domain is particularly complex due to the sheer number of approaches proposed in the literature. Retrieval-Augmented Generation (RAG) has emerged as one of the leading methods in the sector because of its inherent groundedness and the variety of data sources it can draw on. In this work, we introduce a RAG framework called Mixture of Agents (MoA) and demonstrate its viability as a practical, customizable, and highly effective approach for scaling RAG applications. MoA is essentially a layered network of individually customized small language models (Hoffmann et al., 2022) collaborating to answer questions and extract information. While there are many theoretical propositions for such an architecture, and even a few libraries for applying the structure in practice, there are few documented studies evaluating the framework's potential under real business constraints such as cost and speed. We find that the MoA framework, composed of small language models (Hoffmann et al., 2022), produces higher-quality and more grounded responses across financial domains core to Vanguard's business while simultaneously maintaining low costs.
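The layered-network idea in the abstract can be illustrated with a minimal sketch: agents in each layer receive the question plus all answers from the previous layer, and the final layer's outputs are aggregated. The agent functions below are illustrative stand-ins for calls to small language models; the names and aggregation choice are assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a layered Mixture-of-Agents (MoA) pipeline.
# Agent functions here are stand-ins for calls to customized small
# language models; names and structure are illustrative assumptions.
from typing import Callable, List

# An agent maps (question, answers from the previous layer) -> answer.
Agent = Callable[[str, List[str]], str]

def run_moa(question: str, layers: List[List[Agent]]) -> str:
    """Pass a question through successive layers of agents.

    Each agent sees the question plus every answer produced by the
    previous layer; the final layer's answers are joined into one
    response (a simple aggregation stand-in).
    """
    previous: List[str] = []
    for layer in layers:
        previous = [agent(question, previous) for agent in layer]
    return " | ".join(previous)

# Hypothetical stub agents for demonstration only.
def retriever(q: str, ctx: List[str]) -> str:
    return f"docs for '{q}'"

def summarizer(q: str, ctx: List[str]) -> str:
    return f"summary of {len(ctx)} inputs"

answer = run_moa("What drives fund expense ratios?",
                 [[retriever, retriever], [summarizer]])
```

In a real deployment each `Agent` would wrap a model call with its own prompt and data source, which is what allows the per-agent customization the abstract describes.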

Date: 2024-09, Revised 2024-09
New Economics Papers: this item is included in nep-ain, nep-big, nep-cmp and nep-ipr
References: View complete reference list from CitEc

Downloads: (external link)
http://arxiv.org/pdf/2409.07487 Latest version (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2409.07487

Access Statistics for this paper

More papers in Papers from arXiv.org
Bibliographic data for this series is maintained by arXiv administrators.

Page updated 2025-03-19
Handle: RePEc:arx:papers:2409.07487