Information retrieval from textual data: Harnessing large language models, retrieval augmented generation and prompt engineering
Asen Hikov and
Laura Murphy
Additional contact information
Asen Hikov: Data Scientist, Amplify Analytix, Bulgaria
Laura Murphy: Amplify Analytix BV, The Netherlands
Journal of AI, Robotics & Workplace Automation, 2024, vol. 3, issue 2, 142-150
Abstract:
This paper describes how recent advancements in the field of Generative AI (GenAI), and more specifically large language models (LLMs), are incorporated into a practical application that solves a widespread and relevant business problem: information retrieval from textual data in PDF format, such as legal texts, financial reports and research articles. Marketing research, for example, often requires reading through hundreds of pages of financial reports to extract key information on competitors, partners, markets and prospective clients. This is a manual, error-prone and time-consuming task for marketers, and until recently there was little scope for automation, optimisation and scaling.
The application we have developed combines LLMs with a retrieval augmented generation (RAG) architecture and prompt engineering to make this process more efficient. We have developed a chatbot that allows the user to upload multiple PDF documents, obtain a summary of predefined key areas, and ask specific questions answered from the combined documents' content.
The application's architecture begins with the creation of an index for each PDF file. This index embeds the textual content and stores the embeddings in a vector store. A query engine employing a small-to-big retrieval method then answers a set of predefined questions for each PDF to create the summary. The prompt has been designed in a manner that minimises the risk of hallucination, which is common in this type of model. The user interacts with the model via a chatbot feature, which uses similar small-to-big retrieval over the indices for straightforward queries and a more complex sub-question engine for in-depth analysis, providing a comprehensive and interactive tool for document analysis.
We have estimated that the implementation of this tool would reduce the time spent on manual research tasks by around 60 per cent, based on the discussions we have had with potential users.
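The small-to-big retrieval step described in the abstract can be sketched in a few lines: small chunks (individual sentences) are embedded and matched against the query, but the larger parent chunk containing the best-matching sentence is what gets returned as LLM context. The snippet below is a minimal, self-contained illustration in which a toy bag-of-words embedding stands in for a real embedding model; all names and the sample data are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of "small-to-big" retrieval: embed small chunks
# (sentences), match the query against them, return the big
# parent chunk as context. A bag-of-words vector stands in
# for a real embedding model.
import math
from collections import Counter

def embed(text):
    """Toy embedding: lower-cased bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(parent_chunks):
    """Index each sentence (small) with a pointer to its parent chunk (big)."""
    index = []
    for pid, chunk in enumerate(parent_chunks):
        for sentence in chunk.split(". "):
            index.append((embed(sentence), pid))
    return index

def retrieve(query, index, parent_chunks):
    """Match the query against small chunks; return the big parent chunk."""
    q = embed(query)
    best_pid = max(index, key=lambda entry: cosine(q, entry[0]))[1]
    return parent_chunks[best_pid]

# Illustrative data, not from the paper.
chunks = [
    "Revenue grew 12 per cent in 2023. Operating margin was stable.",
    "The company entered two new markets. Headcount rose by 8 per cent.",
]
index = build_index(chunks)
context = retrieve("revenue growth", index, chunks)
```

In a production system the toy embedding would be replaced by a neural embedding model and the linear scan by a vector store, but the small-to-big mapping from matched sentence to parent chunk is the same idea.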
Keywords: RAG architecture; LLM; PDF parsing; query engine
JEL-codes: G2 M15
Date: 2024
Downloads:
https://hstalks.com/article/8575/download/ (application/pdf)
https://hstalks.com/article/8575/ (text/html)
Requires a paid subscription for full access.
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:aza:airwa0:y:2024:v:3:i:2:p:142-150
More articles in Journal of AI, Robotics & Workplace Automation from Henry Stewart Publications
Bibliographic data for series maintained by Henry Stewart Talks.