Interpretable Embeddings for Next Point-of-Interest Recommendation via Large Language Model Question–Answering
Jiubing Chen,
Haoyu Wang,
Jianxin Shang and
Chaomurilige
Additional contact information
Jiubing Chen: School of Statistics, Jilin University of Finance and Economics, Changchun 130117, China
Haoyu Wang: Big Data and Network Management Center, Jilin University, Changchun 130012, China
Jianxin Shang: School of Information and Technology, Northeast Normal University, Changchun 130024, China
Chaomurilige: Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance, Ministry of Education, Minzu University of China, Haidian District, Beijing 100081, China
Mathematics, 2024, vol. 12, issue 22, 1-12
Abstract:
Next point-of-interest (POI) recommendation suggests locations a user may be interested in, helping them explore their surroundings. Existing sequence-based and graph-based POI recommendation methods have matured in capturing spatiotemporal information; POI recommendation methods based on large language models (LLMs), however, focus mainly on capturing sequential transition relationships. This leaves an unexplored challenge: how to leverage LLMs to better capture geographic contextual information. To address this, we propose QA-POI, interpretable embeddings for next point-of-interest recommendation via large language model question–answering, which reframes the POI recommendation task as obtaining interpretable embeddings via LLM prompts, followed by lightweight multi-layer perceptron (MLP) fine-tuning. We introduce question–answer embeddings, generated by asking the LLM yes/no questions about a user’s trajectory sequence; by posing spatiotemporal questions about the trajectory, we aim to extract as much spatiotemporal information from the LLM as possible. During training, QA-POI iteratively selects the most valuable subset from a pool of candidate questions to prompt the LLM, and the resulting embeddings are then fine-tuned for the next-POI recommendation task with a lightweight MLP. Extensive experiments on two datasets demonstrate the effectiveness of our approach.
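The pipeline the abstract describes can be sketched in a few lines. Everything below is illustrative, not the authors' implementation: the question texts, the `ask_llm` stub (a real system would prompt an LLM; here it answers deterministically so the sketch runs), the random MLP weights, and the candidate-POI count are all assumptions, and the iterative question-subset selection step is omitted.

```python
import numpy as np

# Hypothetical spatiotemporal yes/no questions about a user's trajectory.
QUESTIONS = [
    "Was the last check-in within 1 km of the city center?",
    "Did the user check in during the evening?",
    "Did the user visit a restaurant in the last 3 check-ins?",
    "Is the trajectory concentrated in a single neighborhood?",
]

def ask_llm(question: str, trajectory: list[str]) -> bool:
    """Stub for the LLM yes/no call; answers deterministically here."""
    return (len(question) + len(trajectory)) % 2 == 0

def qa_embedding(trajectory: list[str], questions: list[str]) -> np.ndarray:
    """Question-answer embedding: one binary dimension per yes/no answer."""
    return np.array([1.0 if ask_llm(q, trajectory) else 0.0 for q in questions])

# Lightweight MLP head: QA embedding -> hidden layer -> scores over POIs.
rng = np.random.default_rng(0)
n_pois, hidden = 5, 8
W1 = rng.normal(size=(len(QUESTIONS), hidden))
W2 = rng.normal(size=(hidden, n_pois))

def recommend(trajectory: list[str]) -> int:
    e = qa_embedding(trajectory, QUESTIONS)
    h = np.maximum(e @ W1, 0.0)   # ReLU hidden layer
    scores = h @ W2               # one score per candidate next POI
    return int(np.argmax(scores)) # index of the recommended next POI

traj = ["poi_12", "poi_3", "poi_7"]
print(recommend(traj))
```

In the paper's setting, the embedding is interpretable because each dimension corresponds to a human-readable question, and only the small MLP head is trained; the LLM itself is only queried, not fine-tuned.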
Keywords: point of interest; sequential recommendation; large language models; spatiotemporal
JEL-codes: C
Date: 2024
Downloads: (external link)
https://www.mdpi.com/2227-7390/12/22/3592/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/22/3592/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:22:p:3592-:d:1522289
Mathematics is currently edited by Ms. Emma He