Predicting individual food valuation via vision-language embedding model
Hiroki Kojima, Asako Toyama, Shinsuke Suzuki and Yuichi Yamashita
PLOS Digital Health, 2025, vol. 4, issue 10, 1-18
Abstract:
Food preferences differ among individuals, and these variations reflect underlying personalities or mental tendencies. However, capturing and predicting these individual differences remains challenging. Here, we propose a novel method to predict individual food preferences using CLIP (Contrastive Language-Image Pre-Training), which captures both visual and semantic features of food images. Applying this method to food image rating data obtained from human subjects, we demonstrated its predictive capability: it achieved better scores than methods using pixel-based embeddings or label text-based embeddings. Our method can also characterize individual traits as characteristic vectors in the embedding space. By analyzing these individual trait vectors, we identified a systematic bias in the trait vectors of the high picky-eater group. In contrast, the group with relatively high levels of general psychopathology showed no bias in the distribution of trait vectors, but their preferences were significantly less well represented by a single trait vector per individual. Our results demonstrate that CLIP embeddings, which integrate both visual and semantic features, not only effectively predict food image preferences but also provide valuable representations of individual trait characteristics, suggesting potential applications for understanding and addressing food preference patterns in both research and clinical contexts.
Author summary:
Food preferences vary greatly among individuals and can provide insights into personality traits and mental health patterns. Traditional approaches to understanding these preferences have been limited by their inability to capture the complex interplay between what we see and what we know about food. In this study, we developed a new computational method using CLIP (Contrastive Language-Image Pre-Training), an artificial intelligence model that can analyze both visual features and semantic meaning simultaneously. We tested our approach on food rating data from 199 participants who evaluated 896 food images. Our method successfully predicted individual food preferences and revealed distinct patterns in people with different eating behaviors and mental health characteristics. Notably, individuals with picky eating tendencies showed preference patterns that systematically avoided healthy foods, while those with higher mental health symptom scores had less consistent preference patterns overall. These findings demonstrate that combining visual and semantic information provides a powerful tool for understanding food preferences, with potential applications in personalized nutrition, clinical assessment, and the treatment of eating disorders.
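As a concrete illustration of the pipeline the abstract describes, a minimal sketch follows. It is not the authors' released code: the checkpoint name (openai/clip-vit-base-patch32), the ridge regressor, the held-out R^2 score, and the "healthy food" text probe are all assumptions standing in for whatever model variant, regressor, metric, and analyses the paper actually used.

```python
# A minimal sketch of the pipeline described in the abstract, not the
# authors' released code. Checkpoint name, file paths, ratings, and the
# text probe below are illustrative assumptions.
import numpy as np
import torch
from PIL import Image
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def embed_images(paths, batch_size=32):
    """Return L2-normalized CLIP image embeddings, shape (n_images, 512)."""
    chunks = []
    for i in range(0, len(paths), batch_size):
        batch = [Image.open(p).convert("RGB") for p in paths[i:i + batch_size]]
        inputs = processor(images=batch, return_tensors="pt").to(device)
        with torch.no_grad():
            f = model.get_image_features(**inputs)
        chunks.append((f / f.norm(dim=-1, keepdim=True)).cpu().numpy())
    return np.concatenate(chunks)


def fit_trait_vector(embeddings, ratings):
    """Fit one ridge regression per subject from CLIP embeddings to ratings.

    The learned weight vector lives in the same space as the image
    embeddings and plays the role of the subject's "trait vector"; the
    returned held-out R^2 stands in for the paper's prediction score.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, ratings, test_size=0.2, random_state=0)
    reg = RidgeCV(alphas=np.logspace(-2, 3, 12)).fit(X_tr, y_tr)
    return reg.score(X_te, y_te), reg.coef_


def text_direction(prompt):
    """Return the unit-norm CLIP text embedding for a probe sentence."""
    inputs = processor(text=[prompt], return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        t = model.get_text_features(**inputs)
    return (t / t.norm(dim=-1, keepdim=True)).cpu().numpy()[0]


# Hypothetical usage with placeholder data (896 images, one subject):
# X = embed_images(image_paths)                      # (896, 512)
# r2, trait = fit_trait_vector(X, ratings["sub-001"])
# healthy = text_direction("a photo of healthy food")
# bias = trait @ healthy / np.linalg.norm(trait)     # cosine-style projection
```

Fitting one linear model per participant keeps the trait vector in the same 512-dimensional space as the CLIP image and text embeddings, so it can be projected onto text-probe directions such as the hypothetical "healthy food" probe above; this sketch is one plausible realization of the trait-vector analyses the abstract describes, not necessarily the paper's exact procedure.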
Date: 2025
Downloads:
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0001044 (text/html)
https://journals.plos.org/digitalhealth/article/fi ... 01044&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pdig00:0001044
DOI: 10.1371/journal.pdig.0001044