Intuitively Searching for the Rare Colors from Digital Artwork Collections by Text Description: A Case Demonstration of Japanese Ukiyo-e Print Retrieval

Kangying Li, Jiayun Wang, Biligsaikhan Batjargal and Akira Maeda
Additional contact information
Kangying Li: Research Organization of Science and Technology, Ritsumeikan University, Shiga 525-8577, Japan
Jiayun Wang: Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga 525-8577, Japan
Biligsaikhan Batjargal: Research Organization of Science and Technology, Ritsumeikan University, Shiga 525-8577, Japan
Akira Maeda: College of Information Science and Engineering, Ritsumeikan University, Shiga 525-8577, Japan

Future Internet, 2022, vol. 14, issue 7, 1-21

Abstract: In recent years, artworks have been increasingly digitized and organized into databases, and such databases have become convenient tools for researchers. The researchers who retrieve artworks come not only from the humanities but also from materials science, physics, art, and other fields. For researchers whose studies focus on the colors of artworks, it can be difficult to find the required records in existing databases, which are not color-searchable and can only be queried through their metadata. Moreover, although some image retrieval engines can be used to retrieve artworks by text description, existing systems mainly match the dominant colors of the images, so rare uses of color are difficult to find. This makes it hard for many researchers who focus on toning, colors, or pigments to use such search engines for their own needs. To address these two problems, we propose a cross-modal multi-task fine-tuning method based on CLIP (Contrastive Language-Image Pre-Training), which uses the human sensory characteristics of colors contained in the language space and the geometric characteristics of the sketch of a given artwork to obtain better representations of that artwork. The experimental results show that the proposed retrieval framework is effective for intuitively searching for rare colors, and that a small amount of data can improve the correspondence between text descriptions and color information.
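
As background for the method summarized above, the sketch below illustrates the kind of zero-shot text-to-image retrieval that a pretrained CLIP model provides out of the box and that the paper's multi-task fine-tuning builds on. It is not the authors' implementation; the checkpoint name, query text, and image file names are placeholders, and the Hugging Face transformers CLIP interface is assumed.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical setup: a public CLIP checkpoint and a few digitized prints on disk.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

query = ["an ukiyo-e print with a rare pale indigo sky"]            # color-focused text query
image_paths = ["print_001.jpg", "print_002.jpg", "print_003.jpg"]   # placeholder file names
images = [Image.open(p).convert("RGB") for p in image_paths]

inputs = processor(text=query, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text holds scaled cosine similarities between the query and each image.
scores = outputs.logits_per_text[0]
for rank, idx in enumerate(scores.argsort(descending=True).tolist(), start=1):
    print(f"{rank}. {image_paths[idx]} (score={scores[idx].item():.2f})")

A plain CLIP model ranked this way tends to favor a print's dominant colors; the paper's contribution is to fine-tune such a model with additional color-description and sketch-related tasks so that rare color usages are also reflected in the learned representations.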

Keywords: artwork retrieval; cross-modal representation learning; multi-task fine-tuning
JEL-codes: O3
Date: 2022

Downloads: (external link)
https://www.mdpi.com/1999-5903/14/7/212/pdf (application/pdf)
https://www.mdpi.com/1999-5903/14/7/212/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:14:y:2022:i:7:p:212-:d:865398

Future Internet is currently edited by Ms. Grace You

Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jftint:v:14:y:2022:i:7:p:212-:d:865398