Segment Anything Model (SAM)
Elguerch Badr, Dommane Hamza,
Ait Ameur Youssef and Dikel Mohammed
Additional contact information
Elguerch Badr and Dommane Hamza: Nanjing University of Information Science and Technology, Nanjing, Jiangsu, 210044, China
Ait Ameur Youssef and Dikel Mohammed: Nanjing University of Information Science and Technology, Nanjing, Jiangsu, 210044, China
International Journal of Research and Innovation in Social Science, 2025, vol. 9, issue 4, 911-928
Abstract:
In this research paper, we explore the application of the Segment Anything Model (SAM) to full-body segmentation. The primary motivation behind this study is to harness SAM’s generalization capabilities and powerful segmentation architecture to address the specific challenges of full-body segmentation. SAM’s ability to adapt to different segmentation tasks without task-specific training makes it an ideal candidate for this purpose. We begin by providing a detailed overview of the SAM architecture, highlighting its key components and the mechanisms that enable its versatile performance. We then describe the modifications and adaptations made to optimize SAM for full-body segmentation. These include fine-tuning the model on a curated dataset that encompasses a wide range of human body types, poses, and backgrounds to enhance its specificity and accuracy in this context. To validate the effectiveness of our approach, we conduct extensive experiments comparing SAM’s performance with state-of-the-art full-body segmentation models. We evaluate the models using metrics such as Intersection over Union (IoU) and the Dice coefficient, and provide both quantitative and qualitative analyses. Our results demonstrate that SAM, when appropriately adapted, not only matches but often surpasses the performance of specialized segmentation models. Furthermore, we address potential limitations and propose strategies to mitigate them, such as post-processing techniques that refine segmentation boundaries and reduce errors in challenging regions. We also explore the integration of SAM with other computer vision tasks such as pose estimation and action recognition, showcasing its potential for comprehensive human-centric applications.
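For reference, the two evaluation metrics named in the abstract, IoU and the Dice coefficient, can be computed for binary masks as follows. This is a minimal illustrative sketch, not code from the paper; the function names and the NumPy-based implementation are assumptions.

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    # Intersection over Union: |A ∩ B| / |A ∪ B| for binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    # Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return float(2 * inter / total) if total else 1.0
```

Note that Dice is always at least as large as IoU for the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is why the two metrics are usually reported together.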
Date: 2025
Downloads: (external link)
https://www.rsisinternational.org/journals/ijriss/ ... -issue-4/911-928.pdf (application/pdf)
https://rsisinternational.org/journals/ijriss/arti ... -anything-model-sam/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:bcp:journl:v:9:y:2025:issue-4:p:911-928
International Journal of Research and Innovation in Social Science is currently edited by Dr. Nidhi Malhan
More articles in International Journal of Research and Innovation in Social Science from International Journal of Research and Innovation in Social Science (IJRISS)
Bibliographic data for series maintained by Dr. Pawan Verma.