Visual Diagnostics of Dental Caries through Deep Learning of Non-Standardised Photographs Using a Hybrid YOLO Ensemble and Transfer Learning Model
Abu Tareq,
Mohammad Imtiaz Faisal,
Md. Shahidul Islam,
Nafisa Shamim Rafa,
Tashin Chowdhury,
Saif Ahmed,
Taseef Hasan Farook,
Nabeel Mohammed and
James Dudley
Additional contact information
Abu Tareq: Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
Mohammad Imtiaz Faisal: Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
Md. Shahidul Islam: Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
Nafisa Shamim Rafa: Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
Tashin Chowdhury: Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
Saif Ahmed: Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
Taseef Hasan Farook: Adelaide Dental School, The University of Adelaide, Adelaide, SA 5005, Australia
Nabeel Mohammed: Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
James Dudley: Adelaide Dental School, The University of Adelaide, Adelaide, SA 5005, Australia
IJERPH, 2023, vol. 20, issue 7, 1-13
Abstract:
Background: Access to oral healthcare is not uniform globally, particularly in rural areas with limited resources, which limits the potential of automated diagnostics and advanced tele-dentistry applications. Digital caries detection and progression monitoring through photographic communication is influenced by multiple variables that are difficult to standardise in such settings. The objective of this study was to develop a novel and cost-effective virtual computer vision AI system to predict dental cavitations from non-standardised photographs with reasonable clinical accuracy. Methods: A set of 1703 augmented images was obtained from 233 de-identified teeth specimens. Images were acquired using a consumer smartphone without any standardised apparatus. The study utilised state-of-the-art ensemble modelling, test-time augmentation, and transfer learning. The “you only look once” (YOLO) algorithm derivatives v5s, v5m, v5l, and v5x were independently evaluated, and an ensemble of the best results was augmented and transfer learned with ResNet50, ResNet101, VGG16, AlexNet, and DenseNet. The outcomes were evaluated using precision, recall, and mean average precision (mAP). Results: The YOLO model ensemble achieved a mean average precision (mAP) of 0.732, an accuracy of 0.789, and a recall of 0.701. When transferred to VGG16, the final model demonstrated a diagnostic accuracy of 86.96%, a precision of 0.89, and a recall of 0.88. This surpassed all other base methods of object detection from free-hand non-standardised smartphone photographs. Conclusion: A virtual computer vision AI system, blending a model ensemble, test-time augmentation, and transferred deep learning processes, was developed to predict dental cavitations from non-standardised photographs with reasonable clinical accuracy. This model can improve access to oral healthcare in rural areas with limited resources and has the potential to aid automated diagnostics and advanced tele-dentistry applications.
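Illustrative note: the abstract describes a two-stage pipeline, namely object detection with an ensemble of YOLOv5 variants under test-time augmentation, followed by transfer learning of a VGG16 classifier. The sketch below is not the authors' code; it assumes the publicly available ultralytics/yolov5 hub models, a naive concatenation rule for pooling the ensemble's detections (the paper's actual fusion rule is not stated in this record), and a hypothetical binary carious/sound label scheme.

# Minimal sketch of the pipeline described in the abstract (assumptions noted above).
import torch
from torchvision import models, transforms

# --- (1) YOLOv5 ensemble with test-time augmentation ------------------------
# torch.hub exposes the yolov5s/m/l/x variants named in the abstract.
variants = ["yolov5s", "yolov5m", "yolov5l", "yolov5x"]
detectors = [torch.hub.load("ultralytics/yolov5", v, pretrained=True) for v in variants]

def ensemble_detect(image_path):
    """Pool detections from all YOLOv5 variants, using YOLOv5's built-in TTA."""
    boxes = []
    for model in detectors:
        results = model(image_path, augment=True)  # augment=True enables test-time augmentation
        boxes.append(results.xyxy[0])              # (x1, y1, x2, y2, conf, class) per detection
    return torch.cat(boxes, dim=0)                 # naive pooling; placeholder for the paper's fusion rule

# --- (2) Transfer learning with VGG16 on detected regions -------------------
# Replace the classifier head for an assumed two-class carious / sound decision.
vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg16.features.parameters():
    p.requires_grad = False                        # freeze the convolutional backbone
vgg16.classifier[6] = torch.nn.Linear(4096, 2)     # new 2-class output layer

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])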
Keywords: cariology; deep learning; model ensemble; object detection; transfer learning
JEL-codes: I I1 I3 Q Q5
Date: 2023
Downloads: (external link)
https://www.mdpi.com/1660-4601/20/7/5351/pdf (application/pdf)
https://www.mdpi.com/1660-4601/20/7/5351/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jijerp:v:20:y:2023:i:7:p:5351-:d:1112771
IJERPH is currently edited by Ms. Jenna Liu
More articles in IJERPH from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.