A Self-Training-Based System for Die Defect Classification
Ping-Hung Wu, Siou-Zih Lin, Yuan-Teng Chang, Yu-Wei Lai and Ssu-Han Chen
Additional contact information
Ping-Hung Wu: Product Testing Service Office, Nanya Technology Corporation, New Taipei City 243089, Taiwan
Siou-Zih Lin: AI Chip Application & Green Manufacturing Department, Industrial Technology Research Institute, Hsinchu 310401, Taiwan
Yuan-Teng Chang: Department of Industrial Engineering and Management, Ming Chi University of Technology, New Taipei City 243303, Taiwan
Yu-Wei Lai: Center for Artificial Intelligence & Data Science, Ming Chi University of Technology, New Taipei City 243303, Taiwan
Ssu-Han Chen: Department of Industrial Engineering and Management, Ming Chi University of Technology, New Taipei City 243303, Taiwan
Mathematics, 2024, vol. 12, issue 15, 1-25
Abstract:
With increasing wafer sizes and diversifying die patterns, automated optical inspection (AOI) is progressively replacing traditional visual inspection (VI) for wafer defect detection. Yet the defect classification performance of the current AOI system at our case company is suboptimal: its algorithms rely on expert-designed features, which limits adaptability across product models, and operators have little time to annotate defect samples, which restricts the learning potential. This study introduces a novel hybrid self-training algorithm, a semi-supervised approach that integrates pseudo-labeling, noisy student, curriculum labeling, and the Taguchi method. It enables classifiers to autonomously incorporate information from unlabeled data, without the need for hand-crafted feature extraction, even when labeled data are scarce. On a small-scale dataset, the method achieves over 92% accuracy with 25% and 50% labeled data; with only 10% labeled data, it still exceeds 82% accuracy and surpasses the supervised DenseNet classifier by over 20%. On a large-scale dataset, the hybrid method consistently outperforms the other approaches, achieving up to 88.75%, 86.31%, and 83.61% accuracy with 50%, 25%, and 10% labeled data, respectively. Further experiments confirm the method's consistent superiority, highlighting its potential for high classification accuracy in limited-data scenarios.
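The self-training loop at the core of such a method can be pictured as repeatedly pseudo-labeling the most confident unlabeled samples and retraining on the enlarged set. The Python sketch below is a minimal illustration under stated assumptions, not the paper's implementation: scikit-learn's LogisticRegression stands in for the DenseNet backbone, the 90th-to-50th-percentile confidence schedule is an invented curriculum, and the noisy-student perturbations and Taguchi-tuned hyperparameters are omitted.

import numpy as np
from sklearn.linear_model import LogisticRegression

def curriculum_self_training(X_lab, y_lab, X_unlab, rounds=5):
    # Start from the small labeled pool; grow it with pseudo-labels each round.
    X_train, y_train = X_lab.copy(), y_lab.copy()
    remaining = X_unlab.copy()
    for r in range(rounds):
        # Teacher trained on labeled plus previously admitted pseudo-labeled data.
        teacher = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        if len(remaining) == 0:
            break
        proba = teacher.predict_proba(remaining)
        conf = proba.max(axis=1)
        # Curriculum labeling: admit only samples above a confidence percentile,
        # relaxing the cut-off from the 90th toward the 50th percentile over rounds
        # (an assumed schedule, not the paper's setting).
        cutoff = np.percentile(conf, max(50, 90 - 10 * r))
        keep = conf >= cutoff
        pseudo = teacher.classes_[proba.argmax(axis=1)[keep]]
        X_train = np.vstack([X_train, remaining[keep]])
        y_train = np.concatenate([y_train, pseudo])
        remaining = remaining[~keep]
    # Student retrained once more on the fully expanded training set.
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

In the full method described in the abstract, the student would additionally be trained with injected noise (noisy student), and the admission schedule and other hyperparameters would be selected via the Taguchi method rather than fixed as here.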
Keywords: semi-supervised learning; self-training; die defect classification
JEL-codes: C
Date: 2024
Downloads:
https://www.mdpi.com/2227-7390/12/15/2415/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/15/2415/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:15:p:2415-:d:1449152