Optimizing Steel Plate Defect Classification: Leveraging Deep Learning for Improved Accuracy and Operational Efficiency Using a Hybrid Attention-Enhanced Convolutional Network (HAE-CNN)
Mrs. Sridevi Tharanidharan
Dr. Nahla Al-Nour Muhammad Al-Makki
Abstract
The classification of defects in steel plates is crucial for ensuring quality and efficiency in industrial manufacturing processes. Traditional classification methods face significant challenges, including data imbalance, high computational requirements, and the diversity of defect types, complicating real-time detection. This paper introduces a novel approach called the Hybrid Attention-Enhanced Convolutional Network (HAE-CNN) to tackle these issues and improve classification accuracy. The HAE-CNN leverages the capabilities of Convolutional Neural Networks (CNNs) along with an adaptive attention mechanism that allows the model to focus dynamically on key areas within defect images. It employs multi-scale feature extraction using DenseNet to effectively capture both local and global features of steel plate surfaces. To enhance generalization and reduce training duration, transfer learning is utilized through the fine-tuning of pre-trained models, while data augmentation techniques, including Generative Adversarial Networks (GANs), help mitigate data imbalance. Additionally, the model is optimized for real-time applications by implementing methods such as pruning and quantization, ensuring efficient functionality in environments with limited resources. Experimental evaluations reveal that HAE-CNN surpasses existing models such as CNN, ResNet, and DenseNet across various metrics, including precision, recall, F1-score, and accuracy, establishing it as a highly effective solution for defect classification in industrial contexts.
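The abstract notes that the model is compressed via pruning and quantization for real-time, resource-limited deployment. As a minimal illustration of those two ideas (not the paper's actual implementation, and with illustrative function names and weight values), magnitude pruning zeroes the smallest-magnitude weights, and symmetric 8-bit quantization maps the remaining floats onto integer levels with a single scale factor:

```python
def prune_weights(w, sparsity=0.5):
    # Magnitude pruning: zero out the smallest-|w| fraction of the weights.
    k = int(len(w) * sparsity)
    if k == 0:
        return list(w)
    threshold = sorted(abs(x) for x in w)[k - 1]
    return [0.0 if abs(x) <= threshold else x for x in w]

def quantize_int8(w):
    # Symmetric 8-bit quantization: map floats to integers in [-127, 127]
    # using a single scale factor derived from the largest magnitude.
    max_abs = max((abs(x) for x in w), default=0.0)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(x / scale))) for x in w]
    return q, scale

# Toy weight vector standing in for one layer of a trained network.
weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.12, -0.3, 0.02]
pruned = prune_weights(weights, sparsity=0.5)
q, scale = quantize_int8(pruned)
dequant = [qi * scale for qi in q]  # approximate reconstruction

print(pruned)  # half of the weights are zeroed
print(q)       # int8-range codes for the surviving weights
```

In practice these steps trade a small amount of accuracy for a much smaller, faster model, which is what makes real-time inference feasible on constrained industrial hardware.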
This work is licensed under a Creative Commons Attribution 4.0 International License.