Publication:
AI-based automated pre-processing and classification of impacted maxillary canines in panoramic radiographs

Abstract

Objectives: Automating the digital workflow for diagnosing impacted canines on panoramic radiographs (PRs) is challenging. As a first step, this study explored feature extraction, automated cropping, and classification of impacted and non-impacted canines.

Methods: A convolutional neural network (CNN) with the SqueezeNet architecture was first trained on the MATLAB programming platform to classify two groups of PRs (91 with and 91 without impacted canines). Based on these results, the need to crop the PRs became apparent. Next, artificial intelligence (AI) detectors were trained to identify specific landmarks (maxillary central incisors, lateral incisors, canines, bicuspids, nasal area, and the mandibular ramus) on the PRs. These landmarks were then explored as guides for cropping the PRs. Finally, the improvement in classification of the automatically cropped PRs was studied.

Results: Without cropping, the area under the curve (AUC) of the Receiver Operating Characteristic (ROC) curve for classifying impacted versus non-impacted canines was 84%. Landmark training showed that the detectors correctly identified the upper central incisors and the ramus in ~98% of PRs. The combined use of the mandibular ramus and maxillary central incisors as cropping guides yielded the best results (~10% incorrect cropping). When the automatically cropped PRs were used, the AUC-ROC improved to 96%.

Conclusion: AI algorithms can be automated to pre-process PRs and improve the identification of impacted canines.
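The classification step described above can be reproduced in outline with MATLAB's Deep Learning Toolbox (plus the SqueezeNet support package). The sketch below is illustrative only, not the authors' protocol: the folder layout (a "PRs" directory with one subfolder per class), the 80/20 split, the class name "impacted", and the training options are all assumptions.

```matlab
% Minimal transfer-learning sketch: two-class PR classification with SqueezeNet.
% Folder names, split ratio, and training options are assumed for illustration.
imds = imageDatastore('PRs', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, 'randomized');

net = squeezenet;                          % pretrained SqueezeNet (support package)
lgraph = layerGraph(net);
inputSize = net.Layers(1).InputSize;       % [227 227 3]

% Replace the final 1x1 convolution and classification layers for 2 classes
% (impacted vs. non-impacted canines).
numClasses = 2;
newConv = convolution2dLayer(1, numClasses, 'Name', 'new_conv', ...
    'WeightLearnRateFactor', 10, 'BiasLearnRateFactor', 10);
lgraph = replaceLayer(lgraph, 'conv10', newConv);
lgraph = replaceLayer(lgraph, 'ClassificationLayer_predictions', ...
    classificationLayer('Name', 'new_classoutput'));

% Resize the (grayscale) radiographs to the network input size.
augTrain = augmentedImageDatastore(inputSize(1:2), imdsTrain, 'ColorPreprocessing', 'gray2rgb');
augVal   = augmentedImageDatastore(inputSize(1:2), imdsVal,   'ColorPreprocessing', 'gray2rgb');

opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, 'MaxEpochs', 20, ...
    'ValidationData', augVal, 'Plots', 'training-progress');
trainedNet = trainNetwork(augTrain, lgraph, opts);

% AUC of the ROC curve on the held-out set; the score column must match the
% positive class ('impacted' here is an assumed folder/class name).
[~, scores] = classify(trainedNet, augVal);
[~, ~, ~, auc] = perfcurve(imdsVal.Labels, scores(:, 2), 'impacted');
```

The same pipeline would be re-run on the automatically cropped PRs to compare AUC-ROC values, as reported in the Results above.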

Keywords

Artificial intelligence, Deep learning, Automated algorithm, Orthopantomography, Impacted tooth
