CN111798445A - Tooth image caries identification method and system based on convolutional neural network - Google Patents
Tooth image caries identification method and system based on convolutional neural network
- Publication number
- CN111798445A (application CN202010690021.6A)
- Authority
- CN
- China
- Prior art keywords
- caries
- neural network
- module
- image
- convolutional neural
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30036—Dental; Teeth
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention provides a tooth image caries identification method and system based on a convolutional neural network. By deep learning on a large number of clinically diagnosed digital tooth photographs, the invention can accurately estimate the probability that the teeth in a target picture are carious, helping people monitor caries by themselves and go to a hospital for examination in time when necessary. The method offers good real-time performance and the ability to discover early caries.
Description
Technical Field
The invention relates to a tooth health state judgment system based on deep learning. The judged object is a digital photograph. The problem addressed is that it is difficult for non-professionals to discover early caries by themselves; the algorithm provides an early warning of early caries.
Background
Caries (dental caries) is a chronic, progressive, destructive disease of the hard tissues of the teeth, involving multiple factors, with bacteria as the main causative agent. Caries is a common and frequently occurring disease and is listed by the World Health Organization as one of the three diseases most harmful to human beings. Discovering it as early as possible and treating it early can prevent the progression of caries to a great extent.
Caries, especially in children, affects a wide range of teeth, develops rapidly, and is not obvious at the early stage; by the time parents notice it and seek medical treatment, the lesion has usually developed into a cavity, and symptoms of pulpitis or a gingival fistula may even have appeared. Detecting caries as early as possible is therefore crucial to the treatment of children's caries: it can prevent the progression of caries to the greatest extent and achieve a better therapeutic effect.
The tooth surfaces most prone to caries are, in order, the occlusal surface, the proximal surface, the root surface, and the labial and buccal surfaces. According to anatomical site, caries can be divided into three categories: pit-and-fissure caries (occlusal caries), smooth-surface caries (proximal caries, labial and lingual caries) and root-surface caries. Early caries that is not easily found usually occurs in pits and fissures and between adjacent teeth. The pits and fissures of the occlusal surfaces of posterior teeth are weak areas left by incomplete fusion of hard tissues during tooth development; they readily accumulate plaque and food residue, are difficult to clean, are poorly mineralized and therefore prone to caries, and because of their complex three-dimensional structure, pit-and-fissure caries is difficult to discover at an early stage. Proximal surfaces are prone to caries because of food impaction and plaque accumulation; proximal caries is concealed at the early stage, mostly occurring gingival to the contact point, and an ink-stain-like change under the marginal ridge is often only seen by visual inspection once the caries has spread to the dentin at the contact point. Discovering and diagnosing both pit-and-fissure and proximal caries at an early stage, i.e. before an obvious cavity has formed, is difficult even for oral professionals and requires examination means such as X-ray films in addition to clinical examination; for non-professionals it is almost impossible. By the time caries is self-perceived, the cavity is no longer small, treatment is more expensive, and the outcome is less satisfactory than with early treatment. Children's caries develops quickly; in deep caries with large-area damage, the lesion has reached deep dentin, an obvious deep cavity close to the dental pulp can usually be observed clinically, with or without subjective symptoms, and if not treated in time it easily develops into pulpitis or periapical periodontitis. Through the development and design of a deep learning algorithm, the invention can identify early pit-and-fissure caries (caries on the fissure wall and fissure bottom, with the pit-and-fissure shape relatively intact), early proximal caries (without destruction of the marginal ridge) and deep caries with large-area damage, thereby providing early warning of dental caries and better monitoring of oral health.
With the development of computer technology, more new technologies have been applied to detect and judge pit-and-fissure caries and proximal caries more accurately, such as optical projection, the electrical impedance method, quantitative light-induced fluorescence (QLF), near-infrared transillumination imaging, digital radiography and LED technology. These techniques require expensive external imaging hardware and impose certain requirements on external environmental conditions, which greatly limits their popularization among the general population. Sopro-life is a caries detection system using LED technology; its working principle is not fully clear and may be related to organic matter at the bottom of the fissure. It has three modes: the daylight mode can locally magnify the tooth surface 50 times, and the diagnosis mode emits fluorescence at a wavelength of 450 nm, under which normal tooth tissue appears green and abnormal, suspicious carious tissue appears reddish brown. Some studies have shown that the technology has good sensitivity, but its specificity fluctuates over a wide range and is affected by many factors: substances that react to the light, such as plaque, calculus, food residue, pigment, restorative materials, pit and fissure sealant and polishing paste, can influence the reading.
With the development of personal computers, digital imaging has become more efficient and convenient. However, few scholars have applied convolutional neural networks to quantitatively evaluate digital images for the diagnosis of dental caries. Recently, researchers have proposed that digital image processing and recognition techniques can be applied to assess caries with good sensitivity and specificity. In 2015, Berdouses et al. first proposed a computer-aided, fully automatic caries assessment method that can detect caries based on a color image processing system and classify caries with performance similar to or better than that of trained dentists; the algorithm can be updated with knowledge about caries diagnosis by training on more pictures of known classification with appropriate features and inclusion or exclusion rules (A computer-aided automated methodology for the detection and classification of occlusal caries from photographic color images [J]. Computers in Biology and Medicine, 2015, 62: 119-). In 2019, K. Moutselos, I. Maglogiannis et al. proposed a method for identifying and classifying occlusal caries according to ICDAS based on a deep learning architecture using little training data, only 88 cases (Moutselos K, Berdouses E, Oulis C, et al.). There is no literature report on identifying proximal caries in digital images using convolutional neural networks and machine learning techniques.
In summary, the prior art has the following technical disadvantages. (1) It requires expensive external imaging hardware and places certain requirements on external environmental conditions; it can serve as a diagnostic aid in medical institutions but is not suitable for ordinary people to monitor their caries condition daily. (2) At present, the main way to clinically find early caries requires the examinee to go to a medical institution regularly for examination. On the one hand this requires sufficient medical resources, and the professional dentist workforce in China, especially for children, is seriously insufficient; on the other hand, the examinee has to take time to visit the medical institution. In short, a large amount of social resources is consumed and the time of both doctors and patients is inevitably wasted: the current methods suffer from insufficient and wasted resources. (3) There is no precedent for automatically identifying proximal caries in digital photographs based on machine learning.
In conclusion, there is no method that people can use daily to monitor early caries (pit-and-fissure shape relatively intact, appearing as caries on the fissure wall and fissure bottom) and proximal caries (without destruction of the marginal ridge), and the accessibility of the currently recommended approach is poor, consuming a large amount of medical and patient resources.
Disclosure of Invention
The technical problem solved by the invention: overcoming the defects of the prior art and providing a tooth image caries identification method and system based on a convolutional neural network that can perform image identification of early caries more comprehensively.
The technical scheme of the invention is as follows:
The invention relates to a tooth image caries identification method based on a convolutional neural network, which comprises the following steps:
step 1) data acquisition: acquiring, as a data set, electronic medical records related to the oral cavity for a plurality of medical samples from a hospital, the electronic medical records including but not limited to intraoral photographs taken before diagnosis and treatment;
step 2) data labeling: judging each photograph in the data set as carious or non-carious according to a set rule and labeling the sample corresponding to the photograph accordingly, carious samples being positive samples and non-carious samples being negative samples, thereby obtaining a labeled data set;
step 3) data preprocessing: preprocessing the photographs in the labeled data set to obtain a preprocessed data set;
step 4) data division: randomly sampling and dividing the preprocessed data set according to a set positive-to-negative sample ratio and a set data set division ratio, thereby obtaining a training set, a verification set and a test set;
step 5) model definition: constructing a convolutional neural network classifier and defining a loss function;
step 6) model training: initializing the convolutional neural network classifier parameters, iteratively training the classifier on the training set, stopping training when the loss function falls below a set threshold, and using the verification set to optimize the classifier parameters, thereby obtaining a trained and optimized convolutional neural network classifier;
step 7) model evaluation: evaluating the performance indicators of the trained and optimized convolutional neural network classifier using the test set;
step 8) model identification: preprocessing a new pre-treatment intraoral image, inputting the preprocessed image into the trained and optimized convolutional neural network classifier, and having the classifier predict and output the corresponding label category, thereby identifying whether the intraoral image is carious or not.
In step 2), the set rule includes:
a) if the pit-and-fissure shape in the photograph is relatively intact and appears as early pit-and-fissure caries with a carious fissure wall or fissure bottom, or as proximal caries without destruction of the marginal ridge, the photograph is judged as carious;
b) if the photograph contains a cavity or an easily recognized carious portion, it is judged as carious;
c) if the photograph satisfies neither a) nor b), it is judged as non-carious.
In step 2), the photographs in the data set are judged as carious or non-carious using the toothMarking tool.
In step 3), the preprocessing includes image denoising and image size normalization operations.
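As a concrete illustration of this preprocessing step, the following is a minimal sketch assuming OpenCV; the non-local-means denoising filter and the 224 × 224 target size are assumptions chosen for illustration rather than values specified here.

```python
import cv2
import numpy as np

def preprocess_tooth_image(path, target_size=(224, 224)):
    """Denoise and size-normalize an intraoral photograph (sketch; target size is an assumption)."""
    img = cv2.imread(path)
    if img is None:
        raise ValueError(f"Could not read image: {path}")
    img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)   # image denoising
    img = cv2.resize(img, target_size, interpolation=cv2.INTER_AREA)  # image size normalization
    return img.astype(np.float32) / 255.0                             # scale pixels to [0, 1]
```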
In step 4), the set positive-to-negative sample ratio is 1:1, and the set data set division ratio of training set to verification set to test set is 4:1:1.
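One way to realize the 1:1 positive-to-negative balancing and the 4:1:1 train/verification/test split is sketched below, assuming scikit-learn is available; the function name and random seed are illustrative.

```python
import random
from sklearn.model_selection import train_test_split

def split_dataset(positives, negatives, seed=42):
    """Balance positives and negatives 1:1, then split 4:1:1 into train/verification/test (sketch)."""
    random.seed(seed)
    n = min(len(positives), len(negatives))                 # enforce the 1:1 ratio
    samples = random.sample(positives, n) + random.sample(negatives, n)
    labels = [1] * n + [0] * n                              # 1 = carious, 0 = non-carious
    # 4:1:1 -> the test set is 1/6 of all data, the verification set is 1/5 of the remainder
    x_rem, x_test, y_rem, y_test = train_test_split(
        samples, labels, test_size=1 / 6, stratify=labels, random_state=seed)
    x_train, x_val, y_train, y_val = train_test_split(
        x_rem, y_rem, test_size=1 / 5, stratify=y_rem, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```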
In step 5), the convolutional neural network classifier comprises 13 convolutional layers, 5 pooling layers, 1 self-attention mechanism module, 3 fully connected layers and 1 softmax layer, divided into 8 modules from front to back: module 1 and module 2 each comprise 2 convolutional layers and 1 pooling layer; module 3, module 4 and module 5 each comprise 3 convolutional layers and 1 pooling layer; module 6 is the self-attention mechanism module; module 7 comprises the 3 fully connected layers; module 8 comprises the softmax layer. The convolutional layers use 3 × 3 convolution kernels with a stride of 1 and padding of 1, the pooling layers use max pooling, the fully connected layers and convolutional layers use ReLU as the activation function, and the label category identified for a photograph is output through the softmax layer. The self-attention mechanism module comprises, from left to right, three 1 × 1 convolutional layers, a pixel-wise multiplication operation followed by a softmax layer, another pixel-wise multiplication operation, and one 1 × 1 convolutional layer.
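One possible reading of this architecture is sketched below in PyTorch: a VGG-16 feature extractor (13 convolutional and 5 max-pooling layers) followed by a self-attention module, three fully connected layers and a softmax output. The channel widths inside the attention block, its residual connection and its placement after the last pooling layer are assumptions rather than details fixed by this description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class SelfAttention2d(nn.Module):
    """Self-attention (non-local) block: three 1x1 convs, softmax weighting, and a 1x1 output conv."""
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 8, 1)
        self.phi = nn.Conv2d(channels, channels // 8, 1)
        self.g = nn.Conv2d(channels, channels // 2, 1)
        self.out = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2)                                   # (b, c/8, h*w)
        k = self.phi(x).flatten(2)                                     # (b, c/8, h*w)
        v = self.g(x).flatten(2)                                       # (b, c/2, h*w)
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)      # (b, h*w, h*w)
        y = torch.bmm(v, attn.transpose(1, 2)).view(b, -1, h, w)       # non-local re-weighting
        return x + self.out(y)                                         # residual connection (assumption)

class CariesClassifier(nn.Module):
    """Modules 1-5: VGG-16 conv/pool stack; module 6: self-attention; module 7: 3 FC layers; module 8: softmax."""
    def __init__(self, num_classes=2, pretrained=True):
        super().__init__()
        # 13 conv + 5 max-pool layers; newer torchvision versions use the `weights=` argument instead
        self.features = vgg16(pretrained=pretrained).features
        self.attention = SelfAttention2d(512)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, num_classes))                              # logits for carious / non-carious

    def forward(self, x):                                              # x: (batch, 3, 224, 224)
        return self.classifier(self.attention(self.features(x)))

    def predict_proba(self, x):
        return F.softmax(self.forward(x), dim=1)                       # module 8: softmax class probabilities
```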
In step 5), the loss function is a cross-entropy function.
In step 6), the set threshold is 1e-10.
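A sketch of the training procedure of step 6, under the assumption of an Adam optimizer and mini-batch data loaders (neither of which is specified here): iterate over the training set with the cross-entropy loss, stop once the loss falls below the 1e-10 threshold, and keep the parameters that perform best on the verification set.

```python
import torch
import torch.nn as nn

def train_classifier(model, train_loader, val_loader, epochs=50, threshold=1e-10, lr=1e-4):
    """Step 6 sketch: train until the loss drops below the set threshold, keep the best validation weights."""
    criterion = nn.CrossEntropyLoss()                          # cross-entropy loss from step 5
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)    # optimizer choice is an assumption
    best_acc, best_state = 0.0, model.state_dict()
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)            # model outputs logits
            loss.backward()
            optimizer.step()
            if loss.item() < threshold:                        # stop once the loss is below 1e-10
                return best_state
        model.eval()                                           # evaluate on the verification set
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                correct += (model(images).argmax(dim=1) == labels).sum().item()
                total += labels.numel()
        if total and correct / total > best_acc:               # keep the best-performing parameters
            best_acc, best_state = correct / total, model.state_dict()
    return best_state
```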
In step 7), the performance indicators include but are not limited to sensitivity and specificity.
The invention also discloses a tooth image caries identification system based on a convolutional neural network, comprising an image input module, an image preprocessing module, a caries identification module and a result output module;
the image input module is used for inputting a tooth image and sending the tooth image to the image preprocessing module;
the image preprocessing module is used for receiving the tooth image sent by the image input module, preprocessing it to obtain a preprocessed tooth image, and sending the preprocessed tooth image to the caries identification module;
the caries identification module comprises a trained convolutional neural network classifier, which receives the preprocessed tooth image sent by the image preprocessing module and predicts and outputs the corresponding label category;
the result output module receives the label category output by the caries identification module and outputs whether the tooth image is carious or not.
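The four modules could be wired together roughly as in the following sketch; the class and method names are illustrative, and `preprocess_tooth_image` and `predict_proba` refer to the hypothetical helpers sketched earlier.

```python
import torch

class CariesIdentificationSystem:
    """Sketch of the four-module system: image input -> preprocessing -> caries identification -> result output."""
    def __init__(self, classifier, preprocess_fn, class_names=("non-carious", "carious")):
        self.classifier = classifier.eval()   # caries identification module (trained CNN classifier)
        self.preprocess = preprocess_fn       # image preprocessing module
        self.class_names = class_names

    def run(self, image_path):
        image = self.preprocess(image_path)                             # preprocessing module
        tensor = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW
        # (ImageNet mean/std normalization omitted for brevity)
        with torch.no_grad():
            probs = self.classifier.predict_proba(tensor)[0]            # identification module
        label = int(probs.argmax())
        return {"label": self.class_names[label],                       # result output module
                "probability": float(probs[label])}
```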
Compared with the prior art, the invention has the following advantages: through the development and design of a deep learning algorithm, the automatic judgment system can identify early caries (caries on the fissure wall and fissure bottom) and proximal caries (without destruction of the marginal ridge), thereby providing early warning of caries that is relatively difficult to detect, better monitoring oral health, prompting people who need treatment to see a doctor, saving medical and social resources as far as possible, and providing a means of remote oral examination and diagnosis for areas with insufficient oral diagnosis and treatment resources.
Compared with the prior art, the invention has the following technical advantages:
(1) The prior art has high requirements on equipment and needs special devices such as lasers, whereas the invention uses ordinary digital photographic equipment, solving the accessibility problem of hardware at low cost in an era when smartphones are highly popular and oral endoscopes are practical.
(2) The technology uses a large number of digital photographs with definite clinical diagnoses from authoritative oral medical institutions as learning objects, building a solid foundation for the accuracy of the established system's judgment capability.
(3) Compared with other approaches, identifying caries in dental digital photographs with convolutional neural network technology does not require special equipment and has wider applicability.
(4) The method adds convolutional-neural-network-based identification of proximal caries in digital photographs, and the final output is the percentage for each of the two categories (carious and non-carious) together with the judgment accuracy.
In summary, in the technical field of tooth image caries identification, the invention provides a convolutional-neural-network-based technical scheme for identifying caries in digital tooth photographs, in order to solve the technical problem of identifying early caries (pit-and-fissure shape relatively intact, appearing as caries on the fissure wall and fissure bottom; "early pit-and-fissure caries" is used as a combined term) and proximal caries (without destruction of the marginal ridge). Compared with the prior art, it has a relatively higher identification rate and wider applicability, and it has the technical advantage of adding convolutional-neural-network-based identification of proximal caries in digital photographs.
Drawings
FIG. 1 shows a flow chart of the steps of the method of the present invention;
FIG. 2 is the functional interface of the toothMarking tool, a marking tool developed to help oral professionals mark ground-truth data in the data collection stage of the method of the present invention;
FIG. 3 is a schematic diagram of a convolutional neural network classifier;
FIG. 4 shows a parametric quantity histogram of the backbone network of the convolutional neural network classifier.
Detailed Description of the Embodiments:
Embodiments of the system of the present invention are further described below with reference to the drawings.
As shown in FIG. 1, the present invention provides a tooth image caries identification method based on a convolutional neural network, comprising the following steps:
in order to ensure the accuracy of the acquired data, the photos of the children before oral diagnosis and treatment in department of stomatology in the mouth hospital of Beijing university and the electronic medical record records of dental treatment are collected as a data set, and the clinical actual diagnosis is used as a gold standard.
A marking tool, toothMarking, was designed for marking caries lesions. In the toothMarking tool, an oral professional views the pre-treatment ground-truth data, i.e. the digital photographs of the subject's intraoral teeth, and, according to the clinical medical records (the gold standard), marks early pit-and-fissure caries in which the pit-and-fissure shape is relatively intact and the fissure wall or fissure bottom is carious, and proximal caries in which the marginal ridge is not destroyed (these two types are collectively called early caries and are characterized by not being easily found by non-professionals); carious portions that are easily recognized by non-professionals, in which a cavity has already formed, are also marked. In this way a labeled data set of carious teeth is obtained. An equal number of non-carious teeth (including teeth that have undergone complete caries treatment) are marked at the same time to obtain a non-carious labeled data set. The annotation data produced by the oral professionals with the marking tool provide the ground-truth data for the identification method.
The images are then preprocessed, including labeling according to the medical records (gold standard), image denoising, and unifying the image size.
Deep learning is then performed on the labeled data set containing the photographs. A convolutional neural network classifier is constructed, comprising 13 convolutional layers, 5 pooling layers, 1 self-attention mechanism module, 3 fully connected layers and 1 softmax layer, divided into 8 modules from front to back: module 1 and module 2 each comprise 2 convolutional layers and 1 pooling layer; module 3, module 4 and module 5 each comprise 3 convolutional layers and 1 pooling layer; module 6 is the self-attention mechanism module; module 7 comprises the 3 fully connected layers; module 8 comprises the softmax layer. The convolutional layers use 3 × 3 convolution kernels with a stride of 1 and padding of 1, the pooling layers use max pooling, the fully connected layers and convolutional layers use ReLU as the activation function, and the label category identified for a photograph is output through the softmax layer. The self-attention mechanism module comprises, from left to right, three 1 × 1 convolutional layers, a pixel-wise multiplication operation followed by a softmax layer, another pixel-wise multiplication operation, and one 1 × 1 convolutional layer. For each tooth image, feature extraction is performed in the backbone of the convolutional neural network classifier, local-to-global redistribution of the features of the carious region (texture, lesion area size, lesion edge features, etc.) is realized through the self-attention mechanism module, a cross-entropy loss function is defined, and carious images are distinguished. The identification results of this method on the test set are compared with the diagnosis information in the electronic medical records, and the sensitivity and specificity of the method are calculated. More samples from the electronic medical records are added to the training set for continuous optimization and learning, continuously improving the accuracy of the algorithm so that the method can provide early warning of the degree of dental caries. Based on this method, an identification system that can automatically judge early caries from photographs is established, for example as a mobile app, with continuous improvement and further deep learning, so as to warn and remind users of early caries of the teeth.
As shown in FIG. 2, the toothMarking tool is built mainly with Microsoft's Windows Forms; its core algorithm uses the "Intelligent Scissors" algorithm, an interactive image segmentation tool proposed by Mortensen and Barrett in 1995 (Intelligent scissors for image composition [C]. Proceedings of the 22nd annual conference on Computer graphics and interactive techniques. ACM, 1995: 191-198.) that can be used for 2D image segmentation and with which a user can easily and accurately outline a region of interest (ROI) in an image. In the toothMarking tool, an oral professional views the pre-treatment intraoral photographs in the data set and marks them as carious or non-carious.
As shown in FIG. 3, the convolutional neural network classifier comprises two parts: global and local feature extraction, and softmax classification. The construction and application of the classifier comprise the following steps:
The Vgg16 network structure is used as the backbone for feature extraction. Vgg16 is a convolutional neural network model proposed by Simonyan and Zisserman in 2014 (Very Deep Convolutional Networks for Large-Scale Image Recognition [J]). Its structure is simple, with 16 weight layers in total, namely 13 convolutional layers and 3 fully connected layers, and it is mainly used for classification, object detection and the like. Because the caries classification task is more concerned with extracting features of the carious region, the network is correspondingly improved as follows:
1. In the output part, softmax is used to ensure that the output categories are the two classes carious and non-carious;
2. To better preserve the fitting capability of the network model for tooth images, the original network structure and parameters are kept unchanged; as shown in FIG. 4, since the scale of the convolution kernels used in the backbone is kept consistent, the number of parameters in the network increases with network depth;
3. To extract global and local context features from the caries image, a self-attention mechanism module is added to the backbone to re-weight the caries image features.
The operation of the self-attention mechanism module can be expressed as y = softmax(θ(x)^T φ(x)) g(x), where x is the feature map extracted by the convolutional layers, y is the output of the self-attention mechanism module, θ(x) = W_θ x and φ(x) = W_φ x with W_θ and W_φ the parameters to be learned in the 1 × 1 convolutions, and g(x) is a linear embedding, g(x) = W_g x, with W_g also a parameter to be learned in a 1 × 1 convolution. This operation applies non-local weighting to the feature map extracted by the first 5 modules of the neural network classifier, so that the feature map retains attention to local features while increasing attention to non-local features. Applying the self-attention mechanism module to re-weight the caries image features, i.e. re-weighting the feature maps during network training, makes the carious parts of the image receive more attention; by acquiring and attending to information from local to global, the self-attention module improves the accuracy of the network model's judgment.
4. The model compares the loss between the predicted label and the ground-truth data of the tooth image and iteratively optimizes the network model using the loss function.
Comparing the loss between the predicted label and the ground-truth data of the tooth image means comparing them with the cross-entropy loss function;
5. The pre-trained network parameters are used for fast retraining and fine-tuning on the tooth image training set.
Retraining means that the training set and its corresponding ground-truth data are used as network input, a model trained by the Vgg16 network on the ImageNet data set is used as the pre-training parameters, the loss is calculated with the cross-entropy loss function after each forward pass and back-propagated, and this process is iterated so that the loss decreases continuously; training of the model is completed when the difference between two successive losses is smaller than the set threshold or the set number of training iterations is reached. During training, the classifier model is evaluated on the verification set after every fixed number of iterations, and the model that performs best on the verification set is saved. Fine-tuning means that the trained classifier parameters are used as new pre-training parameters and the network is trained again on the training set with the first 7 modules frozen, i.e. the network parameters in the first 7 modules are no longer optimized and only the fully connected layers in the last module are trained, so that the structural parameters of the classifier are further optimized and the model achieves a better effect.
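A sketch of the fine-tuning stage, reusing the attribute names of the earlier classifier sketch: the feature-extraction and attention modules are frozen and only the fully connected head is trained. Reading "the first 7 modules" as the convolutional/pooling modules plus the attention block is an interpretation, not something fixed by the text.

```python
import torch

def fine_tune(model, train_loader, epochs=10, lr=1e-5):
    """Fine-tuning sketch: freeze the feature-extraction modules, train only the fully connected head."""
    for p in model.features.parameters():        # freeze the 13 conv + 5 pooling layers (modules 1-5)
        p.requires_grad = False
    for p in model.attention.parameters():       # freeze the self-attention module (module 6)
        p.requires_grad = False
    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=lr)   # optimize only the FC layers
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```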
As illustrated in FIG. 4, since the scale of the convolution kernels used in the backbone Vgg16 is kept consistent, the number of parameters in the backbone Vgg16 increases with network depth.
The experimental results of the caries identification method use sensitivity and specificity as evaluation indicators. Sensitivity, also called the true positive rate, is calculated as Sensitivity = TP / (TP + FN), where TP is the number of true positive samples and FN is the number of false negative samples; sensitivity measures the ability of the caries identification system to recognize positive examples. Specificity, also called the true negative rate, is calculated as Specificity = TN / (FP + TN), where TN is the number of true negative samples and FP is the number of false positive samples; specificity measures the ability of the system to recognize negative examples. There are different diagnostic systems and standards for caries; in this study the clinical diagnosis is used as the gold standard, a diagnosis made by a clinician who, after integrating all clinical information about the actual condition of the patient, concludes that the diagnosed tooth needs treatment. This is also the entry point of the study: the object being identified is caries that requires clinical treatment, and particularly minor caries that does not yet require clinical treatment is not diagnosed as early caries.
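Both indicators can be computed directly from the confusion counts, as in the following sketch (labels: 1 = carious/positive, 0 = non-carious/negative).

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Example: sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 1]) -> (0.5, 0.5)
```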
The invention also provides a tooth image caries identification system based on a convolutional neural network, which comprises an image input module, an image preprocessing module, a caries identification module and a result output module;
the image input module is used for inputting a tooth image and sending the tooth image to the image preprocessing module;
the image preprocessing module is used for receiving the tooth image sent by the image input module, preprocessing it to obtain a preprocessed tooth image, and sending the preprocessed tooth image to the caries identification module;
the caries identification module comprises a trained convolutional neural network classifier, which receives the preprocessed tooth image sent by the image preprocessing module and predicts and outputs the corresponding label category;
the result output module receives the label category output by the caries identification module and outputs whether the tooth image is carious or not.
The above embodiments are provided only for describing the present invention and are not intended to limit its scope. The scope of the invention is defined by the appended claims. Various equivalent substitutions and modifications can be made without departing from the spirit and principles of the invention, and these are intended to fall within the scope of the invention.
Claims (10)
1. A tooth image caries identification method based on a convolutional neural network, characterized in that the method comprises the following steps:
step 1) data acquisition: acquiring, as a data set, electronic medical records related to the oral cavity for a plurality of medical samples from a hospital, the electronic medical records comprising intraoral photographs taken before diagnosis and treatment;
step 2) data labeling: judging each photograph in the data set as carious or non-carious and labeling the sample corresponding to the photograph accordingly, carious samples being positive samples and non-carious samples being negative samples, thereby obtaining a labeled data set;
step 3) data preprocessing: preprocessing the photographs in the labeled data set to obtain a preprocessed data set;
step 4) data division: randomly sampling and dividing the preprocessed data set according to a set positive-to-negative sample ratio and a set data set division ratio, thereby obtaining a training set, a verification set and a test set;
step 5) model definition: constructing a convolutional neural network classifier and defining a loss function;
step 6) model training: initializing the convolutional neural network classifier parameters, iteratively training the classifier on the training set, stopping training when the loss function falls below a set threshold, and using the verification set to optimize the classifier parameters, thereby obtaining a trained and optimized convolutional neural network classifier;
step 7) model evaluation: evaluating the performance indicators of the trained and optimized convolutional neural network classifier using the test set;
step 8) model identification: preprocessing a new pre-treatment intraoral image, inputting the preprocessed image into the trained and optimized convolutional neural network classifier, and having the classifier predict and output the corresponding label category, thereby identifying whether the intraoral image is carious or not.
2. The tooth image caries identification method based on a convolutional neural network according to claim 1, characterized in that in step 2), the judging is specifically implemented as follows:
a) if the pit-and-fissure shape in the photograph is relatively intact and appears as early pit-and-fissure caries with a carious fissure wall or fissure bottom, or as proximal caries without destruction of the marginal ridge, the photograph is judged as carious;
b) if the photograph contains a cavity or an easily recognized carious portion, it is judged as carious;
c) if the photograph satisfies neither a) nor b), it is judged as non-carious.
3. The tooth image caries identification method based on a convolutional neural network according to claim 1, characterized in that in step 2), the photographs in the data set are judged as carious or non-carious using the toothMarking tool.
4. The tooth image caries identification method based on a convolutional neural network according to claim 1, characterized in that in step 3), the preprocessing includes image denoising and image size normalization operations.
5. The tooth image caries identification method based on a convolutional neural network according to claim 1, characterized in that in step 4), the set positive-to-negative sample ratio is 1:1, and the set data set division ratio of training set to verification set to test set is 4:1:1.
6. The tooth image caries identification method based on a convolutional neural network according to claim 1, characterized in that in step 5), the convolutional neural network classifier comprises 13 convolutional layers, 5 pooling layers, 1 self-attention mechanism module, 3 fully connected layers and 1 softmax layer, divided into 8 modules from front to back: module 1 and module 2 each comprise 2 convolutional layers and 1 pooling layer; module 3, module 4 and module 5 each comprise 3 convolutional layers and 1 pooling layer; module 6 is the self-attention mechanism module; module 7 comprises the 3 fully connected layers; module 8 comprises the softmax layer; the convolutional layers use 3 × 3 convolution kernels with a stride of 1 and padding of 1, the pooling layers use max pooling, the fully connected layers and convolutional layers use ReLU as the activation function, and the label category identified for a photograph is output through the softmax layer; the self-attention mechanism module comprises, from left to right, three 1 × 1 convolutional layers, a pixel-wise multiplication operation followed by a softmax layer, another pixel-wise multiplication operation, and one 1 × 1 convolutional layer.
7. The tooth image caries identification method based on a convolutional neural network according to claim 1, characterized in that in step 5), the loss function is a cross-entropy function.
8. The tooth image caries identification method based on a convolutional neural network according to claim 1, characterized in that in step 6), the set threshold is 1e-10.
9. The tooth image caries identification method based on a convolutional neural network according to claim 1, characterized in that in step 7), the performance indicators include but are not limited to sensitivity and specificity.
10. A tooth image caries identification system based on a convolutional neural network, characterized by comprising an image input module, an image preprocessing module, a caries identification module and a result output module;
the image input module is used for inputting a tooth image and sending the tooth image to the image preprocessing module;
the image preprocessing module is used for receiving the tooth image sent by the image input module, preprocessing it to obtain a preprocessed tooth image, and sending the preprocessed tooth image to the caries identification module;
the caries identification module comprises a trained convolutional neural network classifier, which receives the preprocessed tooth image sent by the image preprocessing module and predicts and outputs the corresponding label category;
the result output module receives the label category output by the caries identification module and outputs whether the tooth image is carious or not.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010690021.6A CN111798445B (en) | 2020-07-17 | 2020-07-17 | Tooth image caries identification method and system based on convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010690021.6A CN111798445B (en) | 2020-07-17 | 2020-07-17 | Tooth image caries identification method and system based on convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111798445A true CN111798445A (en) | 2020-10-20 |
CN111798445B CN111798445B (en) | 2023-10-31 |
Family
ID=72807577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010690021.6A Active CN111798445B (en) | 2020-07-17 | 2020-07-17 | Tooth image caries identification method and system based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111798445B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112598603A (en) * | 2021-02-01 | 2021-04-02 | 福建医科大学附属口腔医院 | Oral cavity caries image intelligent identification method based on convolution neural network |
CN113379697A (en) * | 2021-06-06 | 2021-09-10 | 湖南大学 | Color image caries identification method based on deep learning |
CN113679500A (en) * | 2021-07-29 | 2021-11-23 | 广州华视光学科技有限公司 | AI algorithm-based caries and dental plaque detection and distribution method |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101528116A (en) * | 2006-09-12 | 2009-09-09 | 卡尔斯特里姆保健公司 | Apparatus for caries detection |
CN106875386A (en) * | 2017-02-13 | 2017-06-20 | 苏州江奥光电科技有限公司 | A kind of method for carrying out dental health detection automatically using deep learning |
CN109214437A (en) * | 2018-08-22 | 2019-01-15 | 湖南自兴智慧医疗科技有限公司 | A kind of IVF-ET early pregnancy embryonic development forecasting system based on machine learning |
CN109712703A (en) * | 2018-12-12 | 2019-05-03 | 上海牙典软件科技有限公司 | A kind of correction prediction technique and device based on machine learning |
CN109859203A (en) * | 2019-02-20 | 2019-06-07 | 福建医科大学附属口腔医院 | Defect dental imaging recognition methods based on deep learning |
CN109948619A (en) * | 2019-03-12 | 2019-06-28 | 北京羽医甘蓝信息技术有限公司 | The method and apparatus of whole scenery piece dental caries identification based on deep learning |
CN110210391A (en) * | 2019-05-31 | 2019-09-06 | 合肥云诊信息科技有限公司 | Tongue picture grain quantitative analysis method based on multiple dimensioned convolutional neural networks |
CN110309331A (en) * | 2019-07-04 | 2019-10-08 | 哈尔滨工业大学(深圳) | A kind of cross-module state depth Hash search method based on self-supervisory |
GB201912054D0 (en) * | 2018-11-13 | 2019-10-09 | Adobe Inc | Object detection in images |
US20190333627A1 (en) * | 2018-04-25 | 2019-10-31 | Sota Precision Optics, Inc. | Dental imaging system utilizing artificial intelligence |
CN110400579A (en) * | 2019-06-25 | 2019-11-01 | 华东理工大学 | Based on direction from the speech emotion recognition of attention mechanism and two-way length network in short-term |
CN110517784A (en) * | 2019-08-01 | 2019-11-29 | 中国医科大学附属口腔医院 | A kind of aged people dental morbidity forecasting system based on generalized regression nerve networks |
US20190384970A1 (en) * | 2018-06-13 | 2019-12-19 | Sap Se | Image data extraction using neural networks |
CN110610129A (en) * | 2019-08-05 | 2019-12-24 | 华中科技大学 | Deep learning face recognition system and method based on self-attention mechanism |
EP3591616A1 (en) * | 2018-07-03 | 2020-01-08 | Promaton Holding B.V. | Automated determination of a canonical pose of a 3d dental structure and superimposition of 3d dental structures using deep learning |
CN110727765A (en) * | 2019-10-10 | 2020-01-24 | 合肥工业大学 | Problem classification method and system based on multi-attention machine mechanism and storage medium |
KR20200014624A (en) * | 2018-08-01 | 2020-02-11 | 연세대학교 산학협력단 | Method for predicting dental caries area and device for predicting dental caries area using the same |
CN110826565A (en) * | 2019-11-01 | 2020-02-21 | 北京中科芯健医疗科技有限公司 | Cross-connection-based convolutional neural network tooth mark tongue picture classification method and system |
CN111008618A (en) * | 2019-10-29 | 2020-04-14 | 黄山学院 | Self-attention deep learning end-to-end pedestrian re-identification method |
WO2020080819A1 (en) * | 2018-10-16 | 2020-04-23 | 주식회사 큐티티 | Oral health prediction apparatus and method using machine learning algorithm |
CN111062964A (en) * | 2019-11-28 | 2020-04-24 | 深圳市华尊科技股份有限公司 | Image segmentation method and related device |
- 2020-07-17: CN202010690021.6A (CN) filed; patent CN111798445B, status Active
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101528116A (en) * | 2006-09-12 | 2009-09-09 | 卡尔斯特里姆保健公司 | Apparatus for caries detection |
CN106875386A (en) * | 2017-02-13 | 2017-06-20 | 苏州江奥光电科技有限公司 | A kind of method for carrying out dental health detection automatically using deep learning |
US20190333627A1 (en) * | 2018-04-25 | 2019-10-31 | Sota Precision Optics, Inc. | Dental imaging system utilizing artificial intelligence |
US20190384970A1 (en) * | 2018-06-13 | 2019-12-19 | Sap Se | Image data extraction using neural networks |
EP3591616A1 (en) * | 2018-07-03 | 2020-01-08 | Promaton Holding B.V. | Automated determination of a canonical pose of a 3d dental structure and superimposition of 3d dental structures using deep learning |
KR20200014624A (en) * | 2018-08-01 | 2020-02-11 | 연세대학교 산학협력단 | Method for predicting dental caries area and device for predicting dental caries area using the same |
CN109214437A (en) * | 2018-08-22 | 2019-01-15 | 湖南自兴智慧医疗科技有限公司 | A kind of IVF-ET early pregnancy embryonic development forecasting system based on machine learning |
WO2020080819A1 (en) * | 2018-10-16 | 2020-04-23 | 주식회사 큐티티 | Oral health prediction apparatus and method using machine learning algorithm |
GB201912054D0 (en) * | 2018-11-13 | 2019-10-09 | Adobe Inc | Object detection in images |
CN109712703A (en) * | 2018-12-12 | 2019-05-03 | 上海牙典软件科技有限公司 | A kind of correction prediction technique and device based on machine learning |
CN109859203A (en) * | 2019-02-20 | 2019-06-07 | 福建医科大学附属口腔医院 | Defect dental imaging recognition methods based on deep learning |
CN109948619A (en) * | 2019-03-12 | 2019-06-28 | 北京羽医甘蓝信息技术有限公司 | The method and apparatus of whole scenery piece dental caries identification based on deep learning |
CN110210391A (en) * | 2019-05-31 | 2019-09-06 | 合肥云诊信息科技有限公司 | Tongue picture grain quantitative analysis method based on multiple dimensioned convolutional neural networks |
CN110400579A (en) * | 2019-06-25 | 2019-11-01 | 华东理工大学 | Based on direction from the speech emotion recognition of attention mechanism and two-way length network in short-term |
CN110309331A (en) * | 2019-07-04 | 2019-10-08 | 哈尔滨工业大学(深圳) | A kind of cross-module state depth Hash search method based on self-supervisory |
CN110517784A (en) * | 2019-08-01 | 2019-11-29 | 中国医科大学附属口腔医院 | A kind of aged people dental morbidity forecasting system based on generalized regression nerve networks |
CN110610129A (en) * | 2019-08-05 | 2019-12-24 | 华中科技大学 | Deep learning face recognition system and method based on self-attention mechanism |
CN110727765A (en) * | 2019-10-10 | 2020-01-24 | 合肥工业大学 | Problem classification method and system based on multi-attention machine mechanism and storage medium |
CN111008618A (en) * | 2019-10-29 | 2020-04-14 | 黄山学院 | Self-attention deep learning end-to-end pedestrian re-identification method |
CN110826565A (en) * | 2019-11-01 | 2020-02-21 | 北京中科芯健医疗科技有限公司 | Cross-connection-based convolutional neural network tooth mark tongue picture classification method and system |
CN111062964A (en) * | 2019-11-28 | 2020-04-24 | 深圳市华尊科技股份有限公司 | Image segmentation method and related device |
Non-Patent Citations (4)
Title |
---|
A. Tewari et al.: "StyleRig: Rigging StyleGAN for 3D Control Over Portrait Images", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *
Ma and S. Liang: "Human-Object Relation Network For Action Recognition In Still Images", 2020 IEEE International Conference on Multimedia and Expo (ICME) *
Li Long: "Research on Human Skeleton Point Action Recognition Methods Fused with an Attention Mechanism", China Master's Theses Full-text Database (Master)
Yang Xicheng: "OCT Internal and External Fingerprint Extraction Algorithm Based on a Fully Convolutional Neural Network", China Master's Theses Full-text Database (Master), Information Science and Technology *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112598603A (en) * | 2021-02-01 | 2021-04-02 | 福建医科大学附属口腔医院 | Oral cavity caries image intelligent identification method based on convolution neural network |
CN113379697A (en) * | 2021-06-06 | 2021-09-10 | 湖南大学 | Color image caries identification method based on deep learning |
CN113379697B (en) * | 2021-06-06 | 2022-03-25 | 湖南大学 | Color image caries identification method based on deep learning |
CN113679500A (en) * | 2021-07-29 | 2021-11-23 | 广州华视光学科技有限公司 | AI algorithm-based caries and dental plaque detection and distribution method |
Also Published As
Publication number | Publication date |
---|---|
CN111798445B (en) | 2023-10-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Mohammad-Rahimi et al. | Deep learning for caries detection: A systematic review | |
Muresan et al. | Teeth detection and dental problem classification in panoramic X-ray images using deep learning and image processing techniques | |
Tuan et al. | Dental diagnosis from X-ray images: an expert system based on fuzzy computing | |
Chauhan et al. | An overview of image processing for dental diagnosis | |
Huang et al. | A review of deep learning in dentistry | |
CN111798445B (en) | Tooth image caries identification method and system based on convolutional neural network | |
Musri et al. | Deep learning convolutional neural network algorithms for the early detection and diagnosis of dental caries on periapical radiographs: A systematic review | |
Ghaedi et al. | An automated dental caries detection and scoring system for optical images of tooth occlusal surface | |
ALbahbah et al. | Detection of caries in panoramic dental X-ray images using back-propagation neural network | |
CN113516639B (en) | Training method and device for oral cavity abnormality detection model based on panoramic X-ray film | |
Antolin et al. | Tooth condition classification for dental charting using convolutional neural network and image processing | |
Ghaffari et al. | A Review of Advancements of Artificial Intelligence in Dentistry | |
Vimalarani et al. | Automatic diagnosis and detection of dental caries in bitewing radiographs using pervasive deep gradient based LeNet classifier model | |
Olsen et al. | An image-processing enabled dental caries detection system | |
Zhao et al. | Recognition and segmentation of teeth and mandibular nerve canals in panoramic dental X-rays by Mask RCNN | |
Vashisht et al. | ARTIFICIAL INTELLIGENCE IN DENTISTRY-A SCOPING REVIEW | |
Jayasinghe et al. | Effectiveness of Using Radiology Images and Mask R-CNN for Stomatology | |
Kumar et al. | A Comparative Study of Machine Learning Regression Approach on Dental Caries Detection | |
Vaccaro et al. | Dental tissue classification using computational intelligence and digital image analysis | |
Keser et al. | A deep learning approach to detection of oral cancer lesions from intra oral patient images: A preliminary retrospective study | |
Velusamy et al. | Faster Region‐based Convolutional Neural Networks with You Only Look Once multi‐stage caries lesion from oral panoramic X‐ray images | |
Erkan et al. | Objective characterization of dental occlusal and fissure morphologies: Method development and exploratory analysis | |
Samiappan et al. | Analysis of Dental X-Ray Images for the Diagnosis and Classification of Oral Conditions | |
AlSayyed et al. | Employing CNN ensemble models in classifying dental caries using oral photographs | |
Pang et al. | Establishment and evaluation of a deep learning-based tooth wear severity grading system using intraoral photographs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |