CN113160151A - Panoramic film dental caries depth identification method based on deep learning and attention mechanism

Panoramic film dental caries depth identification method based on deep learning and attention mechanism

Info

Publication number
CN113160151A
Authority
CN
China
Prior art keywords
attention
decayed tooth
oral
module
panoramic
Prior art date
Legal status
Granted
Application number
CN202110360226.2A
Other languages
Chinese (zh)
Other versions
CN113160151B (en)
Inventor
叶冠琛
曹政
朱海华
朱赴东
陈谦明
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202110360226.2A
Publication of CN113160151A
Application granted
Publication of CN113160151B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/45 - For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B 5/4538 - Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B 5/4542 - Evaluating the mouth, e.g. the jaw
    • A61B 5/4547 - Evaluating teeth
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 - Details of waveform analysis
    • A61B 5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 - Classification of physiological signals or data involving training the classification device
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/047 - Probabilistic or stochastic networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Rheumatology (AREA)
  • Fuzzy Systems (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a panoramic film dental caries depth identification method based on deep learning and an attention mechanism, which grades the different stages of carious lesions in an oral panoramic film using a deep learning model that incorporates an attention mechanism. The whole caries grading system consists of three modules: a segmentation network module, an attention extraction module and a classification module. The method automatically identifies carious regions in the oral panoramic film and outputs the lesion stage of each, which assists accurate grading, provides a basis for preventing the occurrence and development of caries, and has important clinical and social significance for the maintenance of oral health.

Description

Panoramic film dental caries depth identification method based on deep learning and attention mechanism
Technical Field
The invention belongs to the medical field and relates to a panoramic film dental caries depth identification method based on deep learning and an attention mechanism.
Background
Dental caries is a progressive lesion of hard dental tissue caused by the combined action of multiple factors in the oral cavity; it is a major oral disease and one of the most common diseases in humans. Clinically, caries changes the color, shape and texture of the tooth, the changes in color and shape being the result of the change in texture. As the disease progresses, the lesion advances from the enamel into the dentin, and the tissue is continuously destroyed and disintegrates, gradually forming a cavity. According to the extent of the lesion, caries is divided into three stages: shallow, moderate and deep. With the public's growing demand for oral health, more and more people visit hospitals or clinics for oral consultation or treatment. However, because patients are many and diagnosis time is limited, doctors sometimes have to prioritize symptomatic teeth and may overlook other, less conspicuous carious teeth, which then develop further and increase both the difficulty and the cost of treatment. The panoramic film is the most common auxiliary examination in clinical dentistry: it covers all teeth in the mouth, is inexpensive, and involves a small radiation dose. With the advance of modern medicine, how to use electronic information technology to achieve automatic detection and classification based on oral panoramic films has become a popular research topic.
As an important component of artificial intelligence, deep learning has performed well in computer-aided medical diagnosis. Compared with traditional methods, deep learning achieves better generalization by learning features from larger amounts of data. A deep neural network can extract features automatically, without manual feature selection, and classify the extracted features through its fully connected layers; feature extraction and classification are thus combined, yielding results superior to those of traditional methods.
However, most existing deep learning methods identify caries from images processed only with simple data augmentation. First, the input images in the training data are fed into a neural network composed of a series of convolutional and fully connected layers; then the trained network parameters are saved; finally, the trained network is used to predict on the test data and produce the caries classification result. Because of the black-box nature of neural networks, such methods may lose small but informative features, highlight only the dominant ones, and fail to accurately identify carious regions with a subtle presentation.
Therefore, developing an improved deep learning algorithm for accurate automatic identification of caries is a problem that urgently needs to be solved in this field.
Disclosure of Invention
The aim of the invention is to provide a panoramic film caries depth identification method based on deep learning and an attention mechanism, building on existing deep learning approaches to caries grading, and to provide an innovative algorithm that assists caries classification through an attention mechanism and feature fusion. The evaluation metrics of the grading system are improved while running speed and generalization performance are preserved.
The invention feeds the oral panoramic film into a segmentation network as input data, computes the corresponding attention features from each layer of feature output of the segmentation network, and finally fuses the attention features with the original convolutional features, so that during segmentation the network's attention is focused on the fine feature regions that are useful for classification. This improves the accurate segmentation of carious lesions in fine regions; a classification network then extracts and classifies the features of the segmentation network, effectively improving the accuracy of caries classification. The whole caries grading system consists of three modules: a segmentation network module, an attention extraction module and a classification module.
To achieve this aim, the invention provides the following technical steps:
1. data preprocessing: carrying out data marking and standardized preprocessing by using the oral panoramic film in the database to prepare a training set; specifically, (1) collect clear oral panoramic picture of picture from the radio examination database, regulate luminance and contrast in unison; (2) the carious part of the carious tooth in the oral cavity panoramic film is artificially marked.
2. Caries region segmentation: in the segmentation network module, a U-Net segmentation network segments the normal teeth and carious parts in the oral panoramic film, separates the oral background region from the target region, and extracts the corresponding image patches. The collected oral panoramic films are digitally sampled, and the segmentation network module then separates and extracts the background and target regions.
3. Attention computation: the attention extraction module computes the attention feature map that the neural network captures for the caries position in the corresponding U-Net segmentation network. Specifically: (1) the feature map generated by the segmentation network is taken as input, and the attention extraction module produces a weighted feature map, i.e. the feature map representing attention in deep learning; (2) a loss value is computed through the loss function from the feature map and the label of the corresponding original image, and this loss value is used as part of the total loss.
4. Caries degree grading: in the classification module, the extracted image patches output by the object segmentation network are combined with the feature maps output by the attention module, and a ResNet classification network grades the degree of the carious lesion.
Specifically: (1) the features of the intermediate layers of the segmentation network module and the attention extraction module are extracted as the input data of this module, and the attention weights of the input data are obtained through a convolutional neural network; (2) different attention is applied to the two inputs respectively to obtain attention features; (3) the two attention feature maps are fused into a new feature map, which is then classified to obtain the final prediction result.
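As a concrete illustration of the preprocessing step above, the following is a minimal sketch, assuming a PyTorch/torchvision pipeline; the file path, enhancement factors and 572 × 572 target size are illustrative choices, not values fixed by the patent.

```python
from PIL import Image, ImageEnhance
import torchvision.transforms as T

TARGET_SIZE = (572, 572)  # input size expected by the segmentation network

def preprocess_panoramic(path: str, brightness: float = 1.0, contrast: float = 1.2):
    """Load a panoramic film, harmonize brightness/contrast, resize, convert to tensor."""
    img = Image.open(path).convert("L")                  # panoramic films are grayscale
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    return T.Compose([T.Resize(TARGET_SIZE), T.ToTensor()])(img)  # (1, 572, 572) in [0, 1]
```

The manually annotated caries masks would be resized with the same geometry so that image and label stay aligned.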
The attention extraction module takes the feature map generated by the segmentation network as input; after this feature map is classified, a probability vector over 3 categories is obtained, and a loss value is computed through the loss function from this vector and the label of the corresponding original image; this loss value is used as part of the total loss. The original-image classification network module takes the original image as input; after the original-image classification network, a probability vector over 3 categories is obtained, and a loss value is computed through the loss function from this vector and the label of the original image; this loss value is used as another part of the same total loss. A minimal sketch of this composite loss follows.
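The sketch below shows one way to combine the two branch losses, assuming cross-entropy for both branches and a simple weighted sum; the weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def total_loss(attn_logits: torch.Tensor,
               orig_logits: torch.Tensor,
               labels: torch.Tensor,
               w_attn: float = 0.5,
               w_orig: float = 0.5) -> torch.Tensor:
    """Combine the attention-branch and original-image-branch classification losses."""
    loss_attn = F.cross_entropy(attn_logits, labels)  # 3-class loss from attention features
    loss_orig = F.cross_entropy(orig_logits, labels)  # 3-class loss from the original image
    return w_attn * loss_attn + w_orig * loss_orig    # both parts of the same total loss
```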
In step 4, the classification module extracts the features of the intermediate layers of the segmentation network module and the attention extraction module as its input data, obtains the attention weights of the inputs through a convolutional neural network, applies different attention to the two inputs to obtain attention feature maps, fuses the two attention feature maps into a new feature map, and finally classifies the newly generated feature map to obtain the final prediction result. A sketch of this fuse-and-classify step follows.
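A minimal sketch of the fuse-and-classify step, assuming PyTorch; the channel counts, the 1 × 1 convolutional attention sub-networks and the 3-class linear head are illustrative assumptions rather than the patent's exact configuration.

```python
import torch
import torch.nn as nn

class FuseAndClassify(nn.Module):
    """Apply separate attention to two feature inputs, fuse them, and classify."""
    def __init__(self, seg_ch: int = 64, attn_ch: int = 64, num_classes: int = 3):
        super().__init__()
        # small conv nets producing a spatial attention weight map for each input
        self.weight_seg = nn.Sequential(nn.Conv2d(seg_ch, 1, 1), nn.Sigmoid())
        self.weight_attn = nn.Sequential(nn.Conv2d(attn_ch, 1, 1), nn.Sigmoid())
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(seg_ch + attn_ch, num_classes),
        )

    def forward(self, f_seg: torch.Tensor, f_attn: torch.Tensor) -> torch.Tensor:
        a_seg = f_seg * self.weight_seg(f_seg)        # attention on segmentation features
        a_attn = f_attn * self.weight_attn(f_attn)    # attention on attention-module features
        fused = torch.cat([a_seg, a_attn], dim=1)     # fuse the two attention feature maps
        return self.head(fused)                       # logits over the 3 caries grades
```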
The invention has the following advantages:
1. Unlike traditional approaches that predict the caries grade directly with plain deep learning, the invention introduces an attention mechanism on top of a plain deep learning prediction system, segments the carious region with a segmentation network, and then grades it with a classification network, which makes the system more stable.
2. Thanks to the attention mechanism, the system can reliably identify and grade carious regions whose features are small and hard to grade, which greatly improves the overall performance of the system.
3. The invention requires no manual intervention during use; it is a fully automatic grading system based on an attention mechanism and feature fusion.
Drawings
Fig. 1 is the overall workflow diagram.
Fig. 2 is a diagram of the segmentation network module.
Fig. 3 is a diagram of the attention extraction module.
Fig. 4 is a block diagram of the classification network.
Detailed Description
The whole process of the panoramic film caries depth identification method based on deep learning and an attention mechanism is described below with reference to the drawings and an embodiment.
In the method, caries grading is decomposed into three interrelated modules, which together realize a caries grading system based on an attention mechanism.
Example 1
Referring to fig. 1, the workflow of the whole identification method is as follows: first, the original oral panoramic image is uniformly resized to 572 × 572 and loaded into the segmentation network module as input data. The feature map output by the segmentation network module is then fed into the attention extraction module to obtain the corresponding attention feature map. Finally, the feature maps output by the segmentation network module and by the attention mechanism module are fed into the classification network module to obtain the final caries classification result, as illustrated by the sketch below.
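The following rough sketch chains the three modules end to end; it assumes the illustrative classes UNet, SpatialAttention and FuseAndClassify defined in the module sketches later in this description and the preprocess_panoramic helper from the preprocessing sketch, and is not the patent's exact code.

```python
import torch

x = preprocess_panoramic("panoramic.png").unsqueeze(0)    # (1, 1, 572, 572) batch
seg_net, attn_mod, classifier = UNet(), SpatialAttention(), FuseAndClassify()

seg_features, seg_logits = seg_net(x)         # decoder features + per-pixel caries mask logits
attn_features = attn_mod(seg_features)        # attention-weighted feature maps
logits = classifier(seg_features, attn_features)
grade = torch.argmax(logits, dim=1)           # 0: shallow, 1: moderate, 2: deep caries
```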
The operation of each module of the invention is described below with reference to figs. 2 to 4.
Fig. 2 illustrates the segmentation network module. The input to the network is a 572 × 572 image. The left side of the network is a series of downsampling operations composed of convolution and max pooling, organized into 4 blocks; each block applies two 3 × 3 valid (unpadded) convolutions followed by one 2 × 2 max-pooling downsampling, and the number of feature maps is doubled after each downsampling, finally yielding a 32 × 32 feature map at the bottom of the network. The right part of the network also consists of 4 blocks; before each block starts, the size of the feature map is doubled by deconvolution while the number of feature maps is halved, and it is then merged with the (center-cropped) feature map from the symmetric compression path on the left. Finally a 388 × 388 feature map is obtained, which is the segmentation result of the network.
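A compact U-Net sketch matching the dimensions described above (572 × 572 input, valid 3 × 3 convolutions, 4 down/4 up blocks, 388 × 388 output); this is an illustrative reconstruction under those assumptions, not the patent's exact network definition.

```python
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 valid (unpadded) convolutions, each followed by ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3), nn.ReLU(inplace=True),
    )

def center_crop(t: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """Crop an encoder feature map to the decoder's spatial size (valid convs shrink maps)."""
    dh, dw = (t.size(2) - h) // 2, (t.size(3) - w) // 2
    return t[:, :, dh:dh + h, dw:dw + w]

class UNet(nn.Module):
    def __init__(self, in_ch: int = 1, num_classes: int = 2):
        super().__init__()
        chs = [64, 128, 256, 512, 1024]           # channels double after each downsampling
        self.downs = nn.ModuleList()
        prev = in_ch
        for c in chs:                             # 4 encoder blocks + bottleneck
            self.downs.append(double_conv(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.ups, self.up_convs = nn.ModuleList(), nn.ModuleList()
        for c in reversed(chs[:-1]):              # 4 decoder blocks, channels halved each time
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.up_convs.append(double_conv(prev, c))
            prev = c
        self.out = nn.Conv2d(prev, num_classes, 1)  # per-pixel background/caries logits

    def forward(self, x: torch.Tensor):
        skips = []
        for i, block in enumerate(self.downs):
            x = block(x)
            if i < len(self.downs) - 1:           # no pooling after the bottleneck
                skips.append(x)
                x = self.pool(x)
        for up, conv, skip in zip(self.ups, self.up_convs, reversed(skips)):
            x = up(x)                             # deconvolution doubles the spatial size
            skip = center_crop(skip, x.size(2), x.size(3))
            x = conv(torch.cat([skip, x], dim=1))  # merge with the symmetric encoder map
        return x, self.out(x)                     # 64-channel features and segmentation logits

# a 572x572 input yields 388x388 segmentation logits, matching the figure
```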
Figure 3 illustrates the attention mechanism module. The input feature map is subjected to max pooling and average pooling respectively; a convolutional layer then applies a 7 × 7 convolution and outputs the corresponding feature response, from which a weight map Ms is obtained through a sigmoid activation function. The weight map Ms is then multiplied with the input feature map in a weighted-average operation to obtain the final feature map. The weight map output by the module reflects the importance of different positions of the input feature map: positions with large weights are more important, and the network puts more attention on them. This is the application of the attention mechanism in the system.
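A sketch of this module, assuming the CBAM-style spatial attention the figure suggests (channel-wise max and average pooling, a 7 × 7 convolution, a sigmoid weight map Ms, then reweighting); the padding choice is an assumption.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Compute a spatial weight map Ms and reweight the input feature map with it."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # 2 input channels: the stacked max-pooled and average-pooled maps
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        max_map, _ = torch.max(x, dim=1, keepdim=True)   # channel-wise max pooling
        avg_map = torch.mean(x, dim=1, keepdim=True)     # channel-wise average pooling
        ms = self.sigmoid(self.conv(torch.cat([max_map, avg_map], dim=1)))  # weight map Ms
        return x * ms                    # positions with large weights get more attention
```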
The classification network module is presented in fig. 4. Balancing system speed against accuracy, the invention uses a 41-layer residual network. Stage 1, stage 2, …, stage 5 are the 5 stages of the ResNet; 2X and 4X denote module repetition counts. In Conv(64, 256, k=(1,1), s=1, p=0), k is the convolution kernel size, s the stride, p the padding, 256 the number of convolution kernels, and 64 the number of channels output by the previous convolutional layer; the remaining convolutional layers are analogous. The main network is a stack of residual learning modules: each module has a 1 × 1 convolution kernel at its head and tail and a 3 × 3 convolution kernel in the middle. In the first learning module of each stage, besides the series of 3 convolutional layers, the input and output are connected through a convolutional bypass that increases the channel count of the input feature map so that it can be fused with the output feature map; in subsequent residual learning modules the input and output channel counts already match, so the addition can be performed directly without a dimension-raising convolutional layer. This structure effectively reduces feature loss and improves the training effect of the model.
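A sketch of the bottleneck residual module described above (1 × 1, then 3 × 3, then 1 × 1 convolutions, with a 1 × 1 convolutional bypass in the first module of a stage); batch normalization and the exact channel counts are assumptions following common ResNet practice.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Residual module: 1x1 convolutions at head and tail, a 3x3 convolution in the middle."""
    def __init__(self, in_ch: int, mid_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1), nn.BatchNorm2d(out_ch),
        )
        # convolutional bypass only where channels or resolution change,
        # i.e. in the first learning module of each stage
        self.bypass = None
        if in_ch != out_ch or stride != 1:
            self.bypass = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride), nn.BatchNorm2d(out_ch))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x if self.bypass is None else self.bypass(x)
        return self.relu(self.body(x) + identity)        # direct addition when shapes match

# e.g. a first-of-stage module whose bypass matches the text's Conv(64, 256, k=(1,1)):
# block = Bottleneck(in_ch=64, mid_ch=64, out_ch=256, stride=1)
```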

Claims (5)

1. A panoramic film dental caries depth identification method based on deep learning and an attention mechanism, characterized by comprising the following steps:
(1) data preprocessing: labeling and standardizing the oral panoramic films in a database to prepare a training set;
(2) caries region segmentation: using a U-Net object segmentation network to segment the normal teeth and carious parts in the oral panoramic image, separating the oral background region from the target region, and extracting corresponding image patches;
(3) attention computation: using an attention extraction module to compute the attention feature map captured by the neural network for the caries position in the corresponding U-Net segmentation network;
(4) caries degree grading: combining the extracted image patches output by the object segmentation network with the feature map output by the attention extraction module, and grading the degree of the carious lesion with a ResNet classification network.
2. The method according to claim 1, characterized in that step (1) specifically comprises:
(a) collecting oral panoramic films with clear images from a radiological examination database and uniformly adjusting their brightness and contrast;
(b) manually annotating the carious parts of the decayed teeth in the oral panoramic films.
3. The method according to claim 1, characterized in that step (2) specifically comprises: digitally sampling the collected oral panoramic image, and then separating and extracting the background and target regions through the segmentation network module.
4. The method according to claim 1, characterized in that step (3) specifically comprises:
(a) taking the feature map generated by the segmentation network as input and obtaining, through the attention extraction module, a weighted feature map, i.e. the feature map representing attention in deep learning;
(b) obtaining a loss value through the loss function from the feature map and the label of the corresponding original image, the loss value being used as part of the total loss.
5. The method according to claim 1, characterized in that step (4) specifically comprises:
(a) extracting the features of the intermediate layers of the segmentation network module and the attention extraction module as the input data of the module, and obtaining the attention weights of the input data through a convolutional neural network;
(b) applying different attention to the two inputs respectively to obtain attention features;
(c) fusing the two attention feature maps into a new feature map, and classifying the newly generated feature map to obtain the final prediction result.
CN202110360226.2A 2021-04-02 2021-04-02 Panoramic film dental caries depth identification method based on deep learning and attention mechanism Active CN113160151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110360226.2A CN113160151B (en) 2021-04-02 2021-04-02 Panoramic film dental caries depth identification method based on deep learning and attention mechanism


Publications (2)

Publication Number Publication Date
CN113160151A (en) 2021-07-23
CN113160151B (en) 2023-07-25

Family

ID=76886305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110360226.2A Active CN113160151B (en) 2021-04-02 2021-04-02 Panoramic film dental caries depth identification method based on deep learning and attention mechanism

Country Status (1)

Country Link
CN (1) CN113160151B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN111754520A (en) * 2020-06-09 2020-10-09 江苏师范大学 Deep learning-based cerebral hematoma segmentation method and system
CN111784639A (en) * 2020-06-05 2020-10-16 浙江大学 Oral panoramic film dental caries depth identification method based on deep learning
US20200349697A1 (en) * 2019-05-02 2020-11-05 Curacloud Corporation Method and system for intracerebral hemorrhage detection and segmentation based on a multi-task fully convolutional network
CN111951288A (en) * 2020-07-15 2020-11-17 南华大学 Skin cancer lesion segmentation method based on deep learning
CN112151167A (en) * 2020-05-14 2020-12-29 余红兵 Intelligent screening method for six-age dental caries of children based on deep learning
CN112446239A (en) * 2019-08-29 2021-03-05 株式会社理光 Neural network training and target detection method, device and storage medium
CN112541503A (en) * 2020-12-11 2021-03-23 南京邮电大学 Real-time semantic segmentation method based on context attention mechanism and information fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HONGBING YU et al.: "A New Technique for Diagnosis of Dental Caries on the Children's First Permanent Molar", IEEE Access *
XIE Fei et al.: "Oral leukoplakia segmentation based on Mask R-CNN with a spatial attention mechanism", Journal of Northwest University (Natural Science Edition) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113679500A (en) * 2021-07-29 2021-11-23 广州华视光学科技有限公司 AI algorithm-based caries and dental plaque detection and distribution method
CN113679500B (en) * 2021-07-29 2023-04-18 广州华视光学科技有限公司 AI algorithm-based caries and dental plaque detection and distribution method
WO2023246463A1 (en) * 2022-06-24 2023-12-28 杭州朝厚信息科技有限公司 Oral panoramic radiograph segmentation method

Also Published As

Publication number Publication date
CN113160151B (en) 2023-07-25

Similar Documents

Publication Title
CN111563887B (en) Intelligent analysis method and device for oral cavity image
CN113011485B (en) Multi-mode multi-disease long-tail distribution ophthalmic disease classification model training method and device
CN111968120B (en) Tooth CT image segmentation method for 3D multi-feature fusion
Chen et al. MSLPNet: multi-scale location perception network for dental panoramic X-ray image segmentation
CN113160151B (en) Panoramic film dental caries depth identification method based on deep learning and attention mechanism
Kong et al. Automated maxillofacial segmentation in panoramic dental x-ray images using an efficient encoder-decoder network
CN112151167A (en) Intelligent screening method for six-age dental caries of children based on deep learning
CN111784639A (en) Oral panoramic film dental caries depth identification method based on deep learning
Chen et al. Missing teeth and restoration detection using dental panoramic radiography based on transfer learning with CNNs
CN111798445B (en) Tooth image caries identification method and system based on convolutional neural network
CN113221945B (en) Dental caries identification method based on oral panoramic film and dual attention module
CN113643297B (en) Computer-aided age analysis method based on neural network
CN113379697B (en) Color image caries identification method based on deep learning
CN113516639B (en) Training method and device for oral cavity abnormality detection model based on panoramic X-ray film
Javid et al. Marking early lesions in labial colored dental images using a transfer learning approach
CN110598724B (en) Cell low-resolution image fusion method based on convolutional neural network
CN115880266B (en) Intestinal polyp detection system and method based on deep learning
Imak et al. Dental material detection based on faster regional convolutional neural networks and shape features
CN115439409A (en) Tooth type identification method and device
CN114004970A (en) Tooth area detection method, device, equipment and storage medium
Ghafoor et al. Multiclass Segmentation using Teeth Attention Modules for Dental X-ray Images
Hossam et al. Automated Dental Diagnosis using Deep Learning
CN117152507B (en) Tooth health state detection method, device, equipment and storage medium
Li et al. Dental detection and classification of yolov3-spp based on convolutional block attention module
CN116188879B (en) Image classification and image classification model training method, device, equipment and medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant