CN111784639A - Oral panoramic film dental caries depth identification method based on deep learning - Google Patents

Oral panoramic film dental caries depth identification method based on deep learning

Info

Publication number
CN111784639A
CN111784639A (application CN202010506003.8A)
Authority
CN
China
Prior art keywords
neural network
convolutional neural
oral
caries
decayed tooth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010506003.8A
Other languages
Chinese (zh)
Inventor
朱海华
朱赴东
梁蒙蒙
黄超强
连璐雅
陈庆光
黄俊超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010506003.8A priority Critical patent/CN111784639A/en
Publication of CN111784639A publication Critical patent/CN111784639A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/45 For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B5/4538 Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B5/4542 Evaluating the mouth, e.g. the jaw
    • A61B5/4547 Evaluating teeth
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30036 Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Physiology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Fuzzy Systems (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Rheumatology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a deep learning-based method for identifying caries depth in oral panoramic films, comprising five parts: data preprocessing, image segmentation, model construction, feature extraction, and classification and identification. The invention uses a convolutional neural network to identify caries, automatically locating carious teeth in the oral panoramic film and exporting the automatically identified caries depth results. It addresses the visual fatigue that medical staff easily develop when reviewing image data for long periods, assists in improving diagnosis and treatment capability and work efficiency, reduces demand on medical resources, and improves medical efficiency.

Description

Oral panoramic film dental caries depth identification method based on deep learning
Technical Field
The invention belongs to the field of artificial-intelligence medical image processing, and particularly relates to a deep learning-based method for identifying caries depth in oral panoramic films.
Background
Dental caries is a progressive lesion of hard dental tissue caused by the combined action of multiple factors in the oral cavity; it is a major common oral disease and one of the most prevalent diseases in humans. Imaging-based identification of caries relies mainly on the oral panoramic film. With the public's growing demand for oral health, more and more people visit hospitals or clinics for oral consultation or treatment. However, because patients are numerous and diagnosis and treatment time is limited, doctors must sometimes focus on the symptomatic teeth and may overlook other potentially carious teeth, and medical staff cannot judge the degree of caries during an oral examination.
Clinically, caries changes the color, shape, and texture of the tooth; the changes in color and shape are the result of the change in texture. As the disease progresses, the lesion advances from the enamel into the dentin, and the tissue is continuously destroyed and disintegrates, gradually forming a cavity. Clinically, caries is divided into three stages according to its depth: shallow, middle, and deep caries (as shown in Figure 2), described as follows:
1. Shallow caries (superficial caries)
Also known as enamel caries; the lesion is confined to the enamel. In the early stage, demineralization produces chalky white patches on smooth surfaces, which later become yellow-brown from staining; in pits and fissures the lesion appears as ink-like infiltration. There is generally no obvious cavity, only a rough feeling on probing; later a shallow cavity confined to the enamel may appear. There are no subjective symptoms and no response to probing. Shallow caries is marked as box A in Figure 3.
2. Middle caries
The lesion reaches the superficial layer of the dentin. Clinical examination shows an obvious cavity; probing may be painful, and there may be a pain response to external stimuli (such as cold, heat, sweet, sour, or food impaction), with no spontaneous pain once the stimulus is removed. Box B in Figure 4 marks middle caries.
3. Deep caries
The lesion reaches the deep layer of the dentin, usually presenting as a large, deep cavity, or as a small opening with extensive destruction below. The response to external stimuli is stronger than in middle caries, but the pain still subsides immediately once the stimulus is removed, and there is no spontaneous pain. Deep caries is marked as box C in Figure 4.
Deep learning builds neural networks that simulate the way the human brain analyzes and learns: data are interpreted by imitating the brain's information-transfer mechanism, and low-level features are combined into more abstract high-level representations to discover the distributional characteristics of the data. In the medical field, deep learning has already achieved notable results, for example in the diagnosis of lung tumors and breast cancer. Its development offers a new way to assist disease diagnosis, which can reduce demand on medical resources and improve medical efficiency.
The identification of caries mostly depends on the subjective experience of the clinician, and different medical staff differ in their sensitivity to lesions, especially in defining different caries depths. Viewing image data for long periods easily causes visual fatigue, which to some extent reduces the accuracy of diagnosis. Various systems have been developed, such as image-reading assistant robots and tumor-diagnosis robots, but caries is a special disease: its diagnostic concepts and methods differ from those of common diseases, the oral panoramic film contains many teeth, and the region expressing caries is small. Current general-purpose machine learning methods therefore cannot meet the requirement of identifying caries depth, and no deep learning identification method dedicated to caries has been found so far.
Therefore, developing an automatic caries depth identification method based on deep learning is a problem that urgently needs to be solved in this field.
Disclosure of Invention
To address the above problems, the invention provides a deep learning-based method for identifying caries depth in oral panoramic films, which supports the management and application of clinical data, facilitates information retrieval and push, and greatly improves identification performance.
To achieve this purpose, the invention provides the following technical scheme:
A deep learning-based oral panoramic film caries depth identification method comprises the following steps:
Step one: data preprocessing: numbering the images in the database in a fixed order and carrying out standardized data preprocessing;
Step two: image segmentation: segmenting the teeth in the oral panoramic image with a threshold segmentation method, separating the oral background region from the target region, and extracting ROI image blocks;
Step three: model construction: constructing a convolutional neural network with a transfer learning technique;
Step four: feature extraction: inputting the training data set into the convolutional neural network, extracting the image features of the ROI (region of interest), locating the caries, and automatically identifying shallow, middle, and deep caries;
Step five: classification and identification: inputting the test data into the trained convolutional neural network to obtain the caries depth identification result.
In the present invention, the first step specifically includes: collecting clear oral panoramic films in DICOM format from the hospital's radiology examination database, removing image noise with the wavelet transform to increase the signal-to-noise ratio, numbering the denoised images uniformly, and manually marking the carious regions and caries depths;
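For illustration only, the following Python sketch shows one way the denoising and format conversion of this step could be implemented; the use of pydicom, PyWavelets, and OpenCV, the db4 wavelet, the universal threshold, and the file paths are assumptions not fixed by the invention (the preferred embodiment converts formats with MicroDicom).

```python
import numpy as np
import pydicom   # DICOM reading
import pywt      # wavelet transform
import cv2       # BMP export

def denoise_and_convert(dicom_path: str, bmp_path: str) -> None:
    """Wavelet-denoise one DICOM panoramic film and save it as an 8-bit BMP."""
    img = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)

    # 2-level 2-D wavelet decomposition; soft-threshold the detail coefficients
    coeffs = pywt.wavedec2(img, wavelet="db4", level=2)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate from finest details
    thr = sigma * np.sqrt(2 * np.log(img.size))          # universal threshold
    denoised_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    denoised = pywt.waverec2(denoised_coeffs, wavelet="db4")

    # rescale to 8 bits and write the BMP file
    denoised = cv2.normalize(denoised, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite(bmp_path, denoised)
```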
in the present invention, the second step specifically includes: the teeth in the panoramic picture are automatically segmented by adopting an OTSU method, the ROI image of the decayed tooth is extracted and normalized to be the same size, a training set and a test set are divided according to the proportion of 7:3, and the normalization formula is as follows:
x' = (x - x_min) / (x_max - x_min)
where x_min and x_max denote the minimum and maximum values, respectively;
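A minimal sketch of the normalization formula above and the 7:3 split, assuming scikit-learn's train_test_split and illustrative label codes; the ROI blocks are assumed to have been resized to a common size beforehand.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """x' = (x - x_min) / (x_max - x_min): map an ROI block into [0, 1]."""
    x_min, x_max = float(x.min()), float(x.max())
    return (x - x_min) / (x_max - x_min + 1e-8)   # small epsilon guards against flat blocks

# rois:   list of caries ROI blocks already resized to one common size
# labels: 0 = shallow, 1 = middle, 2 = deep caries
# train_x, test_x, train_y, test_y = train_test_split(rois, labels, test_size=0.3, stratify=labels)
```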
in the present invention, the third step specifically includes: the method comprises the steps of training a convolutional neural network by adopting 500 cases of oral panoramic picture data in advance, reducing a loss function value and updating network weight, obtaining a two-classification model of the decayed tooth through continuous iterative learning, and transferring network parameters in the model to a new convolutional neural network to preliminarily form a decayed tooth depth recognition model;
in the present invention, the fourth step specifically includes: inputting the marked training set into a new convolution neural network, segmenting decayed teeth, extracting ROI image characteristics in a deep layer, and identifying the decayed tooth depth through a softmax classifier;
in the present invention, the fifth step specifically includes: inputting the divided test data set into a network model, automatically identifying the depth of the decayed tooth by using the convolutional neural network trained in the step four, and giving specific identification precision.
Preferably, in the above deep learning-based oral panoramic film caries identification method, the image preprocessing in step one uses MicroDicom to uniformly convert the DICOM images into BMP format.
Preferably, in the above deep learning-based oral panoramic film caries identification method, the specific method used in step two to segment the teeth in the oral cavity is OTSU.
Preferably, in the above deep learning-based oral panoramic film caries identification method, the transfer learning technique in step three initializes the network weights so that the network is adapted to the medical field.
Preferably, in the above deep learning-based oral panoramic film caries identification method, the feature extraction in step four applies convolution kernels to perform convolution and down-sampling operations, according to the following formulas:
H_i = f(H_{i-1} * W_i + b_i)
where x denotes the original data, H_i the convolution result of layer i, W_i the convolution weights of layer i, and f(x) the excitation function.
H_i = f(β_i · down(H_{i-1}) + b_i)
where down(·) denotes the down-sampling operation, and β and b are both network biases. Convolution and down-sampling are combined to extract the feature vector of the ROI target region and form a high-dimensional feature representation.
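A compact PyTorch sketch of the two formulas above: a convolution H_i = f(H_{i-1} * W_i + b_i) followed by a down-sampling step; the channel counts, kernel size, and max-pooling choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvDownBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # W_i and b_i
        self.act = nn.ReLU()                                            # excitation function f
        self.pool = nn.MaxPool2d(2)                                     # down(.) operation

    def forward(self, x):
        x = self.act(self.conv(x))   # H_i = f(H_{i-1} * W_i + b_i)
        return self.pool(x)          # down-sampled feature map

# ConvDownBlock()(torch.randn(1, 1, 64, 64)).shape  ->  torch.Size([1, 16, 32, 32])
```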
Preferably, in the above deep learning-based oral panoramic film caries identification method, the model classifier in the fifth step uses Softmax, inputs test data into the trained model, obtains the location of caries, and automatically identifies the caries depth (shallow caries, middle caries and deep caries).
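A minimal sketch of the Softmax classification stage described here; the logit values are made up for illustration.

```python
import torch

logits = torch.tensor([[2.1, 0.3, -1.2]])    # made-up network output for one ROI
probs = torch.softmax(logits, dim=1)         # roughly [0.83, 0.14, 0.03]
depth = ["shallow caries", "middle caries", "deep caries"][int(probs.argmax(dim=1))]
print(depth)                                 # -> shallow caries
```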
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention uses a convolutional neural network to identify caries and can export the results of automatic caries depth identification.
(2) The invention assists clinical medical staff in improving diagnosis and treatment capability and work efficiency, addresses the visual fatigue that medical staff easily develop when viewing image data for long periods, reduces the rates of missed diagnosis and misdiagnosis, and in particular reduces demand on medical resources and improves medical efficiency.
Drawings
FIG. 1 is a flow chart of the method for identifying the caries depth of the oral panoramic film based on deep learning.
FIG. 2 is a dental caries clinical classification chart.
FIG. 3 shows a caries ROI region in the oral cavity (shallow caries), where box A marks shallow caries.
FIG. 4 shows caries ROI regions in the oral cavity (middle and deep caries), where box B marks middle caries and box C marks deep caries.
Detailed Description
The invention will be further described by way of example with reference to the accompanying drawings. It should be noted that the implementation of the invention involves several functional modules. The applicant believes that, after carefully reading the specification and accurately understanding the principles and objectives of the invention, a person skilled in the art can implement the invention by applying ordinary software programming skills in combination with the prior art. The memory, the processor, and the computer-readable storage medium are all standard hardware devices in the computer industry, and the invention places no special requirements on them.
The functional modules of the system of the invention include, but are not limited to, a data acquisition module, a data preprocessing module, an ROI target region segmentation module, a convolutional neural network construction and training module, an image feature extraction module, and a caries depth identification module; all modules mentioned in this application fall within this category and are not enumerated one by one.
Example 1
Referring to fig. 1, the method for identifying the caries depth of the oral panoramic film based on deep learning provided by the invention is realized by the following steps:
1. First, the image data are collected and preprocessed: the images are numbered uniformly in a fixed order, their brightness and contrast are adjusted with MicroDicom, they are converted into BMP format and uniformly normalized, and they are divided into a training set and a test set at a ratio of 7:3;
2. Next, the preprocessed image is segmented and the region of interest (ROI) is extracted, as shown in Figures 3 and 4: box A in Figure 3 marks a shallow caries, box B in Figure 4 a middle caries, and box C in Figure 4 a deep caries. Image segmentation uses the maximum between-class variance method (the OTSU algorithm), which divides the image into a background part and a target part according to its gray-scale characteristics; the larger the pixel variance between background and target, the more distinct the difference between them, and the OTSU algorithm achieves a good segmentation result by searching for the threshold between these two pixel classes;
3. Then, a convolutional neural network is constructed using transfer learning and the gradient descent method: transfer learning initializes the weights of the first two layers of the network, the training data are input into the network, and the network weights are fine-tuned with gradient descent so that the output is as close as possible to the preset labels;
4. Finally, the test data are input into the convolutional neural network trained in the previous step, the high-dimensional features of the caries ROI images are extracted, the caries positions are automatically identified, and the caries depth is recognized.
Example 2
(1) Data acquisition module
The data are oral panoramic films in DICOM format exported from the hospital's radiology examination database. To ensure the accuracy of model training, images that are too blurry or have low saturation are screened out in advance.
(2) Data preprocessing module
The brightness and contrast of the collected images are adjusted, the DICOM images are converted into BMP format with MicroDicom, the images are normalized to the same size, and they are numbered one by one in order. The normalization formula is:
x' = (x - x_min) / (x_max - x_min)
where x_min and x_max denote the minimum and maximum values, respectively. The caries regions and caries depths in the converted images are labeled uniformly with the Labelme professional image annotation tool, following the clinical presentation of each caries depth (Figure 2), and the data are divided into a training set and a test set at a ratio of 7:3.
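A sketch of reading one Labelme annotation produced by the labeling step above and turning it into (bounding box, depth label) pairs; the label strings in DEPTH_TO_CLASS are an assumed vocabulary, not taken from the invention.

```python
import json
import numpy as np

DEPTH_TO_CLASS = {"shallow": 0, "middle": 1, "deep": 2}   # assumed label vocabulary

def load_caries_annotations(json_path: str):
    """Return a list of ((x0, y0, x1, y1), depth_class) pairs for one panoramic film."""
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    samples = []
    for shape in ann["shapes"]:                # one shape per marked carious region
        pts = np.asarray(shape["points"])
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        samples.append(((x0, y0, x1, y1), DEPTH_TO_CLASS[shape["label"]]))
    return samples
```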
(3) ROI target region segmentation module
To accurately identify the caries depth, the preprocessed image is segmented with the OTSU algorithm to separate the background region from the tooth region. OTSU is a threshold-based segmentation method: it counts the number of pixels at each gray level in the image, computes the probability distribution of the pixels, traverses the gray levels, computes the between-class variance of the current background and target for each candidate, determines through this objective function the corresponding threshold T, and segments the background region and the tooth region according to T.
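A sketch of the OTSU search just described: for every candidate gray level, split the histogram into background and target classes and keep the threshold T that maximizes the between-class variance (cv2.threshold with cv2.THRESH_OTSU gives the same result).

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """gray: 8-bit grayscale image; returns the threshold T maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()                       # probability distribution of the gray levels
    best_t, best_var = 0, 0.0
    for t in range(1, 256):                        # traverse every candidate threshold
        w0, w1 = prob[:t].sum(), prob[t:].sum()    # class probabilities (background / target)
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# mask = gray > otsu_threshold(gray)   # bright teeth vs. dark background
```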
(4) Convolutional neural network construction and training module
To save model training time, the invention uses the transfer learning technique to initialize the weights of the first two layers of the convolutional neural network, which extract shallow features. The training data set is input into the network model, the remaining weights in the model are trained, and the network weights and biases are updated by continuous iteration with the stochastic gradient descent method. One iteration of the stochastic gradient descent method proceeds as follows (a code sketch follows the list):
1) Set ΔW^(l) and Δb^(l) to zero matrices;
2) Compute ΔW^(l) and Δb^(l) with the GD algorithm:
ΔW^(l) := ΔW^(l) + ∇_{W^(l)} J(W, b; x, y)
Δb^(l) := Δb^(l) + ∇_{b^(l)} J(W, b; x, y)
3) Updating network parameters according to the following formula;
W^(l) := W^(l) - α[(1/m) ΔW^(l) + λ W^(l)]
b^(l) := b^(l) - α[(1/m) Δb^(l)]
where α is the learning rate, m is the number of samples in the batch, and λ is the weight-decay coefficient;
4) Continue applying the GD algorithm to reduce the loss function J(W, b).
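As referenced above, a sketch of this iteration for one mini-batch written with plain tensors (in practice torch.optim.SGD performs the same update); the learning rate alpha and the weight-decay value are illustrative, not taken from the invention.

```python
import torch

def sgd_step(model, loss_fn, batch_x, batch_y, alpha=0.01, weight_decay=1e-4):
    # 1) the gradient accumulators (Delta-W, Delta-b) start from zero
    model.zero_grad()
    # 2) backpropagation of the loss J(W, b) computes Delta-W and Delta-b
    loss = loss_fn(model(batch_x), batch_y)
    loss.backward()
    # 3) update the network parameters W and b
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= alpha * (p.grad + weight_decay * p)
    # 4) repeating this step over mini-batches keeps reducing J(W, b)
    return loss.item()
```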
(5) Image high-dimensional feature extraction module
High-dimensional features are extracted from the ROI image blocks obtained in module (3). The number of extracted features is related to the number of layers of the convolutional neural network: as the network depth increases, the feature maps become smaller in size and greater in number. The feature extraction formula is:
a^2 = σ(z^2) = σ(a^1 * W^2 + b^2)
where the superscript denotes the layer index, * denotes the convolution operation, b denotes the bias, and σ denotes the activation function; the activation function used in the invention is ReLU.
(6) Module for identifying depth of decayed tooth
The test data set is input into the model of module (4), and the caries positions and their caries depths are obtained through forward propagation of the convolutional neural network.
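A sketch of this module's forward-propagation inference: each segmented ROI passes through the trained network, the argmax of the Softmax output gives its caries depth, and the ROI's bounding box gives the caries position; the names and the device argument are illustrative.

```python
import torch

DEPTH_NAMES = ["shallow caries", "middle caries", "deep caries"]

def identify_caries(model, rois, boxes, device="cuda"):
    """rois: (N, 1, H, W) tensor of ROI blocks; boxes: list of (x0, y0, x1, y1) positions."""
    model.eval()
    with torch.no_grad():
        preds = model(rois.to(device)).argmax(dim=1).cpu().tolist()
    return [(box, DEPTH_NAMES[p]) for box, p in zip(boxes, preds)]
```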
With the method of the invention, a new panoramic image annotated with the caries positions can be obtained from an input oral panoramic film, and the caries depth can be identified by the convolutional neural network model. This helps reduce the dependence of caries detection and identification on professional medical institutions and clinicians, greatly shortening the time needed to find and identify caries and reducing the workload of medical staff.
It should be noted that the above is a preferred embodiment of the invention and does not limit its scope of protection. The disclosed examples are intended to aid further understanding of the invention, but those skilled in the art will understand that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention is not limited to the disclosed embodiments; its scope is defined by the appended claims.

Claims (5)

1. A dental caries depth identification method of an oral panoramic film based on deep learning is characterized by comprising the following steps:
(1) data preprocessing: carrying out data marking and standardized preprocessing by using the oral panoramic film in the database to prepare a training set;
(2) image segmentation: extracting ROI by adopting a threshold segmentation method, and separating a background area and a target area in the oral panoramic film;
(3) constructing a model: constructing and training a convolutional neural network by adopting a transfer learning and gradient descent method;
(4) feature extraction: extracting high-dimensional features of the ROI images with the trained convolutional neural network to automatically identify the caries depth;
(5) classification and identification: inputting the test data and evaluating the caries depth identification result.
2. The method according to claim 1, characterized in that said step (1) comprises in particular:
(1) collecting clear oral panoramic pictures from a radiology examination database, and uniformly adjusting brightness and contrast;
(2) manually marking the carious regions in the oral panoramic film.
3. The method according to claim 1, wherein the step (2) comprises in particular: and carrying out image digital sampling on the collected oral panoramic picture, and then realizing segmentation and extraction of the background and the target area by a threshold segmentation method.
4. The method according to claim 1, wherein the step (3) comprises in particular:
(1) randomly selecting 70% from the preprocessed panoramic picture data as a training set;
(2) constructing a deep convolutional neural network for learning and training; the convolutional neural network is alternately realized by a plurality of convolutional layers, a feature extraction block and a pooling layer;
(3) inputting the training set into a convolutional neural network, and training parameters in the convolutional neural network; and reducing the loss function value and updating the network weight parameter through an Adam optimization algorithm, and obtaining the learned network weight parameter after a plurality of times of training.
5. The method according to claim 1, characterized in that said step (4) comprises in particular:
(1) constructing a new convolutional neural network by adopting transfer learning;
(2) and inputting the data into a new convolutional neural network, and extracting the characteristics of the ROI region through continuous iteration of the convolutional neural network to optimize the model effect.
CN202010506003.8A 2020-06-05 2020-06-05 Oral panoramic film dental caries depth identification method based on deep learning Withdrawn CN111784639A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010506003.8A CN111784639A (en) 2020-06-05 2020-06-05 Oral panoramic film dental caries depth identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010506003.8A CN111784639A (en) 2020-06-05 2020-06-05 Oral panoramic film dental caries depth identification method based on deep learning

Publications (1)

Publication Number Publication Date
CN111784639A true CN111784639A (en) 2020-10-16

Family

ID=72754004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010506003.8A Withdrawn CN111784639A (en) 2020-06-05 2020-06-05 Oral panoramic film dental caries depth identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN111784639A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837408A (en) * 2021-02-02 2021-05-25 广州市帕菲克义齿科技有限公司 Zirconia all-ceramic data processing method and system based on big data
CN113160151A (en) * 2021-04-02 2021-07-23 浙江大学 Panoramic film dental caries depth identification method based on deep learning and attention mechanism
CN113221945A (en) * 2021-04-02 2021-08-06 浙江大学 Dental caries identification method based on oral panoramic film and dual attention module
CN113425440A (en) * 2021-06-24 2021-09-24 广州华视光学科技有限公司 System and method for detecting caries and position thereof based on artificial intelligence
CN113448246A (en) * 2021-05-25 2021-09-28 上海交通大学 Self-evolution posture adjustment method and system for oral implantation robot
CN113679500A (en) * 2021-07-29 2021-11-23 广州华视光学科技有限公司 AI algorithm-based caries and dental plaque detection and distribution method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875386A (en) * 2017-02-13 2017-06-20 苏州江奥光电科技有限公司 A kind of method for carrying out dental health detection automatically using deep learning
CN109508650A (en) * 2018-10-23 2019-03-22 浙江农林大学 A kind of wood recognition method based on transfer learning
CN109635835A (en) * 2018-11-08 2019-04-16 深圳蓝韵医学影像有限公司 A kind of breast lesion method for detecting area based on deep learning and transfer learning
CN109859203A (en) * 2019-02-20 2019-06-07 福建医科大学附属口腔医院 Defect dental imaging recognition methods based on deep learning
US20190333627A1 (en) * 2018-04-25 2019-10-31 Sota Precision Optics, Inc. Dental imaging system utilizing artificial intelligence

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875386A (en) * 2017-02-13 2017-06-20 苏州江奥光电科技有限公司 A kind of method for carrying out dental health detection automatically using deep learning
US20190333627A1 (en) * 2018-04-25 2019-10-31 Sota Precision Optics, Inc. Dental imaging system utilizing artificial intelligence
CN109508650A (en) * 2018-10-23 2019-03-22 浙江农林大学 A kind of wood recognition method based on transfer learning
CN109635835A (en) * 2018-11-08 2019-04-16 深圳蓝韵医学影像有限公司 A kind of breast lesion method for detecting area based on deep learning and transfer learning
CN109859203A (en) * 2019-02-20 2019-06-07 福建医科大学附属口腔医院 Defect dental imaging recognition methods based on deep learning

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837408A (en) * 2021-02-02 2021-05-25 广州市帕菲克义齿科技有限公司 Zirconia all-ceramic data processing method and system based on big data
CN113160151A (en) * 2021-04-02 2021-07-23 浙江大学 Panoramic film dental caries depth identification method based on deep learning and attention mechanism
CN113221945A (en) * 2021-04-02 2021-08-06 浙江大学 Dental caries identification method based on oral panoramic film and dual attention module
CN113448246A (en) * 2021-05-25 2021-09-28 上海交通大学 Self-evolution posture adjustment method and system for oral implantation robot
CN113448246B (en) * 2021-05-25 2022-10-14 上海交通大学 Self-evolution posture adjustment method and system for oral implantation robot
CN113425440A (en) * 2021-06-24 2021-09-24 广州华视光学科技有限公司 System and method for detecting caries and position thereof based on artificial intelligence
CN113679500A (en) * 2021-07-29 2021-11-23 广州华视光学科技有限公司 AI algorithm-based caries and dental plaque detection and distribution method

Similar Documents

Publication Publication Date Title
CN111784639A (en) Oral panoramic film dental caries depth identification method based on deep learning
Panetta et al. Tufts dental database: a multimodal panoramic x-ray dataset for benchmarking diagnostic systems
Silva et al. Automatic segmenting teeth in X-ray images: Trends, a novel data set, benchmarking and future perspectives
Miki et al. Classification of teeth in cone-beam CT using deep convolutional neural network
Imak et al. Dental caries detection using score-based multi-input deep convolutional neural network
Kumar et al. Descriptive analysis of dental X-ray images using various practical methods: A review
Rajee et al. Gender classification on digital dental x-ray images using deep convolutional neural network
CN111798425B (en) Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning
CN113420826B (en) Liver focus image processing system and image processing method
Kong et al. Automated maxillofacial segmentation in panoramic dental x-ray images using an efficient encoder-decoder network
CN111430025B (en) Disease diagnosis model training method based on medical image data augmentation
CN114758121A (en) CBCT alveolar bone segmentation system and method based on deep learning
CN111798445B (en) Tooth image caries identification method and system based on convolutional neural network
Sathya et al. Transfer learning based automatic human identification using dental traits-an aid to forensic odontology
Navarro et al. Detecting smooth surface dental caries in frontal teeth using image processing
Cui et al. CTooth: a fully annotated 3d dataset and benchmark for tooth volume segmentation on cone beam computed tomography images
Majanga et al. A survey of dental caries segmentation and detection techniques
Datta et al. Neutrosophic set-based caries lesion detection method to avoid perception error
CN114638852A (en) Jaw bone and soft tissue identification and reconstruction method, device and medium based on CBCT image
CN114549452A (en) New coronary pneumonia CT image analysis method based on semi-supervised deep learning
CN113160151B (en) Panoramic sheet decayed tooth depth identification method based on deep learning and attention mechanism
Javid et al. Marking early lesions in labial colored dental images using a transfer learning approach
CN111986216B (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
Deserno Fundamentals of medical image processing
Jain et al. Dental image analysis for disease diagnosis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20201016