CN112364924A - Deep learning-based oral medical image identification method - Google Patents

Deep learning-based oral medical image identification method

Info

Publication number
CN112364924A
Authority
CN
China
Prior art keywords
stage
neural network
patient
deep neural network model
Prior art date
Legal status
Pending
Application number
CN202011275509.9A
Other languages
Chinese (zh)
Inventor
李武军 (Li Wujun)
陈龙意 (Chen Longyi)
房康 (Fang Kang)
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University
Priority to CN202011275509.9A
Publication of CN112364924A
Legal status: Pending

Classifications

    • G06F18/2415 — Pattern recognition; classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate (G: Physics; G06F: Electric digital data processing)
    • G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/045 — Neural networks; architecture; combinations of networks (G06N: Computing arrangements based on specific computational models)
    • G06N3/08 — Neural networks; learning methods
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction (G06T: Image data processing or generation, in general)
    • G06T7/0012 — Image analysis; biomedical image inspection
    • G06T2200/04 — Indexing scheme involving 3D image data
    • G06T2207/20081 — Special algorithmic details; training; learning
    • G06T2207/20084 — Special algorithmic details; artificial neural networks [ANN]
    • G06T2207/20224 — Image combination; image subtraction
    • G06T2207/30036 — Biomedical image processing; dental; teeth
    • G06V2201/033 — Recognition of patterns in medical or anatomical images of skeletal patterns (G06V: Image or video recognition or understanding)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a deep learning-based oral medical image recognition method. First, a two-stage deep neural network model based on multiple consecutive medical images is trained with deep learning from patients' oral images and their corresponding labels; the trained model is then used to predict the oral images of unknown patients and identify their labels. The invention yields not only a prediction for each individual image but also a prediction for the patient as a whole.

Description

Deep learning-based oral medical image identification method
Technical Field
The invention relates to a deep learning-based oral medical image identification method; it belongs to deep learning image processing technology and is particularly suited to the recognition of multiple consecutive oral medical images.
Background
Deep learning is now widely applied to all kinds of image processing tasks. In conventional imaging, for example, many security cameras already process images with deep learning and achieve better recognition performance than methods based on hand-crafted features; in medical imaging, pulmonary nodule detection locates a patient's nodules with deep learning, enabling computer-aided diagnosis.
In conventional image processing, a multilayer convolutional neural network usually serves as the backbone for feature extraction, with task-specific structures added on top. For classification, the mainstream approach feeds the image into the network, obtains the probability of the image belonging to each class, and takes the class with the highest probability as the final prediction. Unlike a conventional image, however, a medical study often comprises many images rather than a single one, so conventional classification methods do not carry over well to medical images.
As medical images continue to be collected, three-dimensional medical image datasets grow ever larger. Ordinary classification algorithms process only a single image at a time and struggle to make good use of a patient's full three-dimensional data.
Disclosure of Invention
Purpose of the invention: current image processing methods handle only a single image and are ill-suited to a series of consecutive oral medical images. In view of this problem, the invention provides a deep learning-based oral medical image recognition method: first, a two-stage deep neural network model operating on multiple consecutive medical images is trained with deep learning from patients' oral images (e.g., CBCT images) and their corresponding labels (e.g., normal, tumor, fracture); the trained model is then used to predict unknown patients and obtain their labels.
The technical scheme is as follows: the deep learning-based oral medical image recognition method trains a deep neural network on existing patients' oral medical image data with deep learning, then predicts unknown patients' data to obtain the recognition results for their oral medical images.
The specific steps for training the deep neural network on existing patient data with deep learning are as follows:
Step 100, inputting each patient's three-dimensional oral image (e.g., a CBCT image) and the labels corresponding to that oral image into a computing platform, the three-dimensional oral image comprising a plurality of consecutive oral images (slices); the labels corresponding to the oral image indicate, for each oral image, whether a doctor marked it normal or diseased; if an oral image is marked diseased, the labels further include the disease category and the slice range containing the lesion.
Step 101, preprocessing the patients' oral image data, removing abnormal patient data, and normalizing the oral image data; normalization subtracts the mean from the oral image data and divides the result by the variance.
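For illustration, the normalization of step 101 can be sketched as follows (Python/NumPy; the function name and the epsilon guard are assumptions, not from the patent — and note that while the text says "divided by the variance", conventional z-score normalization divides by the standard deviation, as flagged in the comment):

    import numpy as np

    def normalize_volume(volume: np.ndarray) -> np.ndarray:
        """Z-score normalize one patient's 3-D oral image of shape (slices, H, W).

        The patent says "minus the mean and divided by the variance"; the usual
        convention divides by the standard deviation, which is used here.
        """
        mean = volume.mean()
        std = volume.std()
        return (volume - mean) / (std + 1e-8)  # epsilon guards against a constant volume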
Step 102, according to the labeled slice range containing the lesion among the labels corresponding to the oral image, marking all slices of the patient's three-dimensional oral image within that range as diseased and all slices outside it as normal, thereby obtaining the first-stage data set; randomly splitting the first-stage data set into a first-stage training set and a first-stage validation set.
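A minimal sketch of the slice labeling of step 102, assuming the doctor's lesion annotation is given as an inclusive (first, last) slice-index range; the names and data layout are illustrative only:

    from typing import Optional, Tuple
    import numpy as np

    def label_slices(volume: np.ndarray, lesion_range: Optional[Tuple[int, int]]) -> np.ndarray:
        """Return one binary label per slice: 1 = diseased, 0 = normal.

        lesion_range is the annotated (first, last) slice index of the lesion,
        or None for a patient whose volume contains no lesion.
        """
        labels = np.zeros(volume.shape[0], dtype=np.int64)
        if lesion_range is not None:
            first, last = lesion_range
            labels[first:last + 1] = 1  # every slice inside the range is diseased
        return labels

The pooled (slice, label) pairs would then be split at random into the first-stage training and validation sets.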
Step 103, initializing the hyper-parameters of the deep neural network model, such as the positive-sample resampling ratio, the regularization coefficient, and the learning rate; here the positive-sample resampling ratio is 10, the regularization coefficient is 0.005, and the learning rate is 0.001.
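One plausible reading of the positive-sample resampling ratio is that diseased slices are drawn ten times as often as normal ones when minibatches are formed; a sketch using PyTorch's WeightedRandomSampler, with the constants taken from step 103 and everything else assumed:

    import torch
    from torch.utils.data import WeightedRandomSampler

    POS_RESAMPLE_RATIO = 10   # positive-sample resampling ratio (step 103)
    WEIGHT_DECAY = 0.005      # regularization coefficient (step 103)
    LEARNING_RATE = 0.001     # learning rate (step 103)

    def make_sampler(labels: torch.Tensor) -> WeightedRandomSampler:
        """labels: 0/1 tensor with one entry per training slice."""
        weights = torch.ones(len(labels), dtype=torch.double)
        weights[labels == 1] = POS_RESAMPLE_RATIO  # oversample diseased slices
        return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)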
Step 104, training the deep neural network model on the computing platform by gradient descent on the first-stage training set; because models from different training epochs perform differently, selecting the model with the highest accuracy at judging whether a slice is diseased according to the first-stage validation set obtained in step 102, thereby obtaining the first-stage deep neural network model.
Step 105, selecting all lesion-containing slices from the three-dimensional oral images of patients a doctor marked as diseased, and labeling each slice as tumor or fracture (the disease categories; the data used contain only these two diseases), thereby obtaining the second-stage data set; randomly splitting this data set into a second-stage training set and a second-stage validation set.
Step 106, initializing the second-stage deep neural network model with the first-stage deep neural network model.
Step 107, training the model initialized in step 106 on the computing platform by gradient descent on the second-stage training set; because models from different training epochs perform differently, selecting the model with the highest accuracy at classifying a slice as tumor or fracture (i.e., at classifying the slice's disease category) according to the second-stage validation set from step 105, thereby obtaining the second-stage deep neural network model.
The specific steps for predicting unknown patient data are as follows:
Step 200, normalizing the unknown patient's three-dimensional oral image (e.g., a CBCT image) by subtracting the mean and dividing by the variance.
Step 201, inputting all two-dimensional slices of the normalized oral image into the first-stage deep neural network model, obtaining for each slice of the unknown patient the probabilities of being diseased and normal, and taking the class with the highest probability as that slice's prediction.
Step 202, according to the per-slice predictions for the unknown patient (i.e., whether each slice is diseased or normal), computing the maximum number of consecutive diseased slices and comparing it with a threshold; when the maximum number exceeds the threshold the patient is judged diseased, otherwise normal.
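Steps 201 and 202 amount to per-slice classification followed by a longest-consecutive-run test. A sketch, assuming the first-stage model emits two logits per slice ([normal, diseased]); the threshold value is illustrative, since the patent does not fix it:

    import torch

    @torch.no_grad()
    def predict_patient_stage1(model, slices: torch.Tensor, threshold: int = 5) -> bool:
        """slices: normalized volume of shape (num_slices, 1, H, W); True = diseased."""
        probs = torch.softmax(model(slices), dim=1)           # (num_slices, 2)
        diseased = (probs.argmax(dim=1) == 1).cpu().tolist()  # per-slice prediction

        longest = run = 0
        for flag in diseased:           # longest run of consecutive diseased slices
            run = run + 1 if flag else 0
            longest = max(longest, run)
        return longest > threshold      # step 202 decision rule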
Step 203, if the patient is judged diseased in the first stage, selecting all slices the first-stage model judged diseased, inputting them into the second-stage deep neural network model, obtaining the probabilities of each diseased slice being tumor or fracture (the disease categories), and taking the class with the highest probability as that slice's classification.
Step 204, according to the proportions of tumor and fracture among all the patient's diseased slices, taking the majority class as the patient's final classification.
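Steps 203 and 204 then reduce to classifying the slices the first stage judged diseased and taking the majority class; a sketch under the same two-logit assumption ([tumor, fracture]):

    import torch

    @torch.no_grad()
    def classify_patient_stage2(model, diseased_slices: torch.Tensor) -> str:
        """diseased_slices: the slices the first-stage model judged diseased."""
        probs = torch.softmax(model(diseased_slices), dim=1)  # (n, 2)
        votes = probs.argmax(dim=1)                           # 0 = tumor, 1 = fracture
        tumor_share = (votes == 0).float().mean().item()      # proportion of tumor slices
        return "tumor" if tumor_share >= 0.5 else "fracture"  # majority class (step 204)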
The first-stage deep neural network model is trained as follows: first select a deep neural network architecture, then randomly initialize all trainable model parameters, and enter the training procedure. In each iteration of training, (1) compute the loss with a binary cross-entropy loss function and (2) compute the gradients of the trainable parameters and update the deep neural network's parameter values with a gradient-descent method (e.g., SGD or Adam); repeat (1)-(2) for the specified number of training epochs. Finally, take the model with the highest accuracy at judging whether a slice is diseased on the first-stage validation set as the first-stage deep neural network model.
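A condensed PyTorch sketch of this loop; the architecture, data loaders, and epoch count are placeholders, and the two-class cross-entropy used here is, for two classes, equivalent to the binary cross-entropy named in the text:

    import copy
    import torch
    import torch.nn as nn

    def train_stage1(model, train_loader, val_loader, epochs: int = 50):
        """Train the first-stage slice classifier; keep the best validation checkpoint."""
        criterion = nn.CrossEntropyLoss()  # two-class cross entropy over {normal, diseased}
        optimizer = torch.optim.SGD(model.parameters(), lr=0.001, weight_decay=0.005)

        best_acc, best_state = 0.0, copy.deepcopy(model.state_dict())
        for _ in range(epochs):
            model.train()
            for x, y in train_loader:
                loss = criterion(model(x), y)  # (1) compute the loss
                optimizer.zero_grad()
                loss.backward()                # (2) gradients of the trainable parameters
                optimizer.step()               #     gradient-descent update (SGD here; Adam also admissible)

            model.eval()                       # select by validation accuracy
            correct = total = 0
            with torch.no_grad():
                for x, y in val_loader:
                    correct += (model(x).argmax(dim=1) == y).sum().item()
                    total += y.numel()
            if correct / total > best_acc:
                best_acc = correct / total
                best_state = copy.deepcopy(model.state_dict())

        model.load_state_dict(best_state)      # highest-accuracy model on the validation set
        return model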
The second-stage model is trained as follows: first initialize from the first-stage deep neural network model, then enter the training procedure. Compute the loss with a two-class cross-entropy loss function, compute the gradients of the trainable parameters, and update the neural network's parameter values with a gradient-descent method (e.g., SGD or Adam); repeat this procedure for the specified number of training epochs. Finally, take the model with the highest accuracy at classifying a slice as tumor or fracture on the second-stage validation set as the second-stage deep neural network model.
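The second stage differs only in its labels (tumor vs. fracture) and its initialization; a sketch, assuming both stages share one architecture and reusing the train_stage1 sketch above:

    def train_stage2(stage1_model, make_model, train_loader, val_loader):
        """Warm-start the second-stage model from the first-stage weights."""
        model = make_model()                              # same architecture as stage 1
        model.load_state_dict(stage1_model.state_dict())  # initialize from stage 1
        return train_stage1(model, train_loader, val_loader)  # same loop, tumor/fracture labels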
The method of the invention directly produces the final prediction from a patient's three-dimensional oral medical data.
Advantageous effects: compared with the prior art, the deep learning-based oral medical image identification method of the invention yields not only a prediction for each individual image but also a prediction for the patient as a whole.
Drawings
FIG. 1 is a flowchart of the first-stage training of the oral medical image classification algorithm implemented according to the invention;
FIG. 2 is a flowchart of the second-stage training of the oral medical image classification algorithm implemented according to the invention;
FIG. 3 is a flowchart of predicting unknown patients with the oral medical image classification algorithm implemented according to the invention;
FIG. 4 is a flowchart of the first-stage deep neural network training and optimization implemented in the invention;
FIG. 5 is a flowchart of the second-stage deep neural network training and optimization implemented in the invention.
Detailed Description
The invention is further illustrated by the following examples, which are purely exemplary and do not limit its scope; various equivalent modifications of the invention that occur to those skilled in the art upon reading the present disclosure likewise fall within the scope of the appended claims.
In the deep learning-based oral medical image recognition method, the training workflow comprises first-stage training (FIG. 1) and second-stage training (FIG. 2).

The first-stage training proceeds as follows. The collected three-dimensional oral images of patients (e.g., CBCT images) and their labels (diseased or normal; for diseased patients also the disease category, i.e., tumor or fracture, and the slice range containing the lesion) are input into the computing platform (step 10). The patient data are then preprocessed: abnormal patient data are removed and the remainder normalized, i.e., the mean is subtracted and the result divided by the variance (step 11). The doctor-labeled range of lesion-containing slices is then mapped onto the individual slices of each patient's oral image (step 12): slices outside the range are labeled "0", denoting non-diseased slices (step 13a), and slices within the range are labeled "1", denoting diseased slices (step 13b). These steps yield the first-stage training data set (step 14), which is randomly split into a first-stage training set and a first-stage validation set. All model parameters and training hyper-parameters are initialized, with a resampling ratio of 10, a regularization coefficient of 0.005, and a learning rate of 0.001 (step 15). The model is then trained by gradient descent (step 16); the model with the highest accuracy at judging whether a slice is diseased is selected on the pre-split first-stage validation set (step 17), and the resulting first-stage model is saved to the storage system (step 18).

The second-stage training proceeds as follows. The three-dimensional oral images (e.g., CBCT images) of diseased patients, together with the specific disease categories labeled by doctors (the data used contain only the two diseases tumor and fracture), are input into the computing platform (step 20). The data are preprocessed and normalized by subtracting the mean and dividing by the variance (step 21), and each diseased slice is labeled with its specific category per the doctor's annotation (step 22): tumor slices as "0" (step 23a) and fracture slices as "1" (step 23b). These steps yield the second-stage training data set (step 24). The second-stage deep neural network model is then initialized with the first-stage model (step 25) and trained by gradient descent (step 26); the model with the highest accuracy at classifying the slice disease category is selected on the pre-split second-stage validation set (step 27), and the resulting second-stage model is saved to the storage system (step 28).
The workflow for predicting an unknown patient with the trained models is shown in FIG. 3. First the trained models are loaded (step 30) and the unknown patient's three-dimensional oral image data normalized (step 31). All slices are fed into the first-stage model to obtain each slice's probabilities of being diseased and normal, with the highest-probability class taken as the slice's prediction (step 32). From the per-slice predictions, the maximum number of consecutive diseased slices is computed and compared with a threshold (step 33); if it exceeds the threshold the patient is judged diseased (step 34a), otherwise normal (step 34b). All diseased slices of a patient judged diseased in the first stage are fed into the second-stage deep neural network model (step 35) to obtain each diseased slice's probabilities of tumor and fracture, with the highest-probability class taken as the slice's classification (step 36). Finally, according to the proportions of tumor and fracture among all the patient's diseased slices, the majority class is taken as the patient's final classification (step 37).
The workflow for training and optimizing the first-stage deep neural network model is shown in FIG. 4. The computing platform is initialized (step 110), the trainable model parameters are randomly initialized (step 111), and training begins (step 112): the overall model loss is computed with a two-class cross-entropy loss function (step 114), the gradients of the trainable parameters are computed (step 115), and the parameter values are updated with a gradient-descent method such as SGD or Adam (step 116). Loss computation and parameter updates are repeated (step 113) until the number of training epochs is reached (step 117), after which the model with the highest accuracy at judging whether a slice is diseased is selected on the validation set, output, and saved (step 118).
The workflow for training and optimizing the second-stage deep neural network model is shown in FIG. 5. The computing platform is initialized (step 210), the model parameters are initialized from the first-stage model (step 211), and training begins (step 212): the overall model loss is computed with a two-class cross-entropy loss function (step 214), the gradients of the trainable parameters are computed (step 215), and the parameter values are updated with a gradient-descent method such as SGD or Adam (step 216). Loss computation and parameter updates are repeated (step 213) until the number of training epochs is reached (step 217), after which the model with the highest accuracy at classifying the slice's specific disease category is selected on the validation set, output, and saved (step 218).

Claims (8)

1. A deep learning-based oral medical image recognition method, characterized in that: a deep neural network is trained with deep learning on existing patients' three-dimensional oral medical image data to obtain a first-stage deep neural network model and a second-stage deep neural network model, and the three-dimensional oral medical image data of an unknown patient are then predicted with the first-stage and second-stage deep neural network models to obtain the recognition result for the patient's oral medical images.
2. The deep learning-based oral medical image recognition method according to claim 1, wherein the specific steps of training the deep neural network on the existing patient data with deep learning are as follows:
step 100, inputting each patient's three-dimensional oral image and the labels corresponding to the oral image into a computing platform;
step 101, preprocessing the patients' oral image data, removing abnormal patient data, and normalizing the oral image data;
step 102, according to the labeled slice range containing the lesion among the labels corresponding to the oral image, marking all slices of the patient's three-dimensional oral image within that range as diseased and all slices outside it as normal, thereby obtaining a first-stage data set; randomly splitting the first-stage data set into a first-stage training set and a first-stage validation set;
step 103, initializing the hyper-parameters of the deep neural network model;
step 104, training the deep neural network model on the computing platform by gradient descent on the first-stage training set; because models from different training epochs perform differently, selecting the model with the highest accuracy at judging whether a slice is diseased according to the first-stage validation set obtained in step 102, thereby obtaining the first-stage deep neural network model;
step 105, selecting, from the three-dimensional oral images of patients marked as diseased by a doctor, all slices containing a lesion and labeling each slice with its disease category, thereby obtaining a second-stage data set; randomly splitting this data set into a second-stage training set and a second-stage validation set;
step 106, initializing the deep neural network model with the first-stage deep neural network model;
step 107, training the model initialized in step 106 on the computing platform by gradient descent on the second-stage training set; because models from different training epochs perform differently, selecting the model with the highest accuracy at classifying the slice disease category according to the second-stage validation set from step 105, thereby obtaining the second-stage deep neural network model.
3. The deep learning-based oral medical image recognition method according to claim 2, wherein the labels corresponding to the oral images indicate, for each oral image, whether a doctor marked it normal or diseased; if an oral image is marked diseased, the labels further include the disease category and the slice range containing the lesion.
4. The deep learning-based oral medical image recognition method according to claim 1, wherein the specific steps for predicting unknown patient data are as follows:
step 200, normalizing the obtained three-dimensional oral image of the unknown patient;
step 201, inputting all two-dimensional slices of the normalized oral image into the first-stage deep neural network model, obtaining for each slice of the unknown patient the probabilities of being diseased and normal, and taking the class with the highest probability as each slice's prediction;
step 202, according to the per-slice predictions for the unknown patient, computing the maximum number of consecutive diseased slices and comparing it with a threshold; when the maximum number exceeds the threshold, judging the patient diseased, otherwise judging the patient normal;
step 203, if the patient is judged diseased in the first stage, selecting all slices the first-stage model judged diseased, inputting them into the second-stage deep neural network model, obtaining the probabilities of the diseased slices belonging to each disease category, and taking the class with the highest probability as each diseased slice's classification;
step 204, according to the proportions of the disease categories among all the patient's diseased slices, taking the majority class as the patient's final classification.
5. The deep learning-based oral medical image recognition method according to claim 1, wherein the first-stage deep neural network model is trained as follows: first selecting a deep neural network architecture, then randomly initializing all trainable model parameters, and entering the training procedure; in the training procedure, (1) computing the loss with a binary cross-entropy loss function and (2) computing the gradients of the trainable parameters and updating the deep neural network's parameter values by gradient descent; repeating (1)-(2) for the specified number of training epochs; and finally taking the model with the highest accuracy at judging whether a slice is diseased on the first-stage validation set as the first-stage deep neural network model.
6. The deep learning-based oral medical image recognition method according to claim 1, wherein the second-stage model is trained as follows: first initializing from the first-stage deep neural network model, then entering the training procedure; computing the loss with a two-class cross-entropy loss function, computing the gradients of the trainable parameters, and updating the neural network's parameter values by gradient descent; repeating this procedure for the specified number of training epochs; and finally taking the model with the highest accuracy at classifying a slice as tumor or fracture on the second-stage validation set as the second-stage deep neural network model.
7. The deep learning-based oral medical image recognition method according to claim 2, wherein in step 103 a positive-sample resampling ratio, a regularization coefficient, and a learning rate are initialized, the positive-sample resampling ratio being 10, the regularization coefficient 0.005, and the learning rate 0.001.
8. The deep learning-based oral medical image recognition method according to claim 2, wherein the three-dimensional oral image comprises a plurality of consecutive oral images.
CN202011275509.9A — priority date 2020-11-16 — filing date 2020-11-16 — Deep learning-based oral medical image identification method — status: Pending — published as CN112364924A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011275509.9A — 2020-11-16 — 2020-11-16 — Deep learning-based oral medical image identification method (CN112364924A)

Publications (1)

Publication Number Publication Date
CN112364924A — 2021-02-12

Family

ID=74514881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011275509.9A (CN112364924A, pending) — Deep learning-based oral medical image identification method

Country Status (1)

Country Link
CN — CN112364924A

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393314A * 2022-08-23 2022-11-25 北京雅德嘉企业管理有限公司 (Beijing Yadejia Enterprise Management Co., Ltd.) — Deep learning-based oral medical image identification method and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443268A * 2019-05-30 2019-11-12 杭州电子科技大学 (Hangzhou Dianzi University) — Deep learning-based method for classifying liver cancer CT images as benign or malignant

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈龙意 (CHEN Longyi): "基于深度学习的口腔颌面外科疾病诊断" [Deep learning-based diagnosis of oral and maxillofacial surgical diseases], 《中国优秀硕士学位论文全文数据库 医药卫生科技辑》 [China Masters' Theses Full-text Database, Medicine & Health Sciences], pages 074-32 *

Similar Documents

Publication Publication Date Title
WO2019200753A1 (en) Lesion detection method, device, computer apparatus and storage medium
CN112418329B (en) Cervical OCT image classification method and system based on multi-scale textural feature fusion
JP6906347B2 (en) Medical image classifiers, methods and programs
CN111882560B (en) Lung parenchyma CT image segmentation method based on weighted full convolution neural network
Gupta Pneumonia detection using convolutional neural networks
Boban et al. Lung diseases classification based on machine learning algorithms and performance evaluation
CN111784639A (en) Oral panoramic film dental caries depth identification method based on deep learning
CN111932541B (en) CT image processing method for predicting prognosis of new coronary pneumonia
Osadebey et al. Three-stage segmentation of lung region from CT images using deep neural networks
US20240161035A1 (en) Multi-model medical scan analysis system and methods for use therewith
KR20230029004A (en) System and method for prediction of lung cancer final stage using chest automatic segmentation image
CN115222674A (en) Detection device for intracranial aneurysm rupture risk based on multi-dimensional feature fusion
CN112364924A (en) Deep learning-based oral medical image identification method
CN114519705A (en) Ultrasonic standard data processing method and system for medical selection and identification
Sameer et al. Brain tumor segmentation and classification approach for MR images based on convolutional neural networks
CN110458186B (en) Breast ultrasound image classification method and system based on local reference similarity coding
CN113344887A (en) Interstitial pneumonia assessment method based on deep learning and fuzzy logic
Subramanian et al. Design and Evaluation of a Deep Learning Aided Approach for Kidney Stone Detection in CT scan Images
CN109637633B (en) Method for diagnosing breast cancer state based on big data and machine learning
CN111466877A (en) Oxygen reduction state prediction method based on L STM network
CN111209945A (en) AI-based medical image auxiliary identification method and system for department of imaging
CN115810016B (en) Automatic identification method, system, storage medium and terminal for CXR (Lung infection) image
CN117690584B (en) Intelligent AI-based chronic disease patient management system and method
Mehendale et al. A Graphical Approach For Brain Haemorrhage Segmentation
Batra et al. A brief overview on deep learning methods for lung cancer detection using medical imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination