CN117351293B - Combined learning periodontal disease image classification method and device - Google Patents
- Publication number
- CN117351293B CN117351293B CN202311642858.3A CN202311642858A CN117351293B CN 117351293 B CN117351293 B CN 117351293B CN 202311642858 A CN202311642858 A CN 202311642858A CN 117351293 B CN117351293 B CN 117351293B
- Authority
- CN
- China
- Prior art keywords
- periodontal disease
- data set
- model
- neural network
- learning model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V10/764—Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/098—Distributed learning, e.g. federated learning
- G06T7/0012—Biomedical image inspection
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/82—Recognition using neural networks
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/30036—Dental; Teeth
- G06V2201/03—Recognition of patterns in medical or anatomical images
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
The invention relates to the technical field of medical image classification and discloses a periodontal disease image classification method and device based on joint learning, wherein the method comprises the following steps: S1: acquiring a final data set; S2: constructing a joint learning model; S3: training the joint learning model with the final data set to obtain a final classification model for periodontal disease images; S4: inputting the image to be predicted into the final classification model to obtain the periodontal disease classification result output by the model. The invention generates the joint learning model by fusing a generative adversarial network, an artificial neural network and a convolutional neural network, so that the prediction result carries the characteristics of all three neural networks and is more accurate than the prediction result of any single neural network. Data balancing is applied to the minority-class unbalanced data, so that every category of periodontal disease image has a sufficient number of samples; the model can thus fully learn the features of each category of periodontal disease image and classify periodontal disease images more accurately.
Description
Technical Field
The invention relates to the technical field of medical image classification, in particular to a periodontal disease image classification method and device based on joint learning.
Background
Periodontal disease refers to diseases occurring in the tooth-supporting (periodontal) tissues and comprises two major types: gingival disease, confined to the gum tissue, and periodontitis, which affects the deeper periodontal tissues. Severe periodontitis has become the sixth most prevalent chronic non-communicable disease worldwide.
Traditional periodontal disease diagnosis is usually performed by a periodontal specialist. In clinical medicine, a large number of AI models are being developed for automatic prediction of disease risk and for disease diagnosis and prognosis evaluation, so applying artificial intelligence techniques to periodontal disease image classification is of great interest.
Today, most research still focuses on data processing and on periodontal disease prediction and diagnosis using a distributed learning model running on a server for periodontal disease image detection. Some researchers have used a federated prediction model based on a convolutional neural network (CNN) within a joint learning framework; with improvements to the general modelling of periodontal disease, the accuracy of periodontal disease classification can exceed 90%, outperforming single tree models, linear models and neural networks. However, that research combines only a convolutional neural network to obtain the prediction result, so the accuracy of a single prediction result can hardly meet the requirements, and the problems of data imbalance and data preprocessing remain unsolved. Meanwhile, in the prior art, when unbalanced data are balanced, the categories of the unbalanced data are not distinguished, so the normal-category data set may also be balanced and expanded; the resulting excess of normal-category data in the data set leads to poor model training.
Disclosure of Invention
The invention aims to provide a periodontal disease image classification method and device based on joint learning, in order to solve the problems in the prior art that the prediction result is obtained by combining only a convolutional neural network, and that the categories of unbalanced data are not distinguished during the data balancing operation.
In order to solve the technical problems, the invention specifically provides the following technical scheme:
a method for classifying periodontal disease images by joint learning comprises the following steps:
s1: preprocessing periodontal disease images to obtain a final data set;
the specific steps for acquiring the final data set are as follows:
s11: extracting a region of interest of a periodontal disease image to obtain a first data set, and marking the first data set;
s12: selecting an image from the existing normal tooth image dataset and extracting a region of interest of the selected image as a second dataset using a mask;
s13: collecting, classifying and rearranging the first data set in S11 and the second data set in S12 to extract unbalanced data in the first data set;
s14: performing data balance on the unbalanced data by using an SMOTE method to obtain a third data set, and synthesizing the third data set and the second data set as a final data set;
s2: constructing a joint learning model, wherein the joint learning model is obtained by combining a generative adversarial network, an artificial neural network and a convolutional neural network through weighted averaging;
s3: training the joint learning model by using the final data set to obtain a final classification model of periodontal disease images;
s4: and inputting the image to be predicted into the final classification model to obtain a periodontal disease classification result output by the model, wherein the periodontal disease classification result is used for determining whether the tooth has periodontal disease or not, and outputting the severity of the periodontal disease if the tooth has periodontal disease.
As a preferred embodiment of the present invention, in S11, the means for marking the first data set is:
and labeling each piece of data in the first data set, wherein a label 0 indicates normal teeth, a label 1 indicates benign periodontal disease, and a label 2 indicates malignant periodontal disease.
As a preferred embodiment of the present invention, in S12, the extracting the region of interest of the selected image using a mask is implemented by:
if the region of interest is of an initial size of 598×598 pixels, then direct extraction;
if the region of interest is less than 598×598 pixels, then the region of interest is enlarged to 598×598 pixels;
if the region of interest is greater than 598×598 pixels, the region of interest is scaled down to 598×598 pixels.
As a preferred embodiment of the present invention, in S14, the specific steps of performing data balancing on the unbalanced data using the SMOTE method are:
S141: for any unbalanced data sample x1, calculating its distances to the other unbalanced data by the KNN algorithm, and screening according to a preset distance threshold to find the K nearest neighbours of x1;
S142: randomly selecting one unbalanced data sample x2 from the K nearest neighbours, and calculating its similarity with the current sample;
S143: generating a new synthetic sample x_new based on the similarity, the specific synthesis formula being:
x_new = x1 + λ·(x2 − x1), where λ is a random number in the interval (0, 1);
s144: repeating the steps S141-S143, and generating a specified number of synthesized samples as a third data set.
As a preferred embodiment of the present invention, in S2, the specific formula for obtaining the joint learning model by the weighted average of the generative adversarial network, the artificial neural network and the convolutional neural network is:
F(x) = w_g·G(x) + w_a·A(x) + w_c·C(x)
wherein F represents the joint learning model, G represents the generative adversarial network, A represents the artificial neural network, C represents the convolutional neural network, and w_g, w_a and w_c represent the weights of the generative adversarial network, the artificial neural network and the convolutional neural network respectively, with w_g + w_a + w_c = 1.
As a preferred embodiment of the present invention, in the step S3, the training of the joint learning model using the final data set is performed as follows:
inputting the final data set into the joint learning model for training and learning, and adjusting the weights w_g, w_a and w_c of the joint learning model; judging whether the classification result output during training of the joint learning model is consistent with the labeling result of the final data set, and if not, adjusting the model parameters and the weights of the generative adversarial network, the artificial neural network and the convolutional neural network in the joint learning model; a final classification model of periodontal disease images is thereby obtained.
A joint learning periodontal disease image classification device, using the joint learning periodontal disease image classification method as described above, comprising the following modules:
a preprocessing module: used for receiving periodontal disease images and preprocessing them to obtain a data set;
a model acquisition module: connected to the preprocessing module and used for constructing a joint learning model, the joint learning model being obtained by combining a generative adversarial network, an artificial neural network and a convolutional neural network through weighted averaging;
a server side: connected to the model acquisition module and used for running the joint learning model; it receives the data set and trains the joint learning model on the data set to obtain a final classification model.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention fuses a generative adversarial network, an artificial neural network and a convolutional neural network into a joint learning model; periodontal disease images in the existing data set and periodontal disease images acquired in real time are preprocessed to obtain a final data set, the weights of the joint learning model are updated with the final data set to obtain a final classification model, and the final classification model classifies real-time periodontal disease images. The prediction result obtained by the joint learning model carries the characteristics of all three neural networks, and its accuracy is higher than that of any single neural network.
(2) The invention analyzes the real-time periodontal disease image data set to extract the minority-class unbalanced data within it, and uses the SMOTE method to balance these minority-class data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only and that other implementations can be obtained from the extensions of the drawings provided without inventive effort.
FIG. 1 is a schematic flow chart of a method according to a first embodiment of the invention;
fig. 2 is a schematic diagram of a device structure according to a second embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The concepts related to the present application will be described with reference to the accompanying drawings. It should be noted that the following descriptions of the concepts are only for making the content of the present application easier to understand, and do not represent a limitation on the protection scope of the present application; meanwhile, the embodiments and features in the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Example 1
As shown in fig. 1, the present invention provides a method for classifying periodontal disease images by joint learning, comprising the steps of:
s1: a final dataset for joint learning model training is obtained.
Wherein the specific step of obtaining the final data set comprises:
s11: extracting a region of interest of periodontal disease images to obtain a first data set, and marking the first data set.
Specifically, the periodontal disease image is an X-ray image;
in S11, the way to mark the first data set is:
and labeling each piece of data in the first data set, wherein a label 0 indicates normal teeth, a label 1 indicates benign periodontal disease, and a label 2 indicates malignant periodontal disease.
S12: selecting an image from the existing dental image dataset and extracting a region of interest of the selected image as a second dataset using a mask;
specifically, the existing tooth dataset is the Bitewing Radiology dataset.
In S12, extracting the region of interest of the selected image using a mask is achieved by:
if the region of interest is of an initial size of 598×598 pixels, then direct extraction;
if the region of interest is less than 598×598 pixels, then the region of interest is enlarged to 598×598 pixels;
if the region of interest is greater than 598×598 pixels, the region of interest is scaled down to 598×598 pixels.
Illustratively, to increase the number of data samples, data augmentation is applied to the dataset, including random localization of the ROI (region of interest) in the image, random horizontal flipping, random vertical flipping, and random rotation. Spurious elements such as white borders and overlaid text are removed by inspecting and editing the dental images; however, many variable-sized contours and white arrays used to mask patients' personal information remain. To remove these contours, 7% of each side of the dental image is cropped. Because the extracted ROI is proportional to the image size rather than at a fixed scale, the dental image is scaled down by a random factor between 1.8 and 3.2 and then segmented into 598×598-pixel regions.
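The preprocessing described above (crop 7% of each border, shrink by a random factor between 1.8 and 3.2, then force the ROI to 598×598 pixels) can be sketched as follows. This is an illustrative sketch only: the function names are invented, and the nearest-neighbour resize stands in for whatever resampler the actual implementation uses.

```python
import numpy as np

TARGET = 598  # ROI side length used in the embodiment

def crop_borders(img: np.ndarray, frac: float = 0.07) -> np.ndarray:
    """Crop `frac` of each side to remove contours and overlaid text."""
    h, w = img.shape[:2]
    dh, dw = int(h * frac), int(w * frac)
    return img[dh:h - dh, dw:w - dw]

def resize_nn(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize (illustrative stand-in for a library resampler)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def prepare_roi(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Crop 7% per side, shrink by a random factor in [1.8, 3.2],
    then enlarge or reduce the ROI to TARGET x TARGET pixels (per S12)."""
    img = crop_borders(img)
    factor = rng.uniform(1.8, 3.2)
    h, w = img.shape[:2]
    img = resize_nn(img, max(1, int(h / factor)), max(1, int(w / factor)))
    return resize_nn(img, TARGET, TARGET)
```

Whatever the input resolution, the output is always a fixed 598×598 ROI, matching the three cases (equal, smaller, larger) listed in the embodiment.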
S13: the first data set in S11 and the second data set in S12 are collected, classified and rearranged to extract unbalanced data in the first data set.
In particular, unbalanced data refers to data of an image class in the first dataset that is far less than data of other image classes, i.e. unbalanced data is minority class data.
For example, if the first data set contains very few X-ray images showing gingivitis, training the classification model on that set would prevent the model from sufficiently learning and extracting the features of gingivitis images, so the classification accuracy for periodontal disease caused by gingivitis would be low. The unbalanced categories in the data set therefore need to be identified.
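A minimal sketch of how the minority (unbalanced) categories might be identified from label counts. The 0.5 threshold ratio and the function name are illustrative assumptions, not values taken from the patent:

```python
from collections import Counter

def find_minority_classes(labels, ratio: float = 0.5):
    """Flag classes whose sample count falls below `ratio` times the
    count of the largest class -- these are the 'unbalanced data'
    extracted in S13 for SMOTE balancing (threshold is an assumption)."""
    counts = Counter(labels)
    majority = max(counts.values())
    return sorted(c for c, n in counts.items() if n < ratio * majority)
```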
S14: and carrying out data balance on the unbalanced data by using an SMOTE method to obtain a third data set, and synthesizing the third data set and the second data set as a final data set.
Specifically, normal data in a dataset is in a majority category, while unbalanced data belongs to a minority category in the dataset.
In S14, the specific steps of performing data balancing on the unbalanced data by using the SMOTE method are as follows:
S141: for any unbalanced data sample x1, calculating its distances to the other unbalanced data by the KNN algorithm, and screening according to a preset distance threshold to find the K nearest neighbours of x1;
S142: randomly selecting one unbalanced data sample x2 from the K nearest neighbours, and calculating its similarity with the current sample;
S143: generating a new synthetic sample x_new based on the similarity, the specific synthesis formula being:
x_new = x1 + λ·(x2 − x1), where λ is a random number in the interval (0, 1);
s144: repeating the steps S141-S143, and generating a specified number of synthesized samples as a third data set.
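Steps S141 to S144 can be sketched as a minimal SMOTE implementation. The brute-force nearest-neighbour search and the uniform random interpolation coefficient are illustrative choices, not necessarily those of the patented method:

```python
import numpy as np

def smote(minority: np.ndarray, k: int, n_new: int, seed: int = 0) -> np.ndarray:
    """Minimal SMOTE sketch for S141-S144: pick a minority sample x1,
    find its k nearest neighbours, pick one neighbour x2, and
    interpolate x_new = x1 + lam * (x2 - x1) with lam in (0, 1)."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x1 = minority[i]
        # S141: distances to all other minority samples (brute-force KNN)
        d = np.linalg.norm(minority - x1, axis=1)
        d[i] = np.inf  # exclude x1 itself
        neighbours = np.argsort(d)[:k]
        # S142: pick one of the k nearest neighbours at random
        x2 = minority[rng.choice(neighbours)]
        # S143: synthesise a new sample on the segment between x1 and x2
        lam = rng.random()
        out.append(x1 + lam * (x2 - x1))
    # S144: repeat until the specified number of samples is generated
    return np.stack(out)
```

Every synthetic sample lies on a segment between two real minority samples, so the balanced third data set stays inside the minority class's feature region.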
In the prior art, when balancing unbalanced data, the categories of the data are generally not distinguished before the balancing operation is performed. As a result, normal tooth X-ray images without periodontal disease may also be greatly expanded, increasing the proportion of normal data that contributes little to model training, reducing the share of periodontal disease image features learned during training, and lowering the accuracy of the trained model.
Therefore, in this embodiment, the data are first classified: the periodontal disease images form the first data set and the normal tooth images the second data set, and the balancing operation is applied only to the first data set. In this way every category of periodontal disease image has a sufficient number of samples while the normal tooth images are not expanded, so the model can fully learn the features of each periodontal disease category and classify periodontal disease images more accurately.
S2: a joint learning model is constructed; the joint learning model is obtained by combining a generative adversarial network, an artificial neural network and a convolutional neural network through weighted averaging.
In the step S2, the specific formula for obtaining the joint learning model by the weighted average of the generative adversarial network, the artificial neural network and the convolutional neural network is:
F(x) = w_g·G(x) + w_a·A(x) + w_c·C(x)
wherein F represents the joint learning model, G represents the generative adversarial network, A represents the artificial neural network, C represents the convolutional neural network, and w_g, w_a and w_c represent the weights of the generative adversarial network, the artificial neural network and the convolutional neural network respectively, with w_g + w_a + w_c = 1.
Illustratively, if the generative adversarial network, the artificial neural network and the convolutional neural network are ranked 1, 2 and 3 by weight, the three models are assigned weights in the fused joint learning model in descending order of that ranking.
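The weighted-average fusion of the three networks' class-probability outputs can be sketched as follows. The weight triple below is illustrative only (it merely respects the ranking above and sums to 1); the patent's concrete values are not reproduced here:

```python
import numpy as np

def joint_predict(probs_gan, probs_ann, probs_cnn, w=(1/2, 1/3, 1/6)):
    """Weighted-average fusion of three (n, classes) probability arrays.
    The default weights are illustrative and must sum to 1."""
    w = np.asarray(w, dtype=float)
    assert np.isclose(w.sum(), 1.0)
    stacked = np.stack([probs_gan, probs_ann, probs_cnn])  # (3, n, classes)
    fused = np.tensordot(w, stacked, axes=1)               # (n, classes)
    return fused.argmax(axis=1)  # 0 = normal, 1 = benign, 2 = malignant
```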
S3: and training the joint learning model by using the final data set to obtain a final classification model of periodontal disease images.
Specifically, in the step S3, the specific step of training the joint learning model using the final data set is as follows:
inputting the final data set into the joint learning model for training and learning, and adjusting the weights w_g, w_a and w_c of the joint learning model; judging whether the classification result output during training of the joint learning model is consistent with the labeling result of the final data set, and if not, adjusting the model parameters and the weights of the generative adversarial network, the artificial neural network and the convolutional neural network in the joint learning model; a final classification model of periodontal disease images is thereby obtained.
S4: and inputting the image to be predicted into the final classification model to obtain a periodontal disease classification result output by the model, wherein the periodontal disease classification result is used for determining whether the tooth has periodontal disease or not, and outputting the severity of the periodontal disease if the tooth has periodontal disease.
Specifically, the final classification model outputs a result of 0 or 1 or 2, corresponding to the presence or absence of periodontal disease and the severity of periodontal disease in step S11.
The invention fuses a generative adversarial network, an artificial neural network and a convolutional neural network into a joint learning model; periodontal disease images in the existing data set and periodontal disease images acquired in real time are preprocessed to obtain a final data set, the weights of the joint learning model are updated with the final data set to obtain a final classification model, and the final classification model classifies real-time periodontal disease images.
Meanwhile, the prediction result obtained by the joint learning model carries the characteristics of all three neural networks, and its accuracy is higher than that of a single neural network.
In the process of obtaining the final data set, the method first analyzes the real-time periodontal disease image data set to extract the minority-class unbalanced data in the first data set, and balances these minority-class data using the SMOTE oversampling method.
Example two
As shown in fig. 2, a periodontal disease image classification device for joint learning, which uses the above-described joint learning periodontal disease image classification method, includes the following modules:
a preprocessing module: used for receiving periodontal disease images and preprocessing them to obtain a data set;
a model acquisition module: connected to the preprocessing module and used for constructing a joint learning model, the joint learning model being obtained by combining a generative adversarial network, an artificial neural network and a convolutional neural network through weighted averaging;
a server side: connected to the model acquisition module and used for running the joint learning model; it receives the data set and trains the joint learning model on the data set to obtain a final classification model.
Example III
The present embodiment includes a computer-readable storage medium having stored thereon a data processing program that is executed by a processor to perform a method of classification of periodontal disease images of the joint learning of the first embodiment.
It will be apparent to one of ordinary skill in the art that embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
The description herein is with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in the specification and in the claims, the terms "a," "an," and "the" are not limited to the singular and may include the plural, unless the context clearly dictates otherwise. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, or apparatus comprising that element.
It should also be noted that the terms "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," "outer," and the like indicate an orientation or a positional relationship based on that shown in the drawings, and are merely for convenience of description and simplification of the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present application. Unless specifically stated or limited otherwise, the terms "mounted," "connected," and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art in a specific context.
The above examples and/or embodiments merely illustrate preferred implementations of the present technology and are not intended to limit its embodiments or implementations in any way. Any changes or modifications that a person skilled in the art may make to these embodiments and/or implementations without departing from the scope of the technical means disclosed herein shall be regarded as substantially equivalent to the present technology.
Specific examples are set forth herein to illustrate the principles and embodiments of the present application, and the description above is intended only to assist in understanding the methods of the present application and their core ideas. The foregoing is merely a preferred embodiment of the present application; given the limits of textual description, the specific structure is not restricted, and a person skilled in the art may appropriately modify, alter, or combine the features described above without departing from the principles of the present application. Such modifications, variations, and combinations, as well as the direct application of the concepts and aspects of the invention in other contexts without modification, are intended to fall within the scope of this application.
Claims (7)
1. A joint-learning periodontal disease image classification method, characterized by comprising the following steps:
s1: acquiring a final data set for training a joint learning model;
the specific steps for acquiring the final data set are as follows:
s11: extracting a region of interest of a periodontal disease image to obtain a first data set, and marking the first data set;
s12: selecting an image from the existing normal tooth image dataset and extracting a region of interest of the selected image as a second dataset using a mask;
s13: collecting, classifying and rearranging the first data set in S11 and the second data set in S12 to extract unbalanced data in the first data set;
s14: performing data balance on the unbalanced data by using an SMOTE method to obtain a third data set, and synthesizing the third data set and the second data set as a final data set;
s2: constructing a joint learning model, wherein the joint learning model is obtained by generating an antagonism network, an artificial neural network and a convolution neural network through weighted average;
s3: training the joint learning model by using the final data set to obtain a final classification model of periodontal disease images;
s4: and inputting the image to be predicted into the final classification model to obtain a periodontal disease classification result output by the model, wherein the periodontal disease classification result is used for determining whether the tooth has periodontal disease or not, and outputting the severity of the periodontal disease if the tooth has periodontal disease.
2. The joint-learning periodontal disease image classification method according to claim 1, wherein in S11 the first data set is labeled as follows:
each piece of data in the first data set is labeled, wherein label 0 indicates a normal tooth, label 1 indicates benign periodontal disease, and label 2 indicates malignant periodontal disease.
3. The joint-learning periodontal disease image classification method according to claim 1, wherein in S12 the region of interest of the selected image is extracted using a mask as follows:
if the region of interest is already at the initial size of 598×598 pixels, it is extracted directly;
if the region of interest is smaller than 598×598 pixels, it is enlarged to 598×598 pixels;
if the region of interest is larger than 598×598 pixels, it is reduced to 598×598 pixels.
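As a minimal sketch of the three size cases in claim 3 (not the patent's actual implementation), assuming ROIs are NumPy arrays and using a simple nearest-neighbour resize; the function names are illustrative:

```python
import numpy as np

TARGET = 598  # target ROI side length in pixels, per claim 3

def nearest_resize(roi: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbour resize of an ROI to size x size pixels."""
    h, w = roi.shape[:2]
    rows = (np.arange(size) * h // size).clip(0, h - 1)
    cols = (np.arange(size) * w // size).clip(0, w - 1)
    return roi[rows][:, cols]

def normalize_roi(roi: np.ndarray) -> np.ndarray:
    """Apply the three cases of claim 3: keep, enlarge, or shrink."""
    if roi.shape[:2] == (TARGET, TARGET):
        return roi                      # already 598x598: extract directly
    return nearest_resize(roi, TARGET)  # smaller -> enlarged, larger -> reduced
```

In practice a library resampler (e.g. bilinear interpolation) would likely be preferred; nearest-neighbour is used here only to keep the sketch self-contained.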
4. The joint-learning periodontal disease image classification method according to claim 1, wherein in S14 the specific steps of balancing the unbalanced data using the SMOTE method are:
s141: for any one of the unbalanced data x 1 Calculating the distance between the KNN algorithm and other unbalanced data, and screening according to a distance preset value to find the unbalanced data x 1 K nearest neighbors of (2);
s142: randomly selecting one unbalanced data x from the K nearest neighbors 2 And calculates the similarity with the current sample;
S143: based on the similarityGenerating a new synthetic sample x new The specific synthesis formula is as follows:
;
s144: repeating the steps S141-S143, and generating a specified number of synthesized samples as a third data set.
5. The joint-learning periodontal disease image classification method according to claim 1, wherein in S2 the joint learning model is obtained from the generative adversarial network, the convolutional neural network and the artificial neural network by weighted averaging according to:
M = w_G·G + w_A·A + w_C·C;
wherein M represents the joint learning model, G represents the generative adversarial network, A represents the artificial neural network, C represents the convolutional neural network, and w_G, w_A and w_C represent the weights of the generative adversarial network, the artificial neural network and the convolutional neural network, respectively.
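As a non-authoritative sketch of the weighted-average combination in claim 5, assuming each of the three networks outputs a class-probability vector for the same input; the weight values below are illustrative, not taken from the patent:

```python
import numpy as np

def joint_predict(p_gan, p_ann, p_cnn, w_gan=0.3, w_ann=0.3, w_cnn=0.4):
    """Weighted average of the three networks' class-probability outputs.

    Weights are assumed to sum to 1 so the fused vector remains a
    probability distribution; the predicted class is its argmax.
    """
    probs = w_gan * p_gan + w_ann * p_ann + w_cnn * p_cnn
    return probs, int(np.argmax(probs))  # fused probabilities, class label
```

This is a soft-voting ensemble: each member contributes in proportion to its weight, and the final label is the class with the highest fused probability.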
6. The joint-learning periodontal disease image classification method according to claim 1, wherein in S3 the joint learning model is trained with the final data set as follows:
the final data set is input into the joint learning model for training and learning, and the weights of the joint learning model are adjusted; whether the classification result output during training matches the labels of the final data set is judged, and if not, the model parameters and the weights of the generative adversarial network, the artificial neural network and the convolutional neural network within the joint learning model are adjusted, thereby obtaining the final classification model of periodontal disease images.
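The claim does not specify how the ensemble weights are adjusted when predictions disagree with the labels. One plausible rule, shown purely as an assumption, is to shift weight toward the member networks whose individual outputs agree more often with the labels:

```python
import numpy as np

def update_weights(weights, accuracies, lr=0.5):
    """Illustrative weight-adjustment rule (not from the patent text).

    `accuracies` holds each member network's label agreement on the
    training set; weights are blended toward each network's accuracy
    share and renormalized so they still sum to 1.
    """
    acc = np.asarray(accuracies, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = (1 - lr) * w + lr * acc / acc.sum()  # blend toward accuracy share
    return w / w.sum()                       # keep weights normalized
```

Repeating this after each training round gradually favours the stronger member networks while keeping the combination a valid weighted average.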
7. A joint-learning periodontal disease image classification apparatus using the joint-learning periodontal disease image classification method according to any one of claims 1 to 6, characterized by comprising the following modules:
a preprocessing module: configured to receive periodontal disease images and preprocess them to obtain a data set;
a model acquisition module: connected to the preprocessing module and configured to construct a joint learning model, the joint learning model being obtained from a generative adversarial network, an artificial neural network and a convolutional neural network by weighted averaging;
a server side: connected to the model acquisition module and configured to run the joint learning model, and further configured to receive the data set and train the joint learning model on the data set to obtain a final classification model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311642858.3A CN117351293B (en) | 2023-12-04 | 2023-12-04 | Combined learning periodontal disease image classification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117351293A CN117351293A (en) | 2024-01-05 |
CN117351293B (en) | 2024-02-06
Family
ID=89363560
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311642858.3A Active CN117351293B (en) | 2023-12-04 | 2023-12-04 | Combined learning periodontal disease image classification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117351293B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389587A (en) * | 2018-09-26 | 2019-02-26 | 上海联影智能医疗科技有限公司 | A kind of medical image analysis system, device and storage medium |
WO2020047739A1 (en) * | 2018-09-04 | 2020-03-12 | 安徽中科智能感知大数据产业技术研究院有限责任公司 | Method for predicting severe wheat disease on the basis of multiple time-series attribute element depth features |
AU2020103613A4 (en) * | 2020-11-23 | 2021-02-04 | Agricultural Information and Rural Economic Research Institute of Sichuan Academy of Agricultural Sciences | Cnn and transfer learning based disease intelligent identification method and system |
CN112819076A (en) * | 2021-02-03 | 2021-05-18 | 中南大学 | Deep migration learning-based medical image classification model training method and device |
CN113239755A (en) * | 2021-04-28 | 2021-08-10 | 湖南大学 | Medical hyperspectral image classification method based on space-spectrum fusion deep learning |
CN115602325A (en) * | 2022-09-30 | 2023-01-13 | 易联众云链科技(福建)有限公司(Cn) | Chronic disease risk assessment method and system based on multi-model algorithm |
CN115878999A (en) * | 2022-12-09 | 2023-03-31 | 宝鸡文理学院 | Oversampling method and system for differential evolution of highly unbalanced data sets |
JP7386370B1 (en) * | 2022-09-09 | 2023-11-24 | 之江実験室 | Multi-task hybrid supervised medical image segmentation method and system based on federated learning |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110473283B (en) * | 2018-05-09 | 2024-01-23 | 无锡时代天使医疗器械科技有限公司 | Method for setting local coordinate system of tooth three-dimensional digital model |
KR20230147293A (en) * | 2022-04-14 | 2023-10-23 | 삼성에스디에스 주식회사 | Method for augmenting data and system thereof |
Non-Patent Citations (1)
Title |
---|
GIDGC-KNN imbalanced-data classifier based on geodesic distance; Zhang Liwang; Shi Zhibin; Computer Engineering and Design (02); full text *
Also Published As
Publication number | Publication date |
---|---|
CN117351293A (en) | 2024-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Mao et al. | Caries and restoration detection using bitewing film based on transfer learning with CNNs | |
CN109919230B (en) | Medical image pulmonary nodule detection method based on cyclic feature pyramid | |
CN108830326B (en) | Automatic segmentation method and device for MRI (magnetic resonance imaging) image | |
US20220198230A1 (en) | Auxiliary detection method and image recognition method for rib fractures based on deep learning | |
Rajaraman et al. | Training deep learning algorithms with weakly labeled pneumonia chest X-ray data for COVID-19 detection | |
CN115880266B (en) | Intestinal polyp detection system and method based on deep learning | |
CN112528782A (en) | Underwater fish target detection method and device | |
CN112381762A (en) | CT rib fracture auxiliary diagnosis system based on deep learning algorithm | |
CN116188879B (en) | Image classification and image classification model training method, device, equipment and medium | |
CN113011340A (en) | Cardiovascular surgery index risk classification method and system based on retina image | |
Chhabra et al. | An efficient ResNet-50 based intelligent deep learning model to predict pneumonia from medical images | |
CN113160151B (en) | Panoramic sheet decayed tooth depth identification method based on deep learning and attention mechanism | |
CN114612381A (en) | Medical image focus detection algorithm with scale enhancement and attention fusion | |
CN117351293B (en) | Combined learning periodontal disease image classification method and device | |
CN111626986B (en) | Three-dimensional ultrasonic abdominal wall hernia patch detection method and system based on deep learning | |
CN116633639B (en) | Network intrusion detection method based on unsupervised and supervised fusion reinforcement learning | |
CN110705613B (en) | Object classification method | |
CN111612021A (en) | Error sample identification method and device and terminal | |
CN115641344A (en) | Method for segmenting optic disc image in fundus image | |
Al Fryan et al. | Application of Deep Learning System Technology in Identification of Women’s Breast Cancer | |
Wang et al. | A multi-stage data augmentation approach for imbalanced samples in image recognition | |
CN113077466A (en) | Medical image classification method and device based on multi-scale perception loss | |
CN113327221A (en) | Image synthesis method and device fusing ROI (region of interest), electronic equipment and medium | |
Liu et al. | Ensemble Learning with multiclassifiers on pediatric hand radiograph segmentation for bone age assessment | |
Chandra et al. | A Deep InceptionV3 Model for Detecting Tuberculosis Disease Using CXR Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||