CN110751183A - Image data classification model generation method, image data classification method and device


Info

Publication number
CN110751183A
Authority
CN
China
Prior art keywords
medical image
sample medical
model
training
layer
Prior art date
Legal status
Pending
Application number
CN201910907289.8A
Other languages
Chinese (zh)
Inventor
平安
何光宇
王希
于洪勇
Current Assignee
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN201910907289.8A
Publication of CN110751183A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The application discloses a generation method of an image data classification model, and an image data classification method and device, which are used for improving the generalization capability and classification accuracy of the model. The generation method of the image data classification model comprises the following steps: acquiring a training data set, wherein the training data set comprises a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label; and training model parameters of an initial deep learning model according to the positive sample medical image and the negative sample medical image to generate an image data classification model. The initial deep learning model is a three-dimensional convolution dense network neural network model and comprises a convolution layer, a maximum pooling layer, dense block layers and a fully-connected layer which are sequentially connected. Each dense block layer comprises a plurality of fully-connected dense layers and a transition layer which are sequentially connected, the input of each fully-connected dense layer is used as the input of the subsequent fully-connected dense layers and the transition layer, and each fully-connected dense layer comprises a first convolution layer and a second convolution layer which are sequentially connected.

Description

Image data classification model generation method, image data classification method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating an image data classification model, and a method and an apparatus for classifying image data.
Background
Deep learning is a machine learning method based on representation learning of data and has been widely used in recent years, for example in image classification. In practical applications, a deep learning model with high discrimination accuracy can be trained by adjusting model parameters. However, during training, the training data that can be obtained is limited by the application environment, so the generalization capability of the model cannot be improved, which in turn affects the classification results.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and an apparatus for generating an image data classification model, and a method and an apparatus for classifying image data, so as to improve the generalization capability and classification accuracy of the model.
In order to solve the above problem, the technical solution provided by the embodiment of the present application is as follows:
a method for generating an image data classification model, the method comprising:
acquiring a training data set, wherein the training data set comprises a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label;
training model parameters of an initial deep learning model according to positive sample medical images and negative sample medical images in the training data set to generate an image data classification model;
the initial deep learning model is a three-dimensional convolution dense network neural network model, the three-dimensional convolution dense network neural network model comprises a convolution layer, a maximum pooling layer, dense block layers and full-connection layers which are sequentially connected, each dense block layer comprises a plurality of full-connection dense layers and a transition layer which are sequentially connected, the input of each full-connection dense layer is used as the input of the subsequent full-connection dense layer and the input of the transition layer, and each full-connection dense layer comprises a first convolution layer and a second convolution layer which are sequentially connected.
In one possible implementation, the method further includes:
acquiring a positive sample medical image and a negative sample medical image, and adding the positive sample medical image and the negative sample medical image into a training data set;
weighting and summing the positive sample medical image and the negative sample medical image to generate a target medical image; the sum of the weight corresponding to the positive sample medical image and the weight corresponding to the negative sample medical image is one;
when the weight corresponding to the positive sample medical image is greater than the weight corresponding to the negative sample medical image, determining the target medical image as a positive sample medical image and adding the positive sample medical image to the training data set;
and when the weight corresponding to the positive sample medical image is smaller than the weight corresponding to the negative sample medical image, determining the target medical image as a negative sample medical image and adding the negative sample medical image into the training data set.
In one possible implementation, the acquiring of the positive sample medical image and the negative sample medical image includes:
acquiring an original positive sample medical image and an original negative sample medical image from an original medical image database;
according to actual medical image acquisition conditions, extracting preset layers of images from the original positive sample medical image at equal intervals to generate a positive sample medical image, and extracting preset layers of images from the original negative sample medical image at equal intervals to generate a negative sample medical image.
In one possible implementation manner, the training the model parameters of the initial deep learning model according to the positive sample medical image and the negative sample medical image in the training data set to generate the image data classification model includes:
determining the initial deep learning model as a current deep learning model;
extracting K groups of training data subsets from the training data set, wherein each group of training data subsets comprises the same number of positive sample medical images and negative sample medical images, and K is an integer larger than 1;
training the model parameters of the initial deep learning model by using the ith group of training data subset to obtain the model parameters of the classification model corresponding to the ith group of training data subset, wherein the value of i is an integer from 1 to K;
calculating the average value of the model parameters of the classification model corresponding to the K groups of training data subsets, updating the model parameters of the current deep learning model according to the average value of the model parameters, and regenerating the current deep learning model;
and re-executing the K groups of training data subsets extracted from the training data set and the subsequent steps until a preset training stopping condition is reached, and generating an image data classification model.
In a possible implementation manner, the preset training stop condition is that the step of extracting K groups of training data subsets from the training data set and the subsequent steps have been re-executed a preset number of times, or that the precision of the newly generated current deep learning model reaches a threshold.
In one possible implementation, the three-dimensional convolution dense network neural network model further includes a random deactivation layer connected between the dense block layer and the fully-connected layer;
the convolution kernel size of the first convolution layer is 1 x 1, the convolution kernel size of the second convolution layer is m x n, m is an integer which is greater than or equal to 2 and smaller than p, p is an integer which is smaller than n, n is a positive integer, and the value of n is determined according to the image size of the positive sample medical image or the negative sample medical image.
A method of classifying image data, the method comprising:
acquiring an actual medical image;
and inputting the actual medical image into an image data classification model generated by pre-training to obtain a classification result of the actual medical image, wherein the image data classification model is generated by training according to the generation method of the image data classification model.
An apparatus for generating an image data classification model, the apparatus comprising:
a first obtaining unit, configured to obtain a training data set, where the training data set includes a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label;
the first generation unit is used for training the model parameters of the initial deep learning model according to the positive sample medical images and the negative sample medical images in the training data set to generate an image data classification model;
the initial deep learning model is a three-dimensional convolution dense network neural network model, the three-dimensional convolution dense network neural network model comprises a convolution layer, a maximum pooling layer, dense block layers and full-connection layers which are sequentially connected, each dense block layer comprises a plurality of full-connection dense layers and a transition layer which are sequentially connected, the input of each full-connection dense layer is used as the input of the subsequent full-connection dense layer and the input of the transition layer, and each full-connection dense layer comprises a first convolution layer and a second convolution layer which are sequentially connected.
In one possible implementation, the apparatus further includes:
a second obtaining unit, configured to obtain a positive sample medical image and a negative sample medical image, and add the positive sample medical image and the negative sample medical image to a training data set;
a second generation unit, configured to generate a target medical image by weighted summation of the positive sample medical image and the negative sample medical image; the sum of the weight corresponding to the positive sample medical image and the weight corresponding to the negative sample medical image is one;
a determining unit, configured to determine the target medical image as a positive sample medical image to be added to the training data set when the weight corresponding to the positive sample medical image is greater than the weight corresponding to the negative sample medical image; and when the weight corresponding to the positive sample medical image is smaller than the weight corresponding to the negative sample medical image, determining the target medical image as a negative sample medical image and adding the negative sample medical image into the training data set.
In a possible implementation manner, the second obtaining unit includes:
a first acquisition subunit, used for acquiring an original positive sample medical image and an original negative sample medical image from an original medical image database;
the first extraction subunit is used for extracting images with a preset number of layers at equal intervals from the original positive sample medical image according to the actual medical image acquisition condition to generate a positive sample medical image;
and the second extraction subunit is used for extracting images with a preset number of layers at equal intervals from the original negative sample medical image to generate a negative sample medical image.
In one possible implementation manner, the generating unit includes:
a determining subunit, configured to determine the initial deep learning model as a current deep learning model;
an extracting subunit, configured to extract K groups of training data subsets from the training data set, where each group of training data subsets includes the same number of positive sample medical images and negative sample medical images, and K is an integer greater than 1;
the second obtaining subunit is configured to train the model parameters of the initial deep learning model by using the ith group of training data subsets, and obtain the model parameters of the classification model corresponding to the ith group of training data subsets, where a value of i is an integer from 1 to K;
the calculation subunit is used for calculating the average value of the model parameters of the classification model corresponding to the K groups of training data subsets, updating the model parameters of the current deep learning model according to the average value of the model parameters, and regenerating the current deep learning model; and re-executing the extraction subunit until a preset training stop condition is reached, and generating an image data classification model.
In a possible implementation manner, the preset training stop condition is that the step of extracting K groups of training data subsets from the training data set and the subsequent steps have been re-executed a preset number of times, or that the precision of the newly generated current deep learning model reaches a threshold.
In one possible implementation, the three-dimensional convolution dense network neural network model further includes a random deactivation layer connected between the dense block layer and the fully-connected layer;
the convolution kernel size of the first convolution layer is 1 x 1, the convolution kernel size of the second convolution layer is m x n, m is an integer which is greater than or equal to 2 and smaller than p, p is an integer which is smaller than n, n is a positive integer, and the value of n is determined according to the image size of the positive sample medical image or the negative sample medical image.
An apparatus for classifying image data, the apparatus comprising:
a third acquiring unit for acquiring an actual medical image;
and the fourth acquisition unit is used for inputting the actual medical image into an image data classification model generated by pre-training to obtain a classification result of the actual medical image, wherein the image data classification model is generated by training according to the generation method of the image data classification model.
A computer-readable storage medium, which stores instructions that, when executed on a terminal device, cause the terminal device to execute the above-mentioned method for generating an image data classification model or the above-mentioned method for classifying image data.
An apparatus for generating an image data classification model, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the above-mentioned method for generating an image data classification model.
An apparatus for classifying image data, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the above-mentioned method for classifying image data.
Therefore, the embodiment of the application has the following beneficial effects:
in the embodiment of the application, a training data set is firstly obtained, and the training data set comprises a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label. Then, the model parameters of the initial deep learning model are trained by using the training data set, so that an image data classification model is obtained. The initial deep learning model is a three-dimensional convolution dense neural network model, the model can comprise a dense block layer, the dense block layer comprises a plurality of fully-connected dense layers and a transition layer which are connected in sequence, the input of each fully-connected dense layer serves as the input of the subsequent fully-connected dense layer and the input of the transition layer, and each fully-connected dense layer comprises a first convolution layer and a second convolution layer which are connected in sequence. That is, when the initial deep learning model is trained by using the training data set, because the input of each fully-connected dense layer can be used as the input of the subsequent fully-connected dense layer and transition layer, the subsequent fully-connected dense layer can receive all the previously fully-connected features, the transfer of the features is enhanced, the initial deep learning model is trained by using more layers of features to obtain the image data classification model, and thus the generalization capability of the model is improved and the classification accuracy is improved.
In actual application, an actual medical image is obtained, the actual medical image is input into the generated image data classification model, and the accuracy of the classification result of the actual medical image is improved by obtaining more characteristic information.
Drawings
Fig. 1 is a flowchart of a method for generating an image data classification model according to an embodiment of the present disclosure;
fig. 2 is a diagram of a three-dimensional convolution dense network neural network model structure provided in an embodiment of the present application;
fig. 3 is a flowchart of a sample increment processing method according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for training an initial deep learning model according to an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating an example of a three-dimensional convolution dense network neural network model provided in an embodiment of the present application;
fig. 6 is a flowchart of an image data classification method according to an embodiment of the present disclosure;
fig. 7 is a structural diagram of an apparatus for generating an image data classification model according to an embodiment of the present application;
fig. 8 is a structural diagram of an apparatus for classifying image data according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the drawings are described in detail below.
In studying the training process of traditional image data classification models, the inventors found that, in practical applications, medical image data is costly to acquire, the amount of medical image data that can be obtained is limited, relatively few image features can be extracted, and the classification accuracy of the generated image data classification model is therefore low.
Based on this, an embodiment of the present application provides a method for generating an image data classification model, and specifically, a training data set is obtained, and model parameters of an initial deep learning model are trained by using a positive sample medical image and a negative sample medical image in the training data set to generate the image data classification model. The trained initial deep learning model is a three-dimensional convolution dense network neural network model, and the network model comprises a convolution layer, a maximum pooling layer, a dense block layer and a full-connection layer which are sequentially connected. The dense block layer comprises a plurality of fully-connected dense layers and a transition layer which are connected in sequence, and the input of each fully-connected dense layer is used as the input of the subsequent fully-connected dense layer and the transition layer. That is, since the dense block layer adopts the full-connected mode, any subsequent layer can receive the feature information of all previous layers, so that more layers of feature information can be obtained, and further, when the image data classification model generated by training is actually applied, more layers of feature information can be obtained, so that more feature information is utilized for classification, and the classification accuracy is improved.
In order to facilitate understanding of the technical solutions provided in the present application, the technical solutions will be described below with reference to the accompanying drawings.
Referring to fig. 1, which is a flowchart of a method for generating an image data classification model according to an embodiment of the present disclosure, as shown in fig. 1, the method may include:
s101: a training data set is obtained.
In this embodiment, to train the image data classification model, a training data set used in training is first obtained, where the training data set includes a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label.
The positive sample medical image carrying the positive sample label is a medical image carrying a certain characteristic or having a certain classification result; the negative sample medical image carrying the negative sample label refers to a medical image not carrying a certain characteristic or having no certain classification result.
S102: and training the model parameters of the initial deep learning model according to the positive sample medical image and the negative sample medical image in the training data set to generate an image data classification model.
After the training data set is obtained, the model parameters of the initial deep learning model are trained by using the positive sample medical images and the negative sample medical images in the training data set, so that an image data classification model is generated, and the input medical images are classified by using the image data classification model.
The initial deep learning model is a three-dimensional convolution dense network neural network model, the three-dimensional convolution dense network neural network model comprises a convolution layer, a maximum pooling layer, dense block layers and full-connection layers which are sequentially connected, the dense block layers comprise a plurality of full-connection dense layers and a transition layer which are sequentially connected, the input of each full-connection dense layer is used as the input of the subsequent full-connection dense layer and the transition layer, and each full-connection dense layer comprises a first convolution layer and a second convolution layer which are sequentially connected. Wherein the transition layer may include a third convolution layer and an average pooling layer.
For easy understanding, referring to the three-dimensional convolution dense network neural network model structure diagram shown in fig. 2, the dense block layer including two fully connected dense layers is illustrated as an example on the right side of fig. 2. The data input into the first fully-connected dense layer can also be used as the input of the second fully-connected dense layer and the transition layer, and the data input into the second fully-connected dense layer can also be used as the input of the transition layer.
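For illustration only, the following is a minimal PyTorch sketch of one possible implementation of such a dense block; the growth rate of 12 channels, the 3 x 3 x 3 kernel of the second convolution layer, and the class names are illustrative assumptions rather than values fixed by this application.

```python
import torch
import torch.nn as nn

class DenseLayer3D(nn.Module):
    """One fully-connected dense layer: a 1 x 1 x 1 first convolution layer followed by a
    larger second convolution layer; the layer's input is concatenated with its output so
    that every subsequent layer receives the features of all preceding layers."""
    def __init__(self, in_channels, growth=12, kernel=(3, 3, 3)):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, growth, kernel_size=1)
        self.conv2 = nn.Conv3d(growth, growth, kernel_size=kernel,
                               padding=tuple(k // 2 for k in kernel))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv2(self.relu(self.conv1(x))))
        return torch.cat([x, out], dim=1)   # pass input and output on together

class DenseBlock3D(nn.Module):
    """A dense block layer: several fully-connected dense layers followed by a transition
    layer consisting of a 1 x 1 x 1 convolution and an average pooling layer."""
    def __init__(self, in_channels, num_layers=2, growth=12):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer3D(channels, growth))
            channels += growth
        self.dense_layers = nn.Sequential(*layers)
        self.transition = nn.Sequential(
            nn.Conv3d(channels, growth, kernel_size=1),
            nn.AvgPool3d(kernel_size=2),
        )

    def forward(self, x):
        return self.transition(self.dense_layers(x))
```

Because each dense layer concatenates its input to its output, the data entering the first fully-connected dense layer is also part of the input of the second fully-connected dense layer and of the transition layer, which is the connectivity described for fig. 2.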
During actual training, the positive sample medical images and the negative sample medical images in the training data set are used as input data and input into the three-dimensional convolution dense network neural network model, and the parameters of all layers in the network model are obtained through training, so that an image data classification model is generated. A specific implementation of generating an image data classification model by training on positive sample medical images and negative sample medical images will be described in the following embodiments.
As can be seen from the above description, a training data set is first obtained, where the training data set includes a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label. Then, the model parameters of the initial deep learning model are trained by using the training data set, so that an image data classification model is obtained. The initial deep learning model is a three-dimensional convolution dense neural network model, the model can comprise a dense block layer, the dense block layer comprises a plurality of fully-connected dense layers and a transition layer which are connected in sequence, the input of each fully-connected dense layer serves as the input of the subsequent fully-connected dense layer and the input of the transition layer, and each fully-connected dense layer comprises a first convolution layer and a second convolution layer which are connected in sequence. That is, when the initial deep learning model is trained by using the training data set, because the input of each fully-connected dense layer can be used as the input of the subsequent fully-connected dense layer and transition layer, the subsequent fully-connected dense layer can receive all the previously fully-connected features, the transfer of the features is enhanced, the initial deep learning model is trained by using more layers of features to obtain the image data classification model, and the classification accuracy is improved.
It can be understood that, in practical application, the generalization capability of the network model is affected due to the limited number of acquired sample medical images. In order to solve the above problem, the present embodiment provides a sample increment processing method to obtain an effective increment image, enhance the generalization capability of a network model, and further improve the accuracy of classification.
Referring to fig. 3, which is a flowchart of a sample increment processing method provided in an embodiment of the present application, as shown in fig. 3, the method may include:
s301: and acquiring a positive sample medical image and a negative sample medical image, and adding the positive sample medical image and the negative sample medical image into the training data set.
That is, first, a positive sample medical image and a negative sample medical image are acquired, and the two medical images are added to a training data set as training data.
In a specific implementation, the embodiment provides a method for acquiring a positive/negative sample medical image, which may specifically include:
1) and acquiring an original positive sample medical image and an original negative sample medical image from an original medical image database.
2) According to the actual medical image acquisition condition, extracting the images with the preset number of layers at equal intervals from the original positive sample medical image to generate a positive sample medical image, and extracting the images with the preset number of layers at equal intervals from the original negative sample medical image to generate a negative sample medical image.
First, an original positive sample medical image and an original negative sample medical image are acquired from an original medical image database. Then, according to the actual medical image acquisition conditions, the original positive sample medical image and the original negative sample medical image are processed to obtain the positive sample medical image and the negative sample medical image required for training. Specifically, the original positive sample medical image is sampled at equal intervals, and the extracted image is used as the positive sample medical image; the original negative sample medical image is likewise sampled at equal intervals, and the extracted image is used as the negative sample medical image. That is, the original positive sample medical image and the original negative sample medical image are both multilayer images, and in order to meet the actual requirements, a preset number of layers is extracted from the original positive sample medical image to obtain a positive sample medical image with fewer layers, and a preset number of layers is extracted from the original negative sample medical image to obtain a negative sample medical image with fewer layers.
For example, each of the original positive sample medical image and the original negative sample medical image includes 256 layers of images, and 18 layers of images among the 256 layers of images are extracted at equal intervals, thereby obtaining a positive sample medical image and a negative sample medical image.
In addition, for the convenience of subsequent training, after the positive sample medical image and the negative sample medical image are obtained, normalization preprocessing can be performed on the positive sample medical image and the negative sample medical image, and the normalized positive/negative sample medical image is used for training the initial deep learning model.
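As an illustration of the equal-interval extraction and normalization preprocessing described above, the following NumPy sketch assumes an original sample stored as a (depth, height, width) array; the 256-layer, 18-layer and 200 x 200 figures follow the example above and the later application scenario, and the helper names are illustrative.

```python
import numpy as np

def extract_slices(volume, num_layers=18):
    """Extract a preset number of layers at equal intervals from a multi-layer
    medical image volume of shape (depth, height, width)."""
    indices = np.linspace(0, volume.shape[0] - 1, num_layers).astype(int)  # evenly spaced layer indices
    return volume[indices]

def normalize(volume):
    """Standard-score normalization applied before training."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

# Example: a 256-layer original sample reduced to an 18-layer training sample.
original = np.random.rand(256, 200, 200)      # stand-in for an original sample medical image
sample = normalize(extract_slices(original))  # shape (18, 200, 200)
```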
S302: and weighting and summing the positive sample medical image and the negative sample medical image to generate a target medical image.
In this embodiment, after the positive/negative sample medical image is acquired, weighting processing is performed on the positive/negative sample medical image to obtain an effective incremental image, so as to increase training data. The sum of the weight corresponding to the positive sample medical image and the weight corresponding to the negative sample medical image is one. Specifically, see the following equation:
M = wd * Md + wh * Mh    (1)
where Md is the positive sample medical image, wd is the weight corresponding to the positive sample medical image, Mh is the negative sample medical image, wh is the weight corresponding to the negative sample medical image, and M is the target medical image.
S303: and when the weight corresponding to the positive sample medical image is greater than the weight corresponding to the negative sample medical image, determining the target medical image as the positive sample medical image and adding the positive sample medical image into the training data set.
S304: and when the weight corresponding to the positive sample medical image is smaller than the weight corresponding to the negative sample medical image, determining the target medical image as the negative sample medical image and adding the negative sample medical image into the training data set.
After the incremental image is obtained, whether the generated target medical image belongs to the positive sample medical images or the negative sample medical images can be determined according to the respective weights of the positive sample medical image and the negative sample medical image. Specifically, when the weight corresponding to the positive sample medical image is greater than the weight corresponding to the negative sample medical image, the target medical image is determined to be a positive sample medical image; when the weight corresponding to the negative sample medical image is greater than the weight corresponding to the positive sample medical image, the target medical image is determined to be a negative sample medical image. That is, when wd > wh, the target medical image is a positive sample medical image, namely a diseased sample; when wd < wh, the target medical image is a negative sample medical image, namely a healthy sample. In other words, the training data set includes not only the original positive/negative sample medical images but also the incrementally generated positive/negative sample medical images.
By the image weighted combination incremental processing method provided by the embodiment, training data can be effectively increased, so that an initial deep learning model is trained by using large-scale effective incremental images, the generalization capability of the model is enhanced, and the classification accuracy is improved.
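A minimal NumPy sketch of this weighted-combination increment, following formula (1) with the two weights summing to one, is given below; the helper name and the 3/4-to-1/4 example weights (the same split as in the application scenario later in the text) are illustrative.

```python
import numpy as np

def make_incremental_sample(positive, negative, w_d):
    """Weighted combination of a positive and a negative sample medical image
    according to formula (1); the weights w_d and w_h sum to one, and the label
    of the target medical image follows the larger weight."""
    w_h = 1.0 - w_d
    target = w_d * positive + w_h * negative
    label = 1 if w_d > w_h else 0   # 1: positive (diseased) sample, 0: negative (healthy) sample
    return target, label

# Example: 3/4 positive plus 1/4 negative yields an incremental positive sample.
pos = np.random.rand(18, 200, 200)
neg = np.random.rand(18, 200, 200)
incremental_image, incremental_label = make_incremental_sample(pos, neg, w_d=0.75)
```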
In one possible implementation manner of the embodiment of the present application, a process for training the initial deep learning model using positive/negative sample medical images is provided, and this process will be described below with reference to the accompanying drawings.
Referring to fig. 4, which is a flowchart of a model training method provided in an embodiment of the present application, as shown in fig. 4, the method may include:
s401: and determining the initial deep learning model as the current deep learning model.
S402: k groups of training data subsets are extracted from the training data set, each group of training data subsets comprises positive sample medical image images and negative sample medical image images which are the same in quantity, and K is an integer larger than 1.
In this embodiment, a plurality of sets of training data subsets are extracted from a training data set, where each set of training data subsets includes the same number of positive sample medical image images and negative sample medical image images.
S403: and training the model parameters of the initial deep learning model by using the ith group of training data subset to obtain the model parameters of the classification model corresponding to the ith group of training data subset, wherein the value of i is an integer from 1 to K.
Namely, each group of training data subsets is used to separately train the model parameters of the initial deep learning model, and the model parameters of the classification model corresponding to each group of training data subsets are obtained, so that K groups of model parameters are obtained. For example, if the value of K is 3, the 1st group of training data subsets may be used to train the model parameters of the initial deep learning model, so as to obtain the model parameters of the classification model corresponding to the 1st group of training data subsets, and by analogy, the model parameters of the classification models corresponding to the 3 groups of training data subsets may be obtained respectively. It is understood that a classification model may have multiple model parameters; for example, the classification model corresponding to each training data subset may have 20 model parameters.
S404: and calculating the average value of the model parameters of the classification model corresponding to the K groups of training data subsets, updating the model parameters of the current deep learning model according to the average value of the model parameters, and regenerating the current deep learning model.
And after model parameters of the classification models respectively corresponding to the K groups of training data subsets are obtained, calculating the average value of the K groups of model parameters, and updating the model parameters of the current deep learning model by using the average value, thereby generating a new current deep learning model.
In a specific implementation, the model parameters of the current deep learning model may be updated using an update formula (reproduced only as an image in the original publication) that combines the current parameters with the average of the K groups of model parameters through a decay coefficient. The average of the K groups of model parameters is
θ_mean = (θ1 + θ2 + ... + θK) / K
where θi denotes the model parameters of the classification model corresponding to the i-th group of training data subsets. θ0 denotes the initial values of the model parameters, θf denotes the model parameters of the new current deep learning model generated in one cycle, and α is a decay coefficient whose initial value is α0 and which changes with the cycle number according to a further formula (also reproduced only as an image), where j is the current cycle number and N is the set total number of training cycles.
S405: judging whether a preset training stopping condition is reached, and if so, stopping training; otherwise, S402 is executed.
That is, each time the training operations of S402-S404 have been performed, it is determined whether a preset training stop condition has been reached. If the preset training stop condition has been reached, the training is stopped and the image data classification model is obtained; if the preset training stop condition has not been reached, the step of extracting K groups of training data subsets from the training data set and the subsequent steps are re-executed. The preset training stop condition is that the step of extracting K groups of training data subsets from the training data set and the subsequent steps have been re-executed a preset number of times, or that the precision of the newly generated current deep learning model reaches a threshold.
In a specific implementation, when the preset training stop condition is not met, the initial values of the model parameters are updated as θ0 = θf, and the next training operation is entered. That is, K groups of training data subsets are extracted from the training data set again, K classification models are generated by training, and θf is then obtained by joint calculation.
It should be noted that the model parameters of the classification model corresponding to each training data subset may include one or more, and when a plurality of model parameters are included, the average value of each model parameter may be calculated by using formula (2), so as to update each model parameter of the current deep learning model to the average value of the corresponding model parameter. For example, model parameters of classification models corresponding to 3 sets of training data subsets are obtained, and the model parameters of the classification models corresponding to each set of training data subsets are 20, then the 1 st average value is calculated by using the 1 st model parameter in the 3 sets of model parameters, the 2 nd average value is calculated by using the 2 nd model parameter in the 3 sets of model parameters, and so on, and 20 average values are calculated in total as the average value of the 3 sets of model parameters.
By the training method with the self-learning strategy provided by the embodiment, the model parameters can be effectively learned in the training process, so that the generated model has generalization capability, and the capability of training the classification model by a small amount of sample data sets is improved. In addition, the training method is similar to simultaneous cross training of multiple groups of models, common optimization parameters can be obtained, and the overfitting phenomenon can be effectively prevented.
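As an illustration of the parameter-averaging step of this self-learning strategy, the following PyTorch sketch averages K groups of model parameters and blends them with the current parameters. Because the exact update and decay formulas appear only as images in the original publication, the convex combination and the decay schedule alpha0 * (1 - j / N) used below are assumptions, not the disclosed formulas.

```python
import torch

def average_and_update(theta0, subset_params, alpha):
    """Average the K groups of model parameters (one state dict per training data
    subset) and blend the average with the current parameters theta0 using the
    decay coefficient alpha (assumed update rule)."""
    mean_params = {name: torch.stack([p[name].float() for p in subset_params]).mean(dim=0)
                   for name in theta0}
    return {name: alpha * theta0[name].float() + (1.0 - alpha) * mean_params[name]
            for name in theta0}

# Example with K = 3 toy parameter sets, each holding a single weight tensor.
theta0 = {"fc.weight": torch.zeros(2, 2)}
subset_params = [{"fc.weight": torch.full((2, 2), float(i))} for i in range(1, 4)]
alpha = 0.9 * (1 - 1 / 10)          # assumed decay schedule alpha0 * (1 - j / N) with j = 1, N = 10
theta_f = average_and_update(theta0, subset_params, alpha)
print(theta_f["fc.weight"])         # 0.38 everywhere: (1 - alpha) times the mean of {1, 2, 3}
```

In a full training loop the returned theta_f would be loaded back into the current deep learning model and used as theta0 for the next cycle.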
It should be noted that, in order to further prevent the over-fitting phenomenon during training, the three-dimensional convolution dense network neural network model may further include a random inactivation (dropout) layer, and the random inactivation layer is located between the dense block layer and the fully-connected layer. Specifically, the convolution kernel size of the first convolution layer of the dense block layer is 1 x 1, the convolution kernel size of the second convolution layer is m x n, m is an integer greater than or equal to 2 and smaller than p, p is an integer smaller than n, n is a positive integer, and the value of n is determined according to the image size of the positive sample medical image or the negative sample medical image.
For ease of understanding, refer to fig. 5, which shows a schematic structural diagram of the three-dimensional convolution dense network neural network model, where the left side is the overall structure and the right side is the expanded structure of a dense block layer. In the left diagram, the random inactivation layer is located between the dense block layers and the fully-connected layer to reduce overfitting. The fully-connected layer adopts a softmax function, and the output is a two-class classification. Also in the left diagram, the convolution layer has a convolution kernel of 3 × 8, outputs 64 channels, and has a corresponding stride of 1 × 4; it is followed by a maximum pooling layer with a convolution kernel of 2 × 3 and a stride of 2 × 2. In the right diagram, the convolution kernel of the first convolution layer is 1 × 1 and the output is 12 channels; the convolution kernel of the second convolution layer is 3 × 6 and the output is 12 channels; the transition layer may include a third convolution layer with a convolution kernel of 1 × 1 and an output of 12 channels, and an average pooling layer with a convolution kernel of 2 × 3. It is understood that the convolution kernel of the second convolution layer may also take other sizes, for example 3 × 8, as long as the above-mentioned constraint on the convolution kernel of the second convolution layer is satisfied.
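For orientation, the following PyTorch sketch assembles the overall layer order of fig. 5: convolution, maximum pooling, dense block layers (elided here; see the dense block sketch above), random inactivation, and a fully-connected softmax output for two classes. The channel counts, kernel sizes and strides are illustrative assumptions rather than the exact values of this application.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 64, kernel_size=(3, 3, 3), stride=(1, 2, 2), padding=1),  # initial convolution layer
    nn.MaxPool3d(kernel_size=2),                                           # maximum pooling layer
    # ... dense block layers would be stacked here (see the DenseBlock3D sketch above) ...
    nn.Dropout(p=0.5),        # random inactivation layer between the dense blocks and the fully-connected layer
    nn.Flatten(),
    nn.LazyLinear(2),         # fully-connected layer with two output classes
    nn.Softmax(dim=1),        # softmax output for the two-class classification
)

with torch.no_grad():
    probs = model(torch.randn(1, 1, 18, 200, 200))   # one (18, 200, 200) sample with batch and channel axes
print(probs.shape)                                   # torch.Size([1, 2])
```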
It should be noted that, in order to ensure the accuracy of the image data classification model generated by training, after the image data classification model is generated, a plurality of sample medical images can be obtained to verify the accuracy of the image data classification model. If the accuracy meets a preset condition, the image data classification model can be applied; otherwise, training and correction continue.
The generation method of the image data classification model provided in the embodiment of the present application is described with reference to a specific application scenario.
First, a magnetic resonance imaging (MRI) data set of original images is adopted, where the MRI data set comprises 300 positive samples and 300 negative samples. The data set scanned in the coronal position is converted into the axial position, and equal-interval extraction and size reduction are carried out, so that each positive sample medical image or negative sample medical image takes an (18 × 200 × 200) shape, that is, each layer of the image has 200 × 200 pixels and there are 18 layers of images. Then, standard data normalization preprocessing is performed. The data set may be divided into 10 groups, with 8 groups randomly drawn as the training data set and 2 groups serving as the validation data set.
Then, each data group is subjected to incremental data processing by using the incremental processing method provided by the embodiment of the application. Positive sample increment: the incremental positive sample medical image is formed by weighted averaging using 3/4 for the positive sample medical image and 1/4 for the negative sample medical image. Negative sample increment: the incremental negative sample medical image is formed by weighted averaging using 1/4 for the positive sample medical image and 3/4 for the negative sample medical image.
Finally, a convolutional neural network with dense block layers is used for training. In order to prevent overfitting, each convolution layer of the convolutional neural network model may use L2 regularization (with a decay coefficient of 0.01), and the layer following the dense block layers uses random inactivation (dropout) with a rate of 0.5. Model training adopts the model training method with the self-learning strategy described above, the training optimizer is the adam optimizer, and the loss is computed with the categorical cross entropy, thereby completing the training of the image data classification model.
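A short PyTorch sketch of this training configuration is given below; `model` and `train_loader` are assumed to come from the earlier sketches, and expressing L2 regularization through the optimizer's weight decay is an assumption standing in for the per-layer L2 regularizers mentioned above.

```python
import torch
import torch.nn as nn

def train_classifier(model, train_loader, epochs=10):
    """Train with the adam optimizer, cross-entropy loss over the two classes, and
    L2 regularization expressed as weight decay (coefficient 0.01)."""
    optimizer = torch.optim.Adam(model.parameters(), weight_decay=0.01)
    criterion = nn.CrossEntropyLoss()   # expects raw logits; drop a final softmax layer if using this loss
    model.train()
    for _ in range(epochs):
        for volumes, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(volumes), labels)
            loss.backward()
            optimizer.step()
    return model
```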
Based on the above embodiments, an image data classification model may be generated, and in practical applications, the image data classification model may be used to realize classification of image data.
Referring to fig. 6, which is a flowchart of an image data classification method according to an embodiment of the present disclosure, as shown in fig. 6, the method may include:
s601: and acquiring an actual medical image.
S602: and inputting the actual medical image into an image data classification model generated by pre-training to obtain a classification result of the actual medical image.
In practical application, an actual medical image to be classified is acquired, and the actual medical image is input into the image data classification model generated in the above embodiment as input data, so that a classification result of the actual medical image is obtained. The image data classification model is generated by training the generation method of the image data classification model. In specific implementation, the image data classification model can output the classification result corresponding to the actual medical image and the probability value corresponding to each classification result, so that a user can directly know the classification condition of the actual medical image.
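A minimal sketch of this inference step, assuming a trained model that outputs class probabilities (as in the fig. 5 sketch above) and an actual medical image that has already been extracted and normalized like the training samples:

```python
import torch

def classify(model, volume):
    """Run one preprocessed actual medical image of shape (18, H, W) through the
    image data classification model and return the predicted class together with
    the probability value for each class."""
    model.eval()
    with torch.no_grad():
        probs = model(volume.unsqueeze(0).unsqueeze(0))[0]   # add batch and channel axes
    return int(probs.argmax()), probs.tolist()

# Example usage with a stand-in volume:
# predicted_class, probabilities = classify(model, torch.randn(18, 200, 200))
```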
According to the image data classification model training method and device of the embodiments of the present application, the three-dimensional convolution dense network neural network model is used when the image data classification model is generated through training. The input of each fully-connected dense layer in the model can serve as the input of the subsequent fully-connected dense layers and the transition layer, so a subsequent fully-connected dense layer can receive the features of all preceding layers, the transfer of features is enhanced, and the initial deep learning model is trained with features from more layers to obtain the image data classification model, thereby improving the generalization capability of the model and the classification accuracy. In actual application, an actual medical image is obtained and input into the generated image data classification model, and the accuracy of the classification result of the actual medical image is improved by obtaining more feature information.
Based on the above method embodiment, the present application further provides an apparatus for generating an image data classification model and an apparatus for classifying image data, which will be described below with reference to the accompanying drawings.
Referring to fig. 7, which is a block diagram of an apparatus for generating an image data classification model according to an embodiment of the present disclosure, as shown in fig. 7, the apparatus may include:
a first obtaining unit 701, configured to obtain a training data set, where the training data set includes a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label;
a first generating unit 702, configured to train model parameters of an initial deep learning model according to the positive sample medical image and the negative sample medical image in the training data set, and generate an image data classification model;
the initial deep learning model is a three-dimensional convolution dense network neural network model, the three-dimensional convolution dense network neural network model comprises a convolution layer, a maximum pooling layer, dense block layers and full-connection layers which are sequentially connected, each dense block layer comprises a plurality of full-connection dense layers and a transition layer which are sequentially connected, the input of each full-connection dense layer is used as the input of the subsequent full-connection dense layer and the input of the transition layer, and each full-connection dense layer comprises a first convolution layer and a second convolution layer which are sequentially connected.
In one possible implementation, the apparatus further includes:
a second obtaining unit, configured to obtain a positive sample medical image and a negative sample medical image, and add the positive sample medical image and the negative sample medical image to a training data set;
a second generation unit, configured to generate a target medical image by weighted summation of the positive sample medical image and the negative sample medical image; the sum of the weight corresponding to the positive sample medical image and the weight corresponding to the negative sample medical image is one;
a determining unit, configured to determine the target medical image as a positive sample medical image to be added to the training data set when the weight corresponding to the positive sample medical image is greater than the weight corresponding to the negative sample medical image; and when the weight corresponding to the positive sample medical image is smaller than the weight corresponding to the negative sample medical image, determining the target medical image as a negative sample medical image and adding the negative sample medical image into the training data set.
In a possible implementation manner, the second obtaining unit includes:
a first acquisition subunit, used for acquiring an original positive sample medical image and an original negative sample medical image from an original medical image database;
the first extraction subunit is used for extracting images with a preset number of layers at equal intervals from the original positive sample medical image according to the actual medical image acquisition condition to generate a positive sample medical image;
and the second extraction subunit is used for extracting images with a preset number of layers at equal intervals from the original negative sample medical image to generate a negative sample medical image.
In one possible implementation manner, the generating unit includes:
a determining subunit, configured to determine the initial deep learning model as a current deep learning model;
an extracting subunit, configured to extract K groups of training data subsets from the training data set, where each group of training data subsets includes the same number of positive sample medical images and negative sample medical images, and K is an integer greater than 1;
the second obtaining subunit is configured to train the model parameters of the initial deep learning model by using the ith group of training data subsets, and obtain the model parameters of the classification model corresponding to the ith group of training data subsets, where a value of i is an integer from 1 to K;
the calculation subunit is used for calculating the average value of the model parameters of the classification model corresponding to the K groups of training data subsets, updating the model parameters of the current deep learning model according to the average value of the model parameters, and regenerating the current deep learning model; and re-executing the extraction subunit until a preset training stop condition is reached, and generating an image data classification model.
In a possible implementation manner, the preset training stop condition is that the step of extracting K groups of training data subsets from the training data set and the subsequent steps have been re-executed a preset number of times, or that the precision of the newly generated current deep learning model reaches a threshold.
In one possible implementation, the three-dimensional convolution dense network neural network model further includes a random deactivation layer connected between the dense block layer and the fully-connected layer;
the convolution kernel size of the first convolution layer is 1 x 1, the convolution kernel size of the second convolution layer is m x n, m is an integer which is greater than or equal to 2 and smaller than p, p is an integer which is smaller than n, n is a positive integer, and the value of n is determined according to the image size of the positive sample medical image or the negative sample medical image.
It should be noted that, in the present embodiment, implementation of each unit may refer to the foregoing method embodiment, and the present embodiment is not limited herein.
Referring to fig. 8, which is a structural diagram of an image data classification apparatus according to an embodiment of the present application, as shown in fig. 8, the apparatus includes:
a third acquiring unit 801, configured to acquire an actual medical image;
a fourth obtaining unit 802, configured to input the actual medical image into an image data classification model generated by pre-training, and obtain a classification result of the actual medical image, where the image data classification model is generated by training according to a generation method of the image data classification model.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a terminal device, the terminal device is caused to execute the method for generating the image data classification model or the method for classifying image data.
The embodiment of the present application further provides an apparatus for generating an image data classification model, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the above-mentioned method for generating an image data classification model.
The embodiment of the present application further provides an apparatus for classifying image data, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the above-mentioned method for classifying image data.
The embodiment of the present application further provides a computer program product, when the computer program product runs on a terminal device, the terminal device executes the method for generating the image data classification model or the method for classifying the image data.
In the embodiment of the application, a training data set is first obtained, and the training data set comprises a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label. The model parameters of an initial deep learning model are then trained with the training data set to obtain an image data classification model. The initial deep learning model is a three-dimensional convolution dense network neural network model that can comprise a dense block layer; the dense block layer comprises a plurality of fully-connected dense layers and a transition layer connected in sequence, the input of each fully-connected dense layer also serves as the input of the subsequent fully-connected dense layers and of the transition layer, and each fully-connected dense layer comprises a first convolution layer and a second convolution layer connected in sequence. In other words, when the initial deep learning model is trained with the training data set, every subsequent fully-connected dense layer receives all of the features produced by the earlier layers in the block, which strengthens feature propagation. Training the initial deep learning model on features from more layers in this way yields an image data classification model with better generalization capability and higher classification accuracy.
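As a rough sketch of this dense connectivity, and not a description of the patented implementation itself, the example below reuses the hypothetical `DenseLayer3D` module from the earlier sketch. The transition layer built from a 1×1×1 convolution followed by average pooling, and the channel-halving compression factor, are assumptions made for the example.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Dense block sketch: every fully-connected dense layer receives the
    concatenation of the block input and all earlier layers' outputs, and the
    transition layer compresses and downsamples the accumulated features."""

    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(DenseLayer3D(channels, growth_rate))
            channels += growth_rate                   # each layer adds growth_rate channels
        self.transition = nn.Sequential(
            nn.Conv3d(channels, channels // 2, kernel_size=1, bias=False),
            nn.AvgPool3d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))   # reuse all earlier feature maps
            features.append(out)
        return self.transition(torch.cat(features, dim=1))
```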
In actual application, an actual medical image is acquired and input into the generated image data classification model; because the model captures more feature information, the accuracy of the classification result for the actual medical image is improved.
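A minimal inference sketch, assuming a trained PyTorch model and a single-channel 3D volume tensor of shape depth × height × width, might look as follows; the function name and tensor layout are assumptions for the example.

```python
import torch

def classify_medical_image(model, volume):
    """Run a trained image data classification model on one actual medical
    image volume and return the predicted class index."""
    model.eval()
    with torch.no_grad():
        batch = volume.unsqueeze(0).unsqueeze(0)      # add batch and channel dimensions
        probabilities = torch.softmax(model(batch), dim=1)
    return probabilities.argmax(dim=1).item()
```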
It should be noted that the embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and for the parts that are identical or similar, the embodiments may refer to one another. Since the systems and devices disclosed in the embodiments correspond to the methods disclosed in the embodiments, they are described relatively briefly, and the relevant points can be found in the description of the method part.
It should be understood that in the present application, "at least one" means one or more and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" or similar expressions refers to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be singular or plural.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for generating an image data classification model, the method comprising:
acquiring a training data set, wherein the training data set comprises a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label;
training model parameters of an initial deep learning model according to the positive sample medical images and the negative sample medical images in the training data set to generate an image data classification model;
the initial deep learning model is a three-dimensional convolution dense network neural network model, the three-dimensional convolution dense network neural network model comprises a convolution layer, a maximum pooling layer, dense block layers and full-connection layers which are sequentially connected, each dense block layer comprises a plurality of full-connection dense layers and a transition layer which are sequentially connected, the input of each full-connection dense layer is used as the input of the subsequent full-connection dense layer and the input of the transition layer, and each full-connection dense layer comprises a first convolution layer and a second convolution layer which are sequentially connected.
2. The method of claim 1, further comprising:
acquiring a positive sample medical image and a negative sample medical image, and adding the positive sample medical image and the negative sample medical image into a training data set;
weighting and summing the positive sample medical image and the negative sample medical image to generate a target medical image; the sum of the weight corresponding to the positive sample medical image and the weight corresponding to the negative sample medical image is one;
when the weight corresponding to the positive sample medical image is greater than the weight corresponding to the negative sample medical image, determining the target medical image as a positive sample medical image and adding the positive sample medical image to the training data set;
and when the weight corresponding to the positive sample medical image is smaller than the weight corresponding to the negative sample medical image, determining the target medical image as a negative sample medical image and adding the negative sample medical image into the training data set.
3. The method of claim 2, wherein said acquiring a positive sample medical image and a negative sample medical image comprises:
acquiring an original positive sample medical image and an original negative sample medical image from an original medical image database;
extracting a preset number of image layers from the original positive sample medical image at equal intervals, according to actual medical image acquisition conditions, to generate the positive sample medical image, and extracting a preset number of image layers from the original negative sample medical image at equal intervals to generate the negative sample medical image.
4. The method of claim 1, wherein training model parameters of an initial deep learning model according to the positive sample medical images and the negative sample medical images in the training data set to generate an image data classification model comprises:
determining the initial deep learning model as a current deep learning model;
extracting K groups of training data subsets from the training data set, wherein each group of training data subsets comprises equal numbers of positive sample medical images and negative sample medical images, and K is an integer greater than 1;
training the model parameters of the initial deep learning model by using the ith group of training data subset to obtain the model parameters of the classification model corresponding to the ith group of training data subset, wherein the value of i is an integer from 1 to K;
calculating the average value of the model parameters of the classification model corresponding to the K groups of training data subsets, updating the model parameters of the current deep learning model according to the average value of the model parameters, and regenerating the current deep learning model;
and re-executing the step of extracting K groups of training data subsets from the training data set and the subsequent steps until a preset training stop condition is reached, so as to generate the image data classification model.
5. A method for classifying image data, the method comprising:
acquiring an actual medical image;
inputting the actual medical image into an image data classification model generated by pre-training to obtain a classification result of the actual medical image, wherein the image data classification model is generated by training according to the generation method of the image data classification model according to any one of claims 1 to 4.
6. An apparatus for generating a classification model of image data, the apparatus comprising:
a first obtaining unit, configured to obtain a training data set, where the training data set includes a positive sample medical image carrying a positive sample label and a negative sample medical image carrying a negative sample label;
the first generation unit is used for training the model parameters of the initial deep learning model according to the positive sample medical images and the negative sample medical images in the training data set to generate an image data classification model;
the initial deep learning model is a three-dimensional convolution dense network neural network model, the three-dimensional convolution dense network neural network model comprises a convolution layer, a maximum pooling layer, dense block layers and full-connection layers which are sequentially connected, each dense block layer comprises a plurality of full-connection dense layers and a transition layer which are sequentially connected, the input of each full-connection dense layer is used as the input of the subsequent full-connection dense layer and the input of the transition layer, and each full-connection dense layer comprises a first convolution layer and a second convolution layer which are sequentially connected.
7. An apparatus for classifying image data, the apparatus comprising:
a third acquiring unit for acquiring an actual medical image;
a fourth obtaining unit, configured to input the actual medical image into an image data classification model generated by pre-training, and obtain a classification result of the actual medical image, where the image data classification model is generated by training according to the generation method of the image data classification model according to any one of claims 1 to 4.
8. A computer-readable storage medium, wherein instructions are stored in the computer-readable storage medium, and when the instructions are executed on a terminal device, the instructions cause the terminal device to execute the method for generating an image data classification model according to any one of claims 1 to 4 or the method for classifying image data according to claim 5.
9. An apparatus for generating a classification model of image data, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for generating the image data classification model according to any one of claims 1 to 4.
10. An apparatus for classifying image data, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the method of classifying image data according to claim 5 when executing the computer program.
CN201910907289.8A 2019-09-24 2019-09-24 Image data classification model generation method, image data classification method and device Pending CN110751183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910907289.8A CN110751183A (en) 2019-09-24 2019-09-24 Image data classification model generation method, image data classification method and device

Publications (1)

Publication Number Publication Date
CN110751183A true CN110751183A (en) 2020-02-04

Family

ID=69276939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910907289.8A Pending CN110751183A (en) 2019-09-24 2019-09-24 Image data classification model generation method, image data classification method and device

Country Status (1)

Country Link
CN (1) CN110751183A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130004044A1 (en) * 2011-06-29 2013-01-03 The Regents Of The University Of Michigan Tissue Phasic Classification Mapping System and Method
CN103985111A (en) * 2014-02-21 2014-08-13 西安电子科技大学 4D-MRI super-resolution reconstruction method based on double-dictionary learning
CN108198625A * 2016-12-08 2018-06-22 北京推想科技有限公司 A deep learning method and apparatus for analyzing high-dimensional medical data
CN107679525A * 2017-11-01 2018-02-09 腾讯科技(深圳)有限公司 Image classification method, device and computer-readable recording medium
CN109583594A * 2018-11-16 2019-04-05 东软集团股份有限公司 Deep learning training method, apparatus, device and readable storage medium
CN109740460A * 2018-12-21 2019-05-10 武汉大学 Remote sensing image ship detection based on deep residual dense network
CN109978762A * 2019-02-27 2019-07-05 南京信息工程大学 A super-resolution reconstruction method based on conditional generative adversarial networks
CN109903255A * 2019-03-04 2019-06-18 北京工业大学 A hyperspectral image super-resolution method based on 3D convolutional neural networks
CN110033848A * 2019-04-16 2019-07-19 厦门大学 A three-dimensional medical image z-axis interpolation method based on unsupervised learning
CN110084810A * 2019-05-06 2019-08-02 成都医云科技有限公司 A lung nodule image detection method, model training method, device and storage medium
CN110176250A * 2019-05-30 2019-08-27 哈尔滨工业大学 A robust acoustic scene recognition method based on local learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG Jing: "DenseNet-based classification of histological subtypes of lung adenocarcinoma in low-resolution CT images", Journal of Zhejiang University (Engineering Science) *
WANG Xiaodong: "Research on multi-scale target recognition in three-dimensional ultrasound images based on an attention mechanism", China Master's Theses Full-text Database (Electronic Journal) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709436A (en) * 2020-05-21 2020-09-25 浙江康源医疗器械有限公司 Marking method and system, and classification method and system for medical image contour
CN112309404A (en) * 2020-10-28 2021-02-02 平安科技(深圳)有限公司 Machine voice identification method, device, equipment and storage medium
CN112309404B (en) * 2020-10-28 2024-01-19 平安科技(深圳)有限公司 Machine voice authentication method, device, equipment and storage medium
CN112348808A (en) * 2020-11-30 2021-02-09 广州绿怡信息科技有限公司 Screen perspective detection method and device

Similar Documents

Publication Publication Date Title
CN108985317B (en) Image classification method based on separable convolution and attention mechanism
CN110163258B (en) Zero sample learning method and system based on semantic attribute attention redistribution mechanism
CN110048827B (en) Class template attack method based on deep learning convolutional neural network
CN109063724B Enhanced generative adversarial network and target sample identification method
CN109754078A Method for optimizing a neural network
CN110751183A (en) Image data classification model generation method, image data classification method and device
CN115018021B (en) Machine room abnormity detection method and device based on graph structure and abnormity attention mechanism
CN110287777B (en) Golden monkey body segmentation algorithm in natural scene
CN111814626B (en) Dynamic gesture recognition method and system based on self-attention mechanism
CN106295694A A face recognition method based on iteratively weighted constraint-set sparse representation classification
CN111967573A (en) Data processing method, device, equipment and computer readable storage medium
CN110135505A (en) Image classification method, device, computer equipment and computer readable storage medium
CN111027576A (en) Cooperative significance detection method based on cooperative significance generation type countermeasure network
CN112560710B (en) Method for constructing finger vein recognition system and finger vein recognition system
CN116386899A (en) Graph learning-based medicine disease association relation prediction method and related equipment
CN114792385A (en) Pyramid separation double-attention few-sample fine-granularity image classification method
Cheung et al. Hybrid evolution of convolutional networks
CN109934835B (en) Contour detection method based on deep strengthening network adjacent connection
CN117153268A (en) Cell category determining method and system
CN110135435B (en) Saliency detection method and device based on breadth learning system
CN112541530B (en) Data preprocessing method and device for clustering model
CN110069647B (en) Image tag denoising method, device, equipment and computer readable storage medium
CN111125329A (en) Text information screening method, device and equipment
CN112818982B (en) Agricultural pest image detection method based on depth feature autocorrelation activation
CN115131605A (en) Structure perception graph comparison learning method based on self-adaptive sub-graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200204