CN111861924A - Cardiac magnetic resonance image data enhancement method based on evolved GAN - Google Patents

Cardiac magnetic resonance image data enhancement method based on evolved GAN

Info

Publication number
CN111861924A
CN111861924A
Authority
CN
China
Prior art keywords
training
generator
image
discriminator
data enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010715325.3A
Other languages
Chinese (zh)
Other versions
CN111861924B (en)
Inventor
符颖
杨光
吴锡
杨智鹏
胡金蓉
张永清
周激流
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Information Technology filed Critical Chengdu University of Information Technology
Priority to CN202010715325.3A priority Critical patent/CN111861924B/en
Publication of CN111861924A publication Critical patent/CN111861924A/en
Application granted granted Critical
Publication of CN111861924B publication Critical patent/CN111861924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a cardiac magnetic resonance image data enhancement method based on an evolved GAN. When the generator is trained, it is mutated to produce several offspring generators; an adaptability score function scores each offspring, and the best-scoring offspring is selected as the parent generator for the next iteration. Meanwhile, in the discriminator training stage, new training samples are synthesized by linear interpolation of feature vectors, together with the corresponding interpolated labels. This expands the distribution of the whole training set, makes the discrete sample space continuous, and improves smoothness between domains, so the model can be trained better. The method can generate high-quality and diverse samples to expand the training set, finally improving all metrics of the classification result.

Description

Cardiac magnetic resonance image data enhancement method based on evolved GAN
Technical Field
The invention relates to the field of image processing, in particular to a cardiac magnetic resonance image data enhancement method based on an evolved GAN.
Background
Cardiac magnetic resonance imaging is regarded as the gold standard for assessing cardiac function; conventional cardiac magnetic resonance scanning techniques are relatively mature and play a crucial role in disease diagnosis. At present, many deep-learning-based computer-aided diagnosis tasks on cardiac magnetic resonance images have achieved good results, but the images not only require expensive medical equipment to acquire, they also need a great deal of manual annotation by experienced radiologists, which is extremely time-consuming and labor-intensive. In addition, patient privacy in medical imaging is highly sensitive, so obtaining a large dataset balanced between positive and negative samples carries a significant cost.
A great challenge in deep-learning-based medical imaging is handling small-scale datasets and a limited amount of labeled data, especially with complex models: a deep convolutional neural network with a huge number of parameters overfits when the dataset is insufficient or its samples are imbalanced. In computer vision, researchers have proposed many effective remedies for overfitting, such as batch normalization, Dropout, early stopping, weight sharing, and weight decay. Besides these adjustments to the network structure, data enhancement is an effective method that operates on the data itself and alleviates overfitting to some extent in image analysis and classification. Classical data enhancement mainly comprises affine transformations such as translation, rotation, scaling, flipping, and shearing; the original and new samples are mixed into a training set and fed to the convolutional neural network. Wang et al. expand the sample size by changing brightness values, a data enhancement method that adjusts the sample color space. Although these methods bring improvements, they only operate on the original samples and generate no new features; the diversity of the original samples is not substantially improved, and the benefit is weak on small-scale data.
The Generative Adversarial Network (GAN) is a generative model proposed by Ian Goodfellow et al. It consists of a generator G and a discriminator D: the generator G synthesizes an image G(z) from noise z sampled from a uniform or normal distribution, while the discriminator D tries to judge the synthesized image G(z) as false and the real image x as true. Through successive adversarial training the parameters of each model are adjusted; finally the generator captures the distribution of the real samples, achieving generation quality close to real images.
The generative adversarial network produces new samples by fitting the original sample distribution; the new samples are drawn from the distribution learned by the generative model and carry new characteristics different from the originals. This makes it possible to use samples from the generator as new training samples for data augmentation. Although GANs work well in many computer vision areas, they have many problems in practice. On the one hand, GANs are very difficult to train: once the data distribution and the distribution fitted by the generator fail to overlap substantially at the start of training, the gradient of the generator easily points in a random direction, causing vanishing gradients. On the other hand, the generator may produce safe but homogeneous samples just to earn a high score from the discriminator, causing mode collapse.
Disclosure of Invention
Aiming at the defects of the prior art, a cardiac magnetic resonance image data enhancement method based on an evolved GAN comprises the following specific steps:
step 1: acquiring a cardiac magnetic resonance image dataset, said dataset comprising benign cardiac magnetic resonance images and malignant cardiac magnetic resonance images;
step 2: preprocessing the data set and dividing it into a training set and a test set;
step 3: carrying out affine transformation on the preprocessed training set to obtain a data enhancement data set;
step 4: inputting the data enhancement data set into the constructed evolved GAN model for training, specifically comprising the following steps:
step 41: collecting noise z from a mixed Gaussian distribution as the initial input of the generator, which synthesizes the input noise into an image;
step 42: in the generator training stage, fixing the parameters of the discriminator and training the generator through three stages of mutation, evaluation and selection;
step 43: in the discriminator training stage, fixing the parameters of the generator, and combining the image synthesized by the generator and an image x from the data enhancement data set into one image by linear interpolation as the input of the discriminator;
step 44: the generator and the discriminator are trained adversarially in alternating stages, repeating steps 42 to 43 until the set number of training iterations is reached;
step 5: synthesizing new images with the trained evolved GAN model and adding them to the training set to obtain a second data enhancement data set;
step 6: training classifiers using the second data enhancement data set to verify the effect of data enhancement, wherein the composite images are used to train a second classifier and obtain a second classification result, and the training set is used to train a first classifier and obtain a first classification result;
step 7: testing the first classifier and the second classifier with the test set.
According to a preferred embodiment, the generator training of step 42 further comprises:
step 421: mutation, namely fixing the parameters of the discriminator during the generator training stage and applying three mutation operations to the current parent generator to obtain a plurality of child generators;
step 422: evaluation, namely calculating the adaptability score of each child generator under the current parent discriminator through an adaptability function; that is, the generation performance of each child generator is evaluated by the adaptability function under the current parent discriminator and quantized into a corresponding adaptability score:
F = F_q + γF_d
wherein F_q is used for measuring the quality of the generated samples, F_d is used for measuring the diversity of the generated samples, F represents the adaptability score, and γ represents a hyper-parameter;
step 423: selecting the child generator with the highest adaptability score, by sorting, as the parent generator of the next iteration.
The invention has the beneficial effects that:
1. according to the data enhancement method, the current relatively optimal generator is selected from the mutation of the generators so as to take the quality and diversity of the generated pictures into consideration, high-quality and diverse samples can be generated to expand the training set, and finally various indexes of the classification result are improved.
2. New training samples are synthesized by linear interpolation of the feature vectors, and the corresponding interpolated labels are generated, which expands the distribution of the whole training set, makes the discrete sample space continuous, and improves the smoothness between domains, so that the model can be trained better.
Drawings
FIG. 1 is a flow chart of an enhancement method of the present invention;
FIG. 2 is a schematic diagram of a residual block structure;
fig. 3(a) is a true diseased image;
fig. 3(b) is a composite diseased image;
FIG. 3(c) is a true non-diseased image; and
fig. 3(d) is a synthetic non-diseased image.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
The following detailed description is made with reference to the accompanying drawings.
Aiming at the problem that small-scale datasets easily cause overfitting when training a deep convolutional neural network, the invention provides a cardiac magnetic resonance image data enhancement method based on an evolutionary generative adversarial network. The method selects the currently best generator from several generator mutations so as to balance the quality and diversity of the generated pictures, and synthesizes new training samples by linear interpolation of feature vectors together with the corresponding interpolated labels, thereby expanding the distribution of the whole training set, making the discrete sample space continuous, and improving the smoothness between domains.
Fig. 1 is a flow chart of the method of the present invention. As shown in fig. 1, the specific steps of the generative medical image data enhancement method based on the evolutionary generative adversarial network are:
step 1: acquiring a cardiac magnetic resonance image dataset, said dataset comprising a benign cardiac magnetic resonance image and a malignant cardiac magnetic resonance image;
step 2: the data set is preprocessed and randomly divided into a training set and a test set. The preprocessing comprises resampling, region-of-interest selection, normalization, and a final region-of-interest selection. The training and test sets can be split according to actual needs; a 4:1 ratio is generally used, with each sample randomly assigned to the training set or the test set.
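The random 4:1 split described above can be sketched as follows (a minimal illustration; the function name and the fixed seed are assumptions for reproducibility, not part of the invention):

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Randomly assign samples to training and test sets (default 4:1 split);
    the fixed seed is only for reproducibility of this example."""
    rng = random.Random(seed)
    shuffled = list(samples)   # copy, so the caller's order is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train_set, test_set = split_dataset(range(100))
```

With 100 samples this yields 80 training and 20 test items, each sample landing in exactly one set.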
step 3: affine transformation is applied to the preprocessed training set to obtain a data enhancement data set. The affine transformation operations comprise horizontal flipping, vertical flipping, random augmentation by rotation of 0-20 degrees, rotation by 90, 180 and 270 degrees, and random augmentation by translation of 0-2% along the vertical and horizontal axes.
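A minimal NumPy sketch of these affine transforms follows. The random 0-20 degree rotation needs pixel interpolation (e.g. scipy.ndimage.rotate) and is omitted here, and a wrap-around roll stands in for a zero-padded translation; both simplifications are assumptions of this sketch:

```python
import numpy as np

def affine_augment(img, rng=None):
    """Return affine views of one H x W image, per the transforms listed
    in step 3 (minus the interpolated small-angle rotation)."""
    rng = rng or np.random.default_rng(0)
    views = [
        np.fliplr(img),          # horizontal flip
        np.flipud(img),          # vertical flip
        np.rot90(img, 1),        # rotate 90 degrees
        np.rot90(img, 2),        # rotate 180 degrees
        np.rot90(img, 3),        # rotate 270 degrees
    ]
    # random translation by up to ~2% of each axis (whole pixels)
    dy = int(rng.integers(0, img.shape[0] // 50 + 1))
    dx = int(rng.integers(0, img.shape[1] // 50 + 1))
    views.append(np.roll(img, (dy, dx), axis=(0, 1)))
    return views

views = affine_augment(np.zeros((64, 64)))
```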
step 4: the data enhancement data set is input into the constructed evolved GAN model for training; the method integrates the ideas of an evolutionary algorithm and of linear interpolation into the training process, specifically as follows:
step 41: collecting noise z in mixed Gaussian distribution as initial input of a generator, and synthesizing the input noise into an image by the generator;
Generally, noise z following a multivariate uniform or multivariate normal distribution is used as the input of a generative adversarial network. Using a multi-modal distribution as input instead better matches the inherently multi-modal distribution of the real training data, and improves the quality and diversity of the generated pictures.
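Sampling the latent noise z from a mixture of Gaussians rather than a single normal distribution can be sketched as below; the number of modes, their random centers, and the spread sigma are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def sample_gmm_noise(batch, dim, n_modes=10, sigma=0.5, seed=0):
    """Draw latent noise z from a mixture of Gaussians: pick a mode per
    sample, then add Gaussian noise around that mode's center."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-1.0, 1.0, size=(n_modes, dim))  # assumed mode layout
    which = rng.integers(0, n_modes, size=batch)           # one mode per sample
    return centers[which] + sigma * rng.standard_normal((batch, dim))

z = sample_gmm_noise(64, 100)
```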
Step 42: in a generator training stage, fixing parameters of a discriminator, and training a generator through three stages of mutation, evaluation and selection;
step 421: mutation, namely fixing the parameters of the discriminator during the generator training stage and applying three mutation operations to the current generator to obtain the child generators. The three mutation operations are the minimax mutation, the heuristic mutation, and the least-squares mutation.
Minimax mutation: this mutation changes the original objective function only slightly; it provides an effective gradient and alleviates the vanishing-gradient phenomenon. The minimax mutation can be written as follows:
M_G^minimax = (1/2) · E_{z~p(z)}[log(1 − D(G(z)))]
Heuristic mutation: unlike the minimax mutation, which minimizes the log-probability of the discriminator being correct, the heuristic mutation aims to maximize the log-probability of the discriminator being wrong. When the discriminator judges a generated sample as false, the heuristic mutation does not saturate but still provides an effective gradient, so the generator can keep training. The heuristic mutation can be written as follows:
M_G^heuristic = −(1/2) · E_{z~p(z)}[log D(G(z))]
Least-squares mutation: inspired by LSGAN, the least-squares mutation also avoids vanishing gradients. Compared with the heuristic mutation, it neither generates fake samples at very high cost nor escapes penalties at very low cost, which avoids mode collapse to a certain extent. The least-squares mutation can be written as follows:
M_G^least-squares = E_{z~p(z)}[(D(G(z)) − 1)^2]
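Assuming the discriminator outputs probabilities D(G(z)) ∈ (0, 1), the three mutation objectives described above can be sketched as minimization targets. This is a hedged reading based on the standard evolutionary-GAN formulation, not a transcription of the source's own equation images:

```python
import numpy as np

def minimax_loss(d_fake, eps=1e-8):
    # (1/2) E[log(1 - D(G(z)))]: small change to the original objective
    return 0.5 * float(np.mean(np.log(1.0 - d_fake + eps)))

def heuristic_loss(d_fake, eps=1e-8):
    # -(1/2) E[log D(G(z))]: maximize the log-probability of D being wrong
    return -0.5 * float(np.mean(np.log(d_fake + eps)))

def least_squares_loss(d_fake):
    # E[(D(G(z)) - 1)^2]: LSGAN-style penalty, does not saturate at D -> 0
    return float(np.mean((d_fake - 1.0) ** 2))
```

Minimizing any of the three pushes D(G(z)) toward 1, i.e. toward fooling the discriminator, but with different gradient behavior near D(G(z)) = 0.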
Step 422: evaluation, namely calculating the adaptability score of each child generator under the current parent discriminator through an adaptability function. That is, under the current parent discriminator, the generation performance of each child generator is evaluated by the adaptability function and quantized into a corresponding adaptability score:
F = F_q + γF_d
wherein F represents the adaptability score and F_q measures the quality of the generated samples, i.e. whether the child generator can fool the discriminator; its expression is:
F_q = E_z[D(G(z))]
F_d measures the diversity of the generated samples: it measures the magnitude of the gradient produced when the discriminator parameters are updated against the child generator. If the samples generated by a child generator are relatively concentrated, i.e. lack diversity, the corresponding discriminator update more easily causes large gradient fluctuations. Its expression is:
F_d = −log‖∇_D L_D‖, where L_D = −E_x[log D(x)] − E_z[log(1 − D(G(z)))]
γ (≥ 0) is a hyper-parameter used to adjust the weight between generation quality and diversity, and can be freely tuned in the experiments.
Step 423: and selecting the child generator with the highest adaptability score as the parent generator of the next iteration through sorting.
The improved training method for the generative adversarial network is therefore: the parent generator is mutated into multiple child generators, and after evaluation by the adaptability score function, the optimal generator (or generators) is selected as the parent generator in the next discrimination environment.
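The evaluate-and-select stage can be sketched as below. The gradient term feeding F_d is assumed to be supplied by the training framework, and γ = 0.5 is an arbitrary illustrative value:

```python
import numpy as np

def fitness_score(d_on_fake, disc_grad, gamma=0.5, eps=1e-8):
    """F = F_q + gamma * F_d for one child generator.
    d_on_fake: discriminator outputs D(G(z)) on the child's samples.
    disc_grad: flattened gradient of the discriminator loss taken against
               this child (assumed to be provided by the framework)."""
    f_q = float(np.mean(d_on_fake))                        # quality: fool D
    f_d = -float(np.log(np.linalg.norm(disc_grad) + eps))  # diversity: big grad swings -> low score
    return f_q + gamma * f_d

def select_parent(scores):
    # keep the child with the highest adaptability score as the next parent
    return int(np.argmax(scores))
```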
Step 43: in the stage of training the discriminator, the parameters of the generator are fixed, and the image synthesized by the generator and the image x in the data enhancement data set are synthesized into one image by a linear interpolation method to be used as the input of the discriminator. Obtaining an interpolation new sample and a new label, wherein the loss of the discriminator is as follows:
L_D = −E_{(x̃, ỹ)}[ỹ · log D(x̃) + (1 − ỹ) · log(1 − D(x̃))]
the discriminator parameters are updated by calculating the average loss of the discriminator.
The method constructs virtual training samples from the original samples: new training samples are synthesized by linear interpolation of the feature vectors, and the corresponding linearly interpolated labels are generated, expanding the distribution of the whole training set. Specifically:
x̃ = λx_i + (1 − λ)x_j,    ỹ = λy_i + (1 − λ)y_j
where x_i, x_j are original input vectors, y_i, y_j are their label encodings, (x_i, y_i) and (x_j, y_j) are two samples randomly drawn from the original data, λ ~ Beta(α, α) is the interpolation weight, and α ∈ (0, +∞) is a hyper-parameter controlling the interpolation strength between feature-target vectors. Linear interpolation makes the model behave linearly in the region between original samples, reducing the maladaptation when predicting test samples outside the training set and enhancing generalization; at the same time it makes the discrete sample space continuous and improves the smoothness between domains.
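The interpolation of samples and labels just described (a mixup-style construction) can be sketched as:

```python
import numpy as np

def mixup(x_i, y_i, x_j, y_j, alpha=0.2, rng=None):
    """Blend two samples and their label encodings with lambda ~ Beta(alpha, alpha);
    alpha controls how strongly the pair is interpolated (0.2 is illustrative)."""
    rng = rng or np.random.default_rng(0)
    lam = float(rng.beta(alpha, alpha))
    x_tilde = lam * x_i + (1.0 - lam) * x_j
    y_tilde = lam * y_i + (1.0 - lam) * y_j
    return x_tilde, y_tilde

x_new, y_new = mixup(np.ones(4), np.array([1.0, 0.0]),
                     np.zeros(4), np.array([0.0, 1.0]))
```

The interpolated label always sums to 1, so it remains a valid soft label for the discriminator.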
Step 44: the generator and the discriminator carry out stage-by-stage confrontation training, and the steps 42 to 43 are continuously repeated until the set training times is reached, and the training is finished;
step 5: new images are synthesized with the evolved generative adversarial model after training is finished, and the synthesized images are added to the training set to obtain a second data enhancement data set;
step 6: training a classifier using the second data-enhanced data set, wherein the composite image is used to train a second classifier and obtain a second classification result, and the training set is used to train a first classifier and obtain a first classification result.
step 7: testing the first classifier and the second classifier with the test set.
The effectiveness of the method is verified through classification experiments, and comparison experiments study the diversity of the generated samples and the influence of the number of generated samples on the cardiac magnetic resonance image classification results.
The method uses the TTUR (Two Time-scale Update Rule) during training, specifically: a slower update rule for the generator network and a faster one for the discriminator network. The learning rate of the generator is set to 0.0001 and that of the discriminator to 0.0004, allowing 1:1 updates in the experiments.
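Per step, the two-timescale update rule reduces to using different learning rates for the two networks. A minimal stand-in with plain gradient steps (the actual optimizer is not specified here, so plain SGD is an assumption of this sketch):

```python
import numpy as np

class TTUR:
    """Two time-scale update rule: a slow step for the generator (1e-4) and a
    fast step for the discriminator (4e-4), applied 1:1."""
    def __init__(self, lr_g=1e-4, lr_d=4e-4):
        self.lr_g, self.lr_d = lr_g, lr_d

    def step(self, params_g, grad_g, params_d, grad_d):
        # one 1:1 update of both networks at their respective learning rates
        return params_g - self.lr_g * grad_g, params_d - self.lr_d * grad_d

pg, pd = TTUR().step(np.zeros(2), np.ones(2), np.zeros(2), np.ones(2))
```

The discriminator thus moves four times as far per update as the generator, even though both are updated once per iteration.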
According to the cardiac magnetic resonance image data enhancement method, the model trains the generator and the discriminator with an improved residual block structure and a self-attention module. The residual block structure alleviates the vanishing-gradient problem and speeds up model convergence, so a high-performance generator is trained faster within the same training time. The residual block structure is shown in fig. 2.
The generator is trained based on the residual block structure and the self-attention module, as exemplified below:
step a1: the noise z is mapped through a fully connected layer and reshaped; the output size is 1024 × 4 × 4;
step a2: the output of the previous layer is fed into the improved residual structure. The input is split into two channels; one is the residual branch, consisting of 5 sub-operations: a stride-1 convolution, batch normalization, a LeakyReLU activation, another stride-1 convolution, and batch normalization. The other channel is a direct (skip) connection, which is combined with the output of the residual branch into a unified output of size 1024 × 4 × 4;
step a3: after the residual structure, the output of the previous layer passes through a 3 × 3 transposed convolution layer with stride 2, followed by normalization and a ReLU activation; the output size is 512 × 8 × 8;
step a4: repeating the operations of steps a2 to a3 for 3 times to obtain an output of size 128 × 32 × 32, which is fed into the self-attention module to obtain an output of size 128 × 32 × 32;
step a5: repeating the operation of step a2 once, applying a 3 × 3 transposed convolution with stride 1 to obtain an output of size 64 × 32 × 32, and obtaining the 3 × 64 × 64 image through a stride-2 transposed convolution and a tanh function;
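The shape bookkeeping of steps a1-a5 can be checked with a small trace. Channel counts and the number of upsampling stages are read off the sizes stated in the text (residual blocks and self-attention preserve shape; each stride-2 transposed convolution doubles H and W):

```python
def generator_shapes():
    """Trace (C, H, W) through the generator of steps a1-a5."""
    shapes = [("fc + reshape", (1024, 4, 4))]
    c, h, w = 1024, 4, 4
    while c > 128:                      # stride-2 upsampling stages, a2-a4
        c, h, w = c // 2, h * 2, w * 2
        shapes.append(("resblock + deconv s2", (c, h, w)))
    shapes.append(("self-attention", (c, h, w)))           # 128 x 32 x 32
    shapes.append(("resblock + deconv s1", (64, h, w)))    # a5, stride 1
    shapes.append(("deconv s2 + tanh", (3, h * 2, w * 2))) # final image
    return shapes

trace = generator_shapes()
```

The trace ends at 3 × 64 × 64, matching the image size stated in step a5.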
training the discriminator based on the residual structure and the self-attention module, for example, as follows:
step b1: inputting a picture with the size of 3 × 64 × 64;
step b2: a 3 × 3 convolution layer with stride 1 produces an output of size 64 × 64 × 64;
step b3: the output of the previous layer is fed into the improved residual structure. The input is split into two channels; one is the residual branch, consisting of 5 sub-operations: a stride-1 convolution, batch normalization, a LeakyReLU activation, another stride-1 convolution, and batch normalization. The other channel is a direct (skip) connection, which is combined with the output of the residual branch into a unified output;
step b4: repeating the operation of step b3 once, then passing the output through a 3 × 3 convolution layer with stride 2 to obtain an output of size 128 × 32 × 32;
step b5: the output is fed into the self-attention module, yielding an output of size 128 × 32 × 32;
step b6: repeating the operation of step b3 for 3 times to obtain an output of size 1024 × 4 × 4;
step b7: a fully connected mapping maps the output of the previous layer to an output of size 1 as the final output.
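A similar trace works for the discriminator of steps b1-b7. The stride-2 downsampling paired with each repeated residual block in step b6 is inferred from the stated output sizes, since step b3 itself preserves shape:

```python
def discriminator_shapes():
    """Trace (C, H, W) through the discriminator of steps b1-b7."""
    shapes = [("input", (3, 64, 64)),
              ("3x3 conv s1", (64, 64, 64)),
              ("residual block", (64, 64, 64)),
              ("3x3 conv s2", (128, 32, 32)),
              ("self-attention", (128, 32, 32))]
    c, h, w = 128, 32, 32
    for _ in range(3):                  # b6: residual block + downsample, x3
        c, h, w = c * 2, h // 2, w // 2
        shapes.append(("residual block + downsample", (c, h, w)))
    shapes.append(("fully connected", (1,)))   # b7: scalar real/fake score
    return shapes

d_trace = discriminator_shapes()
```

Three doublings of the channel count from 128 reach the 1024 × 4 × 4 stated in step b6 before the final scalar output.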
Fig. 3 compares composite images obtained by the method of the present invention with real images. The composite images have slightly lower definition than the real images and less sharp contours, but they can be effectively applied to the data enhancement task.
In order to further illustrate the data enhancement effect of the method, the method of the invention is compared with the existing method in terms of accuracy, specificity and sensitivity, and the specific results are as follows:
[Table: comparison of accuracy, specificity and sensitivity between the proposed method and existing methods]
as can be seen from the table, the method provided by the invention has higher accuracy, specificity and sensitivity than the prior art.
It should be noted that the above embodiments are exemplary; those skilled in the art, having the benefit of this disclosure, may devise various arrangements that fall within the scope of the invention. The specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (2)

1. A cardiac magnetic resonance image data enhancement method based on an evolved GAN, characterized in that the data enhancement method comprises the following specific steps:
step 1: acquiring a cardiac magnetic resonance image dataset, said dataset comprising benign cardiac magnetic resonance images and malignant cardiac magnetic resonance images;
step 2: preprocessing the data set and dividing it into a training set and a test set;
step 3: carrying out affine transformation on the preprocessed training set to obtain a data enhancement data set;
step 4: inputting the data enhancement data set into the constructed evolved GAN model for training, specifically comprising the following steps:
step 41: collecting noise z from a mixed Gaussian distribution as the initial input of the generator, which synthesizes the input noise into an image;
step 42: in the generator training stage, fixing the parameters of the discriminator and training the generator through three stages of mutation, evaluation and selection;
step 43: in the discriminator training stage, fixing the parameters of the generator, and combining the image synthesized by the generator and an image x from the data enhancement data set into one image by linear interpolation as the input of the discriminator;
step 44: the generator and the discriminator are trained adversarially in alternating stages, repeating steps 42 to 43 until the set number of training iterations is reached;
step 5: synthesizing new images with the trained evolved GAN model and adding them to the training set to obtain a second data enhancement data set;
step 6: training classifiers using the second data enhancement data set to verify the effect of data enhancement, wherein the composite images are used to train a second classifier and obtain a second classification result, and the training set is used to train a first classifier and obtain a first classification result;
step 7: testing the first classifier and the second classifier with the test set.
2. The cardiac magnetic resonance image data enhancement method as set forth in claim 1, wherein the generator training of step 42 further includes:
step 421: mutation, namely fixing the parameters of the discriminator during the generator training stage and applying three mutation operations to the current parent generator to obtain a plurality of child generators;
step 422: evaluation, namely calculating the adaptability score of each child generator under the current parent discriminator through an adaptability function; that is, the generation performance of each child generator is evaluated by the adaptability function under the current parent discriminator and quantized into a corresponding adaptability score:
F = F_q + γF_d
wherein F_q is used for measuring the quality of the generated samples, F_d is used for measuring the diversity of the generated samples, F represents the adaptability score, and γ represents a hyper-parameter;
step 423: selecting the child generator with the highest adaptability score, by sorting, as the parent generator of the next iteration.
CN202010715325.3A 2020-07-23 2020-07-23 Cardiac magnetic resonance image data enhancement method based on evolutionary GAN Active CN111861924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010715325.3A CN111861924B (en) 2020-07-23 2020-07-23 Cardiac magnetic resonance image data enhancement method based on evolutionary GAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010715325.3A CN111861924B (en) 2020-07-23 2020-07-23 Cardiac magnetic resonance image data enhancement method based on evolutionary GAN

Publications (2)

Publication Number Publication Date
CN111861924A true CN111861924A (en) 2020-10-30
CN111861924B CN111861924B (en) 2023-09-22

Family

ID=72949656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010715325.3A Active CN111861924B (en) 2020-07-23 2020-07-23 Cardiac magnetic resonance image data enhancement method based on evolutionary GAN

Country Status (1)

Country Link
CN (1) CN111861924B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561838A * 2020-12-02 2021-03-26 西安电子科技大学 Image enhancement method based on residual self-attention and generative adversarial network
CN112613488A * 2021-01-07 2021-04-06 上海明略人工智能(集团)有限公司 Face recognition method and device, storage medium and electronic equipment
CN114282581A * 2021-01-29 2022-04-05 北京有竹居网络技术有限公司 Training sample acquisition method and device based on data enhancement, and electronic equipment
CN114545255A * 2022-01-18 2022-05-27 广东工业大学 Lithium battery SOC estimation method based on competitive generative adversarial neural network

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170039708A1 * 2014-04-25 2017-02-09 The Regents Of The University Of California Quantitating disease progression from MRI images of multiple sclerosis patients
US20180075581A1 * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
US20190122120A1 * 2017-10-20 2019-04-25 Dalei Wu Self-training method and system for semi-supervised learning with generative adversarial networks
CN109902602A * 2019-02-16 2019-06-18 Beijing University of Technology Airport runway foreign object recognition method based on adversarial neural network data enhancement
CN109948693A * 2019-03-18 2019-06-28 Xidian University Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network
CN110310345A * 2019-06-11 2019-10-08 Tongji University Image generation method based on a generative adversarial network with automatic division-of-labor latent clustering
CN110415261A * 2019-08-06 2019-11-05 Shandong University of Finance and Economics Facial expression animation conversion method and system with partitioned training
CN110457457A * 2019-08-02 2019-11-15 Tencent Technology (Shenzhen) Co., Ltd. Training method for a dialogue generation model, dialogue generation method and device
CN110826639A * 2019-11-12 2020-02-21 Fuzhou University Zero-shot image classification method using full-data training
CN111062880A * 2019-11-15 2020-04-24 Nanjing Institute of Technology Real-time underwater image enhancement method based on conditional generative adversarial network
CN111091059A * 2019-11-19 2020-05-01 Foshan Nanhai Guangdong University of Technology CNC Equipment Collaborative Innovation Institute Data equalization method for household waste plastic bottle classification
CN111325236A * 2020-01-21 2020-06-23 Nanjing University Ultrasound image classification method based on convolutional neural network
CN111353995A * 2020-03-31 2020-06-30 Chengdu University of Information Technology Cervical single-cell image data generation method based on generative adversarial network


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
CHAOYUE WANG et al.: "Evolutionary Generative Adversarial Networks", IEEE Transactions on Evolutionary Computation, vol. 23, no. 6, pages 921-934, XP011758301, DOI: 10.1109/TEVC.2019.2895748 *
HONGYI ZHANG et al.: "mixup: Beyond Empirical Risk Minimization", arXiv, pages 1-13 *
MATAN BEN-YOSEF et al.: "Gaussian Mixture Generative Adversarial Networks for Diverse Datasets, and the Unsupervised Clustering of Images", arXiv, pages 1-20 *
SHENGYU ZHAO et al.: "Differentiable Augmentation for Data-Efficient GAN Training", https://arxiv.org/pdf/2006.10738v1.pdf, pages 1-18 *
YU HE et al.: "Fast-converging GAN data enhancement for chest X-ray images based on multi-scale convolution and residual units", Journal of Signal Processing, vol. 35, no. 12, pages 2045-2054 *
YAO ZHEWEI et al.: "Intravascular ultrasound image enhancement with an improved cycle generative adversarial network", Computer Science, no. 5, pages 228-234 *
LI XUSHENG: "Research on face image generation technology with generative adversarial networks based on an autoencoder structure", China Master's Theses Full-text Database, Information Science and Technology, no. 7, pages 138-920 *


Also Published As

Publication number Publication date
CN111861924B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN111861924A (en) Cardiac magnetic resonance image data enhancement method based on evolved GAN
Wyatt et al. AnoDDPM: Anomaly detection with denoising diffusion probabilistic models using simplex noise
CN108648197B (en) Target candidate region extraction method based on image background mask
CN109493308B Medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network
Oliva et al. Cross entropy based thresholding for magnetic resonance brain images using Crow Search Algorithm
CN108460726B (en) Magnetic resonance image super-resolution reconstruction method based on enhanced recursive residual network
CN111539467A GAN network architecture and method for medical image data set augmentation based on generative adversarial networks
CN111145116A Sea-surface rainy-day image sample augmentation method based on generative adversarial network
CN111861906B (en) Pavement crack image virtual augmentation model establishment and image virtual augmentation method
CN112818764B (en) Low-resolution image facial expression recognition method based on feature reconstruction model
CN108614992A Classification method, device and storage device for hyperspectral remote sensing images
CN112288645B (en) Skull face restoration model construction method and restoration method and system
CN109711401A Text detection method for natural scene images based on Faster R-CNN
CN107590775B (en) Image super-resolution amplification method using regression tree field
CN114581550B (en) Magnetic resonance imaging down-sampling and reconstruction method based on cross-domain network
CN114581552A Gray-level image colorization method based on generative adversarial network
CN113256749A (en) Rapid magnetic resonance imaging reconstruction algorithm based on high-dimensional correlation prior information
CN114565594A (en) Image anomaly detection method based on soft mask contrast loss
CN113378472B Mixed-boundary electromagnetic backscatter imaging method based on generative adversarial network
CN114550260A Three-dimensional face point cloud recognition method based on adversarial data enhancement
CN117291803B (en) PAMGAN lightweight facial super-resolution reconstruction method
CN117173464A (en) Unbalanced medical image classification method and system based on GAN and electronic equipment
CN112991402A (en) Cultural relic point cloud registration method and system based on improved differential evolution algorithm
Wang An Expression Recognition Method based on Improved Convolutional Network
Saaim et al. Generative Models for Data Synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant