CN109493308B - Medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network - Google Patents

Medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network Download PDF

Info

Publication number
CN109493308B
Authority
CN
China
Prior art keywords
network
lesion
image
rois
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811350964.3A
Other languages
Chinese (zh)
Other versions
CN109493308A (en)
Inventor
王生生
邢春上
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201811350964.3A priority Critical patent/CN109493308B/en
Publication of CN109493308A publication Critical patent/CN109493308A/en
Application granted granted Critical
Publication of CN109493308B publication Critical patent/CN109493308B/en

Classifications

    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06F18/24: Classification techniques
    • G06N3/045: Neural networks; combinations of networks
    • G06T7/0012: Biomedical image inspection
    • G06T7/10: Segmentation; edge detection
    • G16H30/40: ICT specially adapted for processing medical images, e.g. editing
    • G06T2207/10081: Computed x-ray tomography [CT]
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20104: Interactive definition of region of interest [ROI]
    • G06T2207/20221: Image fusion; image merging
    • G06T2207/30096: Tumor; lesion


Abstract

The invention discloses a medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network, comprising the following steps: first, segment the lesion region in a computed tomography (CT) image and extract the lesion regions of interest (ROIs); second, perform data preprocessing on the lesion ROIs extracted in the first step; third, design the model architecture of the conditional multi-discriminator generative adversarial network (CMDGAN) and train it with the images from the second step to obtain a generative model; fourth, use the generative model obtained in the third step to perform synthetic data augmentation on the extracted lesion ROIs; and fifth, design and train a multi-scale residual network (MRNet). The method can generate a high-quality synthetic medical image data set, and the resulting classification network achieves higher accuracy on test images, so it can better provide auxiliary diagnosis for medical workers.

Description

Medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network
Technical Field
The invention relates to a medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network.
Background
In recent years, relying on the powerful hierarchical feature-extraction capability of convolutional neural networks (CNNs), image classification has surpassed human performance in many tasks. One main reason is that deep neural networks are trained on large-scale labeled data sets, as in computer-vision tasks such as handwriting recognition. However, there is still considerable room for improvement in domain-specific problems such as medical image classification. One of the biggest challenges is that large-scale medical image data sets cannot be obtained, and a deep neural network trained on a small data set fails to learn some of the features in medical images, so the classification performance is poor.
One of the biggest challenges facing medical imaging is how to handle small data sets and a limited number of labeled samples, since supervised learning algorithms usually require labeled data and a large training set. Because large-scale medical image data sets are difficult to obtain, researchers typically expand them with classical image augmentation methods to offset the disadvantages of small data. However, traditional augmentation consists mainly of translation, rotation, flipping, and scaling, which are only simple modifications of the existing images. Using classical augmentation to improve network training is standard practice, but these small modifications contribute little additional information and therefore do little to improve the training of a neural network.
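The classical expansions named above (translation, rotation, flipping, scaling) can be sketched in a few lines of NumPy; the point of the passage is that each variant is only a trivially modified copy of the original image:

```python
import numpy as np

def classic_augment(img):
    # the four traditional expansions: translation, rotation, flip, scaling
    return [
        np.roll(img, shift=2, axis=1),   # translation (circular shift)
        np.rot90(img),                   # 90-degree rotation
        np.flip(img, axis=0),            # vertical flip (inversion)
        img[::2, ::2],                   # naive 2x down-scaling
    ]

img = np.arange(16, dtype=float).reshape(4, 4)
variants = classic_augment(img)
print(len(variants))                     # 4 modified copies of the same image
```

Every variant is fully determined by the original pixels, which is why such augmentation adds so little new information compared with GAN-based synthesis.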
At the same time, medical imaging tasks usually require radiologists with domain expertise to annotate the data, and most medical image annotation is time-consuming, especially precise annotation such as segmenting an organ or diseased region across multiple 2-D slices or a 3-D volume. Although some public data sets are available on the web, most are limited in size and suited only to particular problems, such as specific organs or diseases, and collecting a new medical data set is a complex and expensive process requiring multi-party collaboration.
In summary, a medical image synthesis and classification technique based on a conditional multi-discriminator generative adversarial network (CMDGAN) is proposed. Two networks are trained in an adversarial process, while several condition-augmented discriminator networks jointly judge the generated images and thereby guide the generation of high-quality images. Synthetic augmentation with high-quality generated examples is a novel and sophisticated form of data augmentation: a data set extended with such synthetic data has greater variability and can better improve the training of a deep neural network.
Summary of the invention:
The invention aims to solve the poor performance of existing network models on medical image classification, caused for example by the inability to obtain large numbers of medical images and by images lacking reliable labels. The invention provides a medical image synthesis and classification method, named CMDGAN, for synthesizing a high-quality data set and training a classification model for medical images. The invention mainly comprises: the concept of automatic medical image synthesis and classification; a pipeline for accurate extraction of lesion regions from medical CT images; the use of CMDGAN for synthetic augmentation of lesion-region images; the construction of a medical image classification network; the use of deep features for classifying real medical images; and a data set preprocessing pipeline.
A medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network, characterized by at least comprising the following steps:
step one, according to a radiologist's annotation of the lesion region in a computed tomography (CT) image, segment the medical CT image and extract the lesion regions of interest (ROIs);
step two, perform data preprocessing on the lesion ROIs extracted in step one to balance the image data among different lesions;
step three, design the model architecture of the conditional multi-discriminator generative adversarial network (CMDGAN) and train it with the preprocessed images from step two to obtain a generative model;
step four, use the generative model obtained in step three to perform synthetic data augmentation on the extracted lesion ROIs, obtaining a high-quality lesion ROIs data set;
and step five, design a multi-scale residual network (MRNet) and train it with the synthetic lesion ROIs data set from step four to obtain a high-accuracy classification neural network.
Beneficial effects:
Compared with the prior art, the design of the invention achieves the following technical effects:
1. The medical images are synthesized by a generative adversarial network with multiple conditional discriminators. The synthetic ROIs lesion data set carries more additional information and more variability than the original data set; synthetic augmentation with high-quality generated examples is a novel and sophisticated form of data augmentation, and a classification network trained on the lesion data set synthesized by this method is more accurate than one trained with traditional data augmentation;
2. Synthesizing medical images with the multi-discriminator adversarial network yields a large number of labeled lesion ROIs without requiring a specialist radiologist to annotate them, saving substantial time and expense in the data set preprocessing stage;
3. A domain-specific medical image data set can be acquired in a short time; the synthesized medical images are of high quality, and a classification network trained on this data set performs better on the test data set;
4. The method performs excellently on one lesion type and is readily applicable to many other organs and lesions;
5. The MRNet trained on the synthetic data classifies medical images much better than other classification networks, with high accuracy and better sensitivity and specificity.
Description of the drawings:
FIG. 1 is an architecture diagram of the conditional multi-discriminator generative adversarial network
FIG. 2 is a flow chart of image synthesis with the conditional multi-discriminator generative adversarial network
FIG. 3 is a flow chart of lesion ROIs extraction with DUTFCN
FIG. 4 is an architecture diagram of the multi-scale residual network
FIG. 5 is a flow chart of the overall image synthesis and classification process
The specific embodiments are as follows:
Step one: according to a radiologist's annotation of the lesion region in a computed tomography (CT) image, segment the medical CT image and extract the lesion regions of interest (ROIs).
(1) Extracting the suspected lesion region: taking thyroid lesions as an example, a professional radiologist analyzes the thyroid CT image, marks the edge of each lesion, and determines the corresponding diagnosis; the determination is additionally confirmed by biopsy and clinical follow-up, finally yielding the suspected lesion region.
(2) Feature analysis: the gray values of a suspected lesion region differ from those of normal tissue, so feature analysis is performed on the suspected lesion region to better delimit the lesion; the analyzed features mainly include the mean gray value, the standard deviation of the gray value, and the diameter.
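As an illustration, the three features named here can be computed from a gray-scale array and a lesion mask. The equivalent-diameter formula below (diameter of a circle with the same pixel area) is an assumption, since the text does not define how the diameter is measured:

```python
import numpy as np

def roi_features(image, mask):
    # mean gray value, gray-value standard deviation, and equivalent diameter
    pixels = image[mask]
    mean_gray = float(pixels.mean())
    std_gray = float(pixels.std())
    # hypothetical measure: diameter of a circle with the same pixel area
    diameter = 2.0 * float(np.sqrt(mask.sum() / np.pi))
    return mean_gray, std_gray, diameter

image = np.zeros((8, 8))
image[2:6, 2:6] = 100.0                  # bright 4x4 square as a toy "lesion"
mask = image > 50.0
mean_gray, std_gray, diameter = roi_features(image, mask)
print(mean_gray, std_gray)               # 100.0 0.0 for the uniform toy lesion
```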
(3) Extracting the final ROIs using the gray-level features: a deep U-shaped fully convolutional neural network (DUTFCN) is used to segment the image accurately and extract the final ROIs. The idea of DUTFCN is to segment biomedical images accurately with a fully convolutional network that contains no fully connected layers, which reduces the loss of position information and allows the network to accept input images of any size. Up-sampling is used instead of pooling to increase the resolution of the output. The network consists of a contracting sub-network for capturing context features and an expanding sub-network for localization; skip connections between the two fuse low-level and high-level features and compensate for the information lost in pooling, yielding better segmentation. The loss function combines a softmax with a weighted cross-entropy; the softmax is:
p_k(x) = exp(a_k(x)) / Σ_{k'=1}^{K} exp(a_{k'}(x))
where a_k(x) denotes the activation in feature channel k at pixel position x, K is the number of classes, and p_k(x) is the approximated maximum function. The weighted cross entropy is defined as:
E = −Σ_{x∈Ω} w(x) · log(p_{ℓ(x)}(x))
where ℓ(x) ∈ {1, 2, …, K} denotes the correct label of each pixel, and the weight function w is defined as:
w(x) = w_c(x) + w_0 · exp(−(d_1(x) + d_2(x))² / (2σ²))
where w_c is a weight map that balances the class frequencies, d_1 denotes the distance to the nearest ROI boundary, and d_2 the distance to the second-nearest ROI boundary. The model does not require a large amount of training data and converges well.
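A minimal NumPy sketch of this loss, combining the per-pixel softmax, the weighted cross-entropy, and the boundary-aware weight map. The constants w0 = 10 and σ = 5 follow the usual U-Net defaults and are assumptions here:

```python
import numpy as np

def pixel_softmax(a):
    # a: (K, H, W) channel activations a_k(x); softmax over the class axis
    e = np.exp(a - a.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def weighted_cross_entropy(a, labels, w):
    # labels: (H, W) correct class l(x) per pixel; w: (H, W) weight map w(x)
    p = pixel_softmax(a)
    h, wdt = labels.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(wdt)[None, :]
    p_true = p[labels, rows, cols]          # p_{l(x)}(x) for every pixel
    return -(w * np.log(p_true)).sum()

def unet_weight_map(wc, d1, d2, w0=10.0, sigma=5.0):
    # w(x) = w_c(x) + w0 * exp(-(d1(x) + d2(x))^2 / (2 * sigma^2))
    return wc + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))

a = np.zeros((2, 3, 3))                     # two classes, uniform activations
labels = np.zeros((3, 3), dtype=int)
loss = weighted_cross_entropy(a, labels, np.ones((3, 3)))
print(loss)                                  # 9 * log(2): every pixel has p = 0.5
```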
Step two: perform data preprocessing on the lesion ROIs extracted in step one to balance the image data among different lesions.
(1) Sample equalization: equalize the samples of the precise thyroid lesion ROIs segmented by the DUTFCN in step one, adjusting the ratio of the images of the four classes of thyroid lesion ROIs to 1:1:1:1.
(2) Generating the data set to be synthesized: after sample equalization, the thyroid lesion ROIs form the pre-synthesis data set.
(3) Generating the classification test data set: after sample equalization, a classification test data set is also set aside for verifying classification accuracy.
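Sample equalization as described (bringing the four classes to a 1:1:1:1 ratio) can be done by down-sampling every class to the size of the smallest. Whether the patent down-samples or over-samples is not stated, so this is one plausible reading:

```python
import random

def equalize(samples_by_class, seed=0):
    # down-sample every class to the size of the smallest one (1:1:1:1 ratio);
    # down- vs over-sampling is an assumption, the patent only states the ratio
    rng = random.Random(seed)
    n = min(len(v) for v in samples_by_class.values())
    return {c: rng.sample(v, n) for c, v in samples_by_class.items()}

rois = {"goiter": list(range(40)), "hyperthyroidism": list(range(25)),
        "thyroiditis": list(range(30)), "thyroid tumor": list(range(25))}
balanced = equalize(rois)
print({c: len(v) for c, v in balanced.items()})   # every class now has 25 samples
```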
Step three: design the model architecture of the conditional multi-discriminator generative adversarial network (CMDGAN) and train it with the preprocessed images obtained in step two to obtain a generative model.
(1) In the CMDGAN, the generator has two inputs, a class label C and a noise vector Z for each sample to be generated, which are used to produce fake samples.
(2) Real samples with their class labels serve as the true data X_real, and the generator's outputs serve as the fake data X_fake; both are fed simultaneously to every discriminator network, and each discriminator produces two outputs: whether the picture is real or fake, and which class it belongs to.
(3) The outputs of the N discriminator networks are fed simultaneously into an averaging function, whose output is the final decision:

D_final(x) = (1/N) Σ_{i=1}^{N} D_i(x)
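The text describes averaging the discriminator outputs into a final verdict; a plain mean over the per-discriminator probabilities is the natural reading (the original formula survives only as an image placeholder, so this is an assumption):

```python
import numpy as np

def fuse_discriminators(outputs):
    # final verdict = mean of the N per-discriminator probabilities D_i(x)
    return float(np.mean(outputs))

verdict = fuse_discriminators([0.9, 0.7, 0.8])
print(round(verdict, 3))                 # 0.8
```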
(4) The conditional multi-discriminator generative adversarial network (CMDGAN) is trained with the thyroid lesion ROIs image data to obtain a synthetic thyroid lesion data set covering four diseases: goiter, hyperthyroidism, thyroiditis, and thyroid tumor. Training is a game between the convolutional neural networks of the generator and the discriminators: the goal of the generator G is to produce pictures realistic enough to fool the discriminator network D, while the goal of D is to tell G's pictures apart from real ones. In the ideal outcome of this game, G produces pictures G(z) that are indistinguishable from real images; D can then no longer decide whether a generated picture is real, so D(G(z)) = 0.5. A generative model G is finally obtained and used to generate synthetic images of the different lesion types.
Definition 1, generator network: G is the network that generates pictures; its input is random noise z, from which it generates a picture denoted G(z).
Definition 2, discriminator network: D is the network that judges whether a picture is real. Its input is a picture x, and its output D(x) represents the probability that x is real: an output of 1 means the picture is certainly real, and an output of 0 means it cannot be real.
(5) Train the two networks by gradient descent:
for the number of training iterations do:
  for k steps do:
    sample m noise samples {z^(1), …, z^(m)} from the noise prior p_g(z);
    sample m examples {x^(1), …, x^(m)} from the data distribution p_data(x);
    with the generator G fixed, train the discriminator network D to distinguish real samples from generated samples as accurately as possible;
    update the discriminator parameters with the gradient:
      ∇_{θ_d} (1/m) Σ_{i=1}^{m} [log D(x^(i)) + log(1 − D(G(z^(i))))]
  end for
  sample m noise samples {z^(1), …, z^(m)} from the noise prior p_g(z);
  update the generator parameters with the gradient:
    ∇_{θ_g} (1/m) Σ_{i=1}^{m} log(1 − D(G(z^(i))))
end for
Gradient-based updates can use any standard gradient learning rule; momentum is used in the experiments to avoid falling into local optima.
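The momentum rule mentioned here keeps a velocity term so each update retains a fraction of the previous step, which helps both networks escape poor local optima. The hyper-parameter values below are illustrative assumptions:

```python
import numpy as np

def momentum_step(theta, grad, velocity, lr=0.01, beta=0.9):
    # classic momentum: keep a decaying running average of past gradients
    velocity = beta * velocity - lr * grad
    return theta + velocity, velocity

theta = np.array([1.0, -2.0])
v = np.zeros(2)
for _ in range(3):               # three toy steps on f(theta) = ||theta||^2, grad = 2*theta
    theta, v = momentum_step(theta, 2.0 * theta, v)
print(theta)                     # both coordinates shrink toward 0
```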
(6) Optimize the loss function
The objective function of each discriminator network has two parts: the log-likelihood of the correct source, L_s, and the log-likelihood of the correct class, L_c:
L_s = E[log P(S = real | X_real)] + E[log P(S = fake | X_fake)]   (7)
L_c = E[log P(C = c | X_real)] + E[log P(C = c | X_fake)]   (8)
The adversarial loss is:
min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]
where p_data(x) denotes the real data distribution and p_z(z) the noise prior. By continually optimizing this loss, a generative model is finally obtained.
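On toy probability values, the two discriminator log-likelihood terms L_s and L_c from Eqs. (7)-(8) can be evaluated directly, with batch means standing in for the expectations:

```python
import numpy as np

def source_and_class_loglik(p_real_on_real, p_fake_on_fake, p_class_real, p_class_fake):
    # L_s: log-likelihood of the correct source; L_c: of the correct class.
    # Batch means stand in for the expectations E[...] of Eqs. (7)-(8).
    ls = np.mean(np.log(p_real_on_real)) + np.mean(np.log(p_fake_on_fake))
    lc = np.mean(np.log(p_class_real)) + np.mean(np.log(p_class_fake))
    return ls, lc

ls, lc = source_and_class_loglik(
    p_real_on_real=np.array([0.9, 0.8]),   # P(S=real | X_real)
    p_fake_on_fake=np.array([0.7, 0.6]),   # P(S=fake | X_fake)
    p_class_real=np.array([0.95, 0.85]),   # P(C=c | X_real)
    p_class_fake=np.array([0.60, 0.50]))   # P(C=c | X_fake)
print(ls, lc)   # both negative; each discriminator is trained to maximize L_s + L_c
```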
Step four: use the generative model obtained in step three to perform synthetic data augmentation on the extracted lesion ROIs, obtaining a high-quality lesion ROIs data set.
(1) Draw random points in the latent space, i.e., generate random noise.
(2) Feed the noise to the generative model described above, which maps the learned noise distribution to images.
(3) Mix the generated images with real images.
(4) Feed the mixed images to the discriminator networks for judgment against the corresponding targets.
(5) When the discriminator networks can no longer tell the generated images from the real ones, a large number of high-quality medical images can be synthesized.
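Steps (1)-(3) above amount to: sample latent noise, push it through the trained generator, and mix the outputs with real images before handing the pool to the discriminators. A toy sketch with a placeholder generator (the real G would be the trained CMDGAN model):

```python
import random
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # stand-in for the trained model G: maps a 16-d noise vector to a fake "ROI"
    return np.tanh(z.reshape(4, 4))

def synthesize(n):
    # steps (1)-(3): sample latent noise, generate images, mix with real ones
    real = [rng.random((4, 4)) for _ in range(n)]
    fake = [generator(rng.standard_normal(16)) for _ in range(n)]
    mixed = real + fake
    random.shuffle(mixed)        # the discriminators then judge the mixed pool
    return mixed

batch = synthesize(3)
print(len(batch))                # 6 images: half real, half synthetic
```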
Step five: design a multi-scale residual network (MRNet) and train it with the synthetic lesion ROIs data set obtained in step four to obtain a high-accuracy classification neural network.
(1) The multi-scale residual network is built from several convolution types and a large number of residual blocks; it increases the network depth while keeping the parameter count as small as possible, and the dimensionality-reduction with 1 × 1 convolution kernels is arranged so as not to lose large amounts of information.
Definition 1, residual block: consider a two-layer sub-network in which activating layer l yields a^[l+1], and activating again yields a^[l+2]. The computation starts from a^[l]; the residual shortcut adds a^[l] to the linear pre-activation z^[l+2] before the nonlinearity, finally producing a^[l+2] = g(z^[l+2] + a^[l]).
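The residual block of Definition 1 computes a^[l+2] = g(z^[l+2] + a^[l]); a tiny NumPy forward pass shows the identity shortcut (the weight matrices are toy stand-ins for the convolution layers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(a_l, w1, w2):
    # a^[l+2] = g(z^[l+2] + a^[l]): the shortcut is added before the last ReLU
    z1 = w1 @ a_l
    a1 = relu(z1)                # a^[l+1]
    z2 = w2 @ a1                 # z^[l+2]
    return relu(z2 + a_l)        # skip connection

a_l = np.array([1.0, 2.0])
zero = np.zeros((2, 2))          # toy stand-ins for the conv weights
out = residual_block(a_l, zero, zero)
print(out)                       # [1. 2.]: with zero weights the block is the identity
```

Because the shortcut passes a^[l] through unchanged, gradients can flow around the two weight layers, which is why deep stacks of such blocks avoid vanishing gradients.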
(2) Matched to the characteristics of the synthetic thyroid lesion ROIs, the network is a classification network containing many residual blocks, with a dropout layer added to prevent overfitting; this effectively prevents vanishing and exploding gradients during training, so training converges faster and more stably.
(3) Train the classification network on the synthetic medical images obtained in step four and evaluate it on the test set; the classification network trained on the synthetic medical images yields more accurate classification results and higher specificity and sensitivity.

Claims (1)

1. A medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network, characterized by at least comprising the following steps:
step one, a radiologist obtains a suspected lesion region on a computed tomography (CT) image by combining biopsy and clinical findings, and then performs feature analysis, distinguishing the suspected lesion from normal tissue by the image features of mean gray value, standard deviation of the gray value, and diameter; a deep U-shaped fully convolutional neural network DUTFCN is designed that contains no fully connected layers, which reduces the loss of position information and allows input pictures of any size; up-sampling replaces the pooling operation to increase the output resolution; the network consists of a contracting sub-network for capturing context features and an expanding sub-network for localization, connected by skip connections that fuse low-level and high-level features and compensate for the information lost in pooling; the features are input into the DUTFCN to accurately segment the CT image and extract the lesion regions of interest (ROIs);
step two, adjusting the number of ROIs samples extracted in step one so that each class has an equal number of samples, balancing the image data among the different lesions;
step three, designing the model architecture of the conditional multi-discriminator generative adversarial network (CMDGAN), wherein the CMDGAN has two kinds of input, labels and noise; real sample classes serve as real data labels and, together with the fake samples produced by the generator, are input simultaneously into a plurality of discriminator networks for separate judgment; the objective function of each discriminator network has two parts, the log-likelihood of the correct source and the log-likelihood of the correct class; the CMDGAN is trained on a thyroid lesion image data set comprising four diseases, goiter, hyperthyroidism, thyroiditis, and thyroid tumor; a gradient-descent algorithm designed for the CMDGAN trains the generator and discriminator networks synchronously, momentum is used in the gradient updates to avoid falling into local optima, and a generative model is finally obtained;
step four, performing synthetic data augmentation on the extracted lesion ROIs using the generative model obtained in step three to obtain a high-quality lesion ROIs data set;
and step five, designing a multi-scale residual network (MRNet), training the network with the synthetic lesion ROIs data set obtained in step four to obtain a high-accuracy classification neural network, adding a dropout layer to prevent overfitting, and finally performing medical image classification.
CN201811350964.3A 2018-11-14 2018-11-14 Medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network Expired - Fee Related CN109493308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811350964.3A CN109493308B (en) 2018-11-14 2018-11-14 Medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network


Publications (2)

Publication Number Publication Date
CN109493308A (en) 2019-03-19
CN109493308B (en) 2021-10-26

Family

ID=65696002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811350964.3A Expired - Fee Related 2018-11-14 2018-11-14 Medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network

Country Status (1)

Country Link
CN (1) CN109493308B (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070935B (en) * 2019-03-20 2021-04-30 中国科学院自动化研究所 Medical image synthesis method, classification method and device based on antagonistic neural network
US11049239B2 (en) * 2019-03-29 2021-06-29 GE Precision Healthcare LLC Deep neural network based identification of realistic synthetic images generated using a generative adversarial network
CN109978897B (en) * 2019-04-09 2020-05-08 中国矿业大学 Registration method and device for heterogeneous remote sensing images of multi-scale generation countermeasure network
CN110176302A (en) * 2019-04-17 2019-08-27 南京医科大学 Utilize the lower limb line of force Intelligent Calibration confirmation method for generating confrontation network model
CN109984841B (en) * 2019-04-17 2021-12-17 南京医科大学 System for intelligently eliminating osteophytes of lower limb bone images by utilizing generated confrontation network model
CN110101401B (en) * 2019-04-18 2023-04-07 浙江大学山东工业技术研究院 Liver contrast agent digital subtraction angiography method
CN110070129B (en) * 2019-04-23 2021-07-16 上海联影智能医疗科技有限公司 Image detection method, device and storage medium
CN110074813B (en) * 2019-04-26 2022-03-04 深圳大学 Ultrasonic image reconstruction method and system
CN110070540B (en) * 2019-04-28 2023-01-10 腾讯科技(深圳)有限公司 Image generation method and device, computer equipment and storage medium
CN110147830B (en) * 2019-05-07 2022-02-11 东软集团股份有限公司 Method for training image data generation network, image data classification method and device
CN110189272B (en) * 2019-05-24 2022-11-01 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN110264424B (en) * 2019-06-20 2021-05-04 北京理工大学 Blurred retinal fundus image enhancement method based on generative adversarial network
CN110459303B (en) * 2019-06-27 2022-03-08 浙江工业大学 Medical image anomaly detection device based on deep transfer learning
CN110544275B (en) * 2019-08-19 2022-04-26 中山大学 Methods, systems, and media for generating registered multi-modality MRI with lesion segmentation tags
CN110600105B (en) * 2019-08-27 2022-02-01 武汉科技大学 CT image data processing method, device and storage medium
CN110675353A (en) * 2019-08-31 2020-01-10 电子科技大学 Selective segmentation image synthesis method based on conditional generative adversarial network
CN110796080B (en) * 2019-10-29 2023-06-16 重庆大学 Multi-pose pedestrian image synthesis algorithm based on generative adversarial network
CN113261012B (en) * 2019-11-28 2022-11-11 华为云计算技术有限公司 Method, device and system for processing image
CN111047546A (en) * 2019-11-28 2020-04-21 中国船舶重工集团公司第七一七研究所 Infrared image super-resolution reconstruction method and system and electronic equipment
CN111274429A (en) * 2020-01-14 2020-06-12 广东工业大学 Data-enhanced unsupervised trademark retrieval system and method based on GAN
CN113449755B (en) * 2020-03-26 2022-12-02 阿里巴巴集团控股有限公司 Data processing method, model training method, device, equipment and storage medium
CN111539467A (en) * 2020-04-17 2020-08-14 北京工业大学 GAN architecture and method for data augmentation of medical image datasets
CN111639676B (en) * 2020-05-07 2022-07-29 安徽医科大学第二附属医院 Chest medical image recognition and classification method applicable to COVID-19 image analysis
CN111882509A (en) * 2020-06-04 2020-11-03 江苏大学 Medical image data generation and detection method based on generative adversarial network
CN111767861B (en) * 2020-06-30 2024-03-12 苏州兴钊防务研究院有限公司 SAR image target recognition method based on multi-discriminator generative adversarial network
CN112241766B (en) * 2020-10-27 2023-04-18 西安电子科技大学 Liver CT image multi-lesion classification method based on sample generation and transfer learning
CN112560575B (en) * 2020-11-09 2023-07-18 北京物资学院 Red Fuji apple shape data enhancement device and method
CN112462001B (en) * 2020-11-17 2021-07-23 吉林大学 Gas sensor array model calibration method with data augmentation based on conditional generative adversarial network
CN112837317A (en) * 2020-12-31 2021-05-25 无锡祥生医疗科技股份有限公司 Lesion classification method and device based on breast ultrasound image enhancement, and storage medium
CN113327221A (en) * 2021-06-30 2021-08-31 北京工业大学 Image synthesis method and device fusing ROI (region of interest), electronic equipment and medium
TWI762375B (en) * 2021-07-09 2022-04-21 國立臺灣大學 Semantic segmentation failure detection system
CN113592752B (en) * 2021-07-12 2023-06-23 四川大学 Road traffic light offset image enhancement method and device based on countermeasure network
CN114782443A (en) * 2022-06-22 2022-07-22 深圳科亚医疗科技有限公司 Device and storage medium for data-augmentation-based aneurysm risk assessment
CN116402812B (en) * 2023-06-07 2023-09-19 江西业力医疗器械有限公司 Medical image data processing method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016406A (en) * 2017-02-24 2017-08-04 中国科学院合肥物质科学研究院 Pest and disease image generation method based on generative adversarial network
CN107563995A (en) * 2017-08-14 2018-01-09 华南理工大学 Multi-discriminator adversarial network method with staggered training
CN107844743B (en) * 2017-09-28 2020-04-28 浙江工商大学 Automatic image multi-caption generation method based on multi-scale hierarchical residual network
CN107909621A (en) * 2017-11-16 2018-04-13 深圳市唯特视科技有限公司 Medical image synthesis method based on Siamese generative adversarial network
CN108198179A (en) * 2018-01-03 2018-06-22 华南理工大学 CT medical image pulmonary nodule detection method based on improved generative adversarial network
CN108765333B (en) * 2018-05-24 2021-08-10 华南理工大学 Depth map completion method based on deep convolutional neural network
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 Image super-resolution reconstruction method and device based on deep learning

Also Published As

Publication number Publication date
CN109493308A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109493308B (en) Medical image synthesis and classification method based on conditional multi-discriminator generative adversarial network
Xu et al. Efficient multiple organ localization in CT image using 3D region proposal network
Sori et al. DFD-Net: lung cancer detection from denoised CT scan image using deep learning
CN108268870B (en) Multi-scale feature fusion ultrasound image semantic segmentation method based on adversarial learning
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN110276745B (en) Pathological image detection algorithm based on generative adversarial network
CN110084318B (en) Image identification method combining convolutional neural network and gradient lifting tree
CN110853011B (en) Method for constructing convolutional neural network model for pulmonary nodule detection
Yi et al. Optimizing and visualizing deep learning for benign/malignant classification in breast tumors
CN109754007A (en) Intelligent capsule detection and early warning method and system in prostate surgery
Shen et al. Mass image synthesis in mammogram with contextual information based on GANs
Cao et al. A multi-kernel based framework for heterogeneous feature selection and over-sampling for computer-aided detection of pulmonary nodules
Guo et al. Multi-level semantic adaptation for few-shot segmentation on cardiac image sequences
CN111882509A (en) Medical image data generation and detection method based on generative adversarial network
CN115731178A (en) Cross-modal unsupervised domain self-adaptive medical image segmentation method
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
CN111767952A (en) Interpretable classification method for benign and malignant pulmonary nodules
CN114693933A (en) Medical image segmentation device based on generative adversarial network and multi-scale feature fusion
CN107729926A (en) Data augmentation method based on high-dimensional space transformation, and machine recognition system
CN114332572B (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
Wan et al. Hierarchical temporal attention network for thyroid nodule recognition using dynamic CEUS imaging
CN116664911A (en) Breast tumor image classification method based on interpretable deep learning
Anaam et al. Studying the applicability of generative adversarial networks on HEp-2 cell image augmentation
CN108090507A (en) Medical image texture feature processing method based on ensemble methods
Xie et al. Joint segmentation and classification task via adversarial network: Application to HEp-2 cell images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211026