CN113537031B - Radar image target identification method based on a multi-discriminator conditional generative adversarial network - Google Patents


Info

Publication number
CN113537031B
Authority
CN
China
Prior art keywords
image
label
discriminator
discriminators
image data
Prior art date
Legal status
Active
Application number
CN202110783193.2A
Other languages
Chinese (zh)
Other versions
CN113537031A (en)
Inventor
黄钰林
裴季方
刘小钰
王陈炜
霍伟博
杨海光
张寅
杨建宇
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China
Priority to CN202110783193.2A
Publication of CN113537031A
Application granted
Publication of CN113537031B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a radar image target identification method based on a multi-discriminator conditional generative adversarial network, comprising the following steps: S1, inputting random noise and labeled image data samples into the generator of a conditional generative adversarial network to generate pseudo images; S2, inputting the pseudo images and real labeled image data samples respectively into two discriminators for training; S3, inputting unlabeled image data samples into the two trained discriminators to obtain image data samples with predicted labels; and S4, combining the initially labeled real image data samples with the image data samples assigned predicted labels by the discriminators as the training data set, and training a CNN on the basis of semi-supervised learning to realize radar image target identification. The invention has more accurate image generation capability and improves identification accuracy. It also effectively improves the identification accuracy and robustness of the network when only a small number of labeled samples are available.

Description

Radar image target identification method based on a multi-discriminator conditional generative adversarial network
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a radar image target recognition method based on a multi-discriminator conditional generative adversarial network.
Background
In remote sensing image processing, imaging radar microwave remote sensing mainly detects the microwave information reflected after a target interacts with microwave-band electromagnetic waves. The radar image therefore contains the shape and scattering information of the target object and can well represent target characteristic information. Consequently, target identification technology based on radar images is of great significance in both the military and civil fields.
Deep convolutional neural networks (CNN) have made progress in the field of radar image target recognition due to their powerful high-level feature learning capability. However, such methods require large labeled data sets to train the model, while most radar image data sets are unlabeled or sparsely labeled, which leads to overfitting when training deep networks. The document "Chen, Sizhe, et al. 'Target classification using the deep convolutional networks for SAR images.' IEEE Transactions on Geoscience and Remote Sensing 54.8 (2016): 4806-4817" proposes a fully convolutional network in which all fully connected layers are replaced by convolutional layers. The method reduces overfitting by reducing the number of model parameters, but its recognition performance decreases significantly as the number of labeled training samples decreases. Since manually labeling radar image data sets is time-consuming and labor-intensive, it is important to improve CNN training performance on label-limited training sets.
Transfer learning can overcome the label-limited problem by initializing the model with parameters learned from a large data set and then fine-tuning the initialized model with a small number of labeled samples of the target task. The document "Huang, Zhongling, Zongxu Pan, and Bin Lei. 'Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data.' Remote Sensing 9.9 (2017): 907" pre-trains an unsupervised generative network, an auto-encoder, to learn features from a large number of unlabeled radar images; a classification layer is then added and the entire network is fine-tuned. The method enhances the recognition performance of CNN on small labeled training sets. However, such methods train only on the labeled samples of the target domain and ignore the information in the unlabeled samples of the target domain. Thus, target recognition research based on semi-supervised learning has emerged. Semi-supervised learning aims to build better classifiers by learning the features of labeled data samples and thereby predicting a large number of unlabeled data samples.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a radar image target identification method based on a multi-discriminator conditional generative adversarial network that has more accurate image generation capability, improves identification accuracy, and effectively improves the identification accuracy and robustness of the network when only a small number of labeled samples are available.
The purpose of the invention is realized by the following technical scheme: a radar image target identification method based on a multi-discriminator conditional generative adversarial network, comprising the following steps:
S1, inputting random noise and labeled image data samples into a generator of a conditional generative adversarial network to generate pseudo images;
S2, inputting the pseudo images and the real labeled image data samples respectively into two discriminators for training, wherein the first discriminator predicts the label of the input image and the second discriminator judges whether the input image is a real image or a pseudo image;
S3, inputting unlabeled image data samples into the two trained discriminators, which assign class labels to the samples to obtain image data samples with predicted labels;
and S4, combining the initially labeled real image data samples and the image data samples assigned predicted labels by the discriminators as the training data set of the network, and training a CNN on the basis of semi-supervised learning to realize radar image target identification.
Further, the specific implementation method of step S2 is as follows:
the loss function of the first discriminator is set as the cross-entropy function

L_D1 = −Σ_{i=1}^{N} x_i·log(y_i)

where N represents the total number of target label categories, x is the one-hot vector of the real label, y is the probability vector output by the discriminator over the categories, and y_i is the predicted probability of category i;
the loss function of the multi-discriminator conditional generative adversarial network is expressed as:

min_G max_D V(G, D) = E_{z∼P_data(r)}[log D(z|r)] + E_{n∼P_n(n)}[log(1 − D(G(n|r)))] − L_D1

where G denotes the generator, D denotes the discriminator, V(G, D) is the objective function, E[·] denotes mathematical expectation, n is the input random noise, P_n(n) is the distribution of the random noise, P_data(r) is the distribution of the input sample data, r is the input labeled data sample, z is a real data sample, G(n|r) is the output of the generator given random noise n conditioned on the labeled data sample, D(G(n|r)) is the discriminator's probability estimate that generated data is real, and D(z|r) is the discriminator's probability estimate that real data is real.
further, the specific implementation method of step S3 is as follows: inputting image data samples without labels into trained discriminators, and outputting N-dimensional logarithmic vectors by the two discriminators
l=[l 1 ,l 2 ,...,l N ]
Then, the last layer Softmax is converted into an N-dimensional class label probability vector:
p=[p 1 ,p 2 ,...,p N ]
wherein l is a logarithmic vector output by the multiple discriminators, p is a category label probability vector output by the multiple discriminators, and N is the total number of categories of the target label; the processing equation of the Softmax layer is expressed as
Figure BDA0003157778950000031
Selecting positive samples with a threshold value, assigning class labels corresponding to probabilities greater than the threshold value to input unlabeled samples, the threshold value being set to P th ≥0.8;
If N elements in the category probability vector p finally output by the image input into the multi-discriminator are all smaller than 0.8, judging the image as a pseudo image and abandoning the pseudo image; if there are elements greater than 0.8 in the class probability vector p, the class corresponding to the maximum value among the elements is selected, and the label is assigned to the input image.
The beneficial effects of the invention are: compared with other methods based on semi-supervised learning, the proposed method has more accurate image generation capability and improves identification accuracy. It also effectively improves the identification accuracy and robustness of the network when only a small number of labeled samples are available.
Drawings
FIG. 1 is a flow chart of the radar image target identification method of the present invention;
FIG. 2 shows the labeled image data samples of this embodiment;
FIG. 3 shows a pseudo image generated in this embodiment.
Detailed Description
The present invention trains two discriminators using pseudo images generated by a conditional generator together with real labeled images. The method comprises three stages: training the generator and discriminators, predicting labels, and training and testing.
The technical scheme of the invention is further explained by combining the attached drawings.
As shown in fig. 1, the radar image target identification method based on a multi-discriminator conditional generative adversarial network of the present invention comprises the following steps:
S1, inputting random noise and labeled image data samples into the generator of a conditional generative adversarial network (CGAN) to generate pseudo images.
The loss function of the conditional generator can be regarded as a minimax problem, represented by the following equation:

min_G max_D V(G, D) = E_{z∼P_data(r)}[log D(z|r)] + E_{n∼P_n(n)}[log(1 − D(G(n|r)))]     (1)

where G denotes the generator, D denotes the discriminator, V(G, D) is the objective function, E[·] denotes mathematical expectation, n is the input random noise, P_n(n) is the distribution of the random noise, P_data(r) is the distribution of the input sample data, r is the input labeled data sample, z is a real data sample, G(n|r) is the output of the generator given random noise n conditioned on the labeled data sample, D(G(n|r)) is the discriminator's probability estimate that generated data is real, and D(z|r) is the discriminator's probability estimate that real data is real.
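As an illustrative sketch (the function name and the batch representation are assumptions, not from the patent), the objective V(G, D) above can be estimated from discriminator outputs on a batch of real samples and a batch of generated samples:

```python
import math

def value_fn(d_real, d_fake):
    """Monte-Carlo estimate of V(G, D) = E[log D(z|r)] + E[log(1 - D(G(n|r)))]
    from discriminator outputs on real samples (d_real) and on generated
    samples (d_fake); each output is a probability in (0, 1)."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake
```

The discriminator ascends this value while the generator descends it, which is exactly the minimax game written in equation (1).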
The class label and the random noise signal are combined as a hidden-space representation. The label of the image data sample serves as the condition of the generator, so that the image data generated by the generator is conditionally supervised and the generation process is not completely free and unsupervised. The conditional generator uses the target category r as the class label with one-hot coding: the element of r corresponding to the true class is '1', the remaining elements are '0', and the random noise n is uniformly distributed. The random noise n and the class label r are each mapped to a hidden layer and then concatenated as the input of the next fully connected layer, with ReLU as the activation function. The two fully connected layers are followed by two deconvolution layers, which can be viewed as the inverse of convolution layers with the same convolution kernels and pooling indices. When the input is an image, deconvolution maps the feature values back into the pixel space of the input image, thereby indicating which input pixels participated in activating those feature values. The last layer of the conditional generator network uses a Sigmoid function to obtain the final generated image sample. The network parameter settings of the generator and discriminators of the invention are shown in Table 1.
TABLE 1
[Table 1 appears as an image in the original document.]
The labeled image data samples of this embodiment are shown in fig. 2, and a generated pseudo image is shown in fig. 3.
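The hidden-space representation described above — uniformly distributed random noise n concatenated with the one-hot class label r — can be sketched as follows (function names and dimensions are illustrative assumptions, not from the patent):

```python
import random

def one_hot(label, num_classes):
    """One-hot encode a class label: the element of the true class is 1, the rest are 0."""
    v = [0.0] * num_classes
    v[label] = 1.0
    return v

def generator_input(label, num_classes, noise_dim):
    """Hidden-space representation for the conditional generator:
    uniform random noise n concatenated with the one-hot class label r."""
    n = [random.uniform(-1.0, 1.0) for _ in range(noise_dim)]
    r = one_hot(label, num_classes)
    return n + r  # the concatenation feeds the next fully connected layer
```

In a full implementation this vector would pass through the fully connected, deconvolution, and Sigmoid layers described above; here only the input construction is shown.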
S2, inputting the pseudo images and the real labeled image data samples respectively into two discriminators for training, wherein the first discriminator predicts the label of the input image and the second discriminator judges whether the input image is a real image or a pseudo image. The specific implementation method is as follows:
The loss function of the first discriminator is set as the cross-entropy function

L_D1 = −Σ_{i=1}^{N} x_i·log(y_i)     (2)

where N represents the total number of target label categories, x is the one-hot vector of the real label, y is the probability vector output by the discriminator over the categories, and y_i is the predicted probability of category i.
From equations (1) and (2), the loss function of the multi-discriminator conditional generative adversarial network is given as:

min_G max_D V(G, D) = E_{z∼P_data(r)}[log D(z|r)] + E_{n∼P_n(n)}[log(1 − D(G(n|r)))] − L_D1

The parameters of the generator and discriminators of the conditional generative adversarial network are optimized with this loss function until the loss function converges.
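A minimal sketch of the first discriminator's cross-entropy loss L_D1 (the function name and the small epsilon guard against log(0) are assumptions for numerical safety, not from the patent):

```python
import math

def cross_entropy(x, y, eps=1e-12):
    """L_D1 = -sum_i x_i * log(y_i): x is the one-hot vector of the true
    label, y the class-probability vector predicted by the first
    discriminator; eps guards against log(0)."""
    return -sum(xi * math.log(yi + eps) for xi, yi in zip(x, y))
```

Because x is one-hot, the sum reduces to the negative log-probability the discriminator assigns to the true class, so confident correct predictions give a loss near zero.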
S3, inputting unlabeled image data samples into the two trained discriminators, which assign class labels to the samples to obtain image data samples with predicted labels. The specific implementation method is as follows: unlabeled image data samples are input into the trained discriminators, which output an N-dimensional logit vector

l = [l_1, l_2, ..., l_N]

that the last Softmax layer then converts into an N-dimensional class-label probability vector:

p = [p_1, p_2, ..., p_N]

where l is the logit vector output by the multiple discriminators (each element corresponding to one category), p is the class-label probability vector output by the multiple discriminators (each element corresponding to one category), and N is the total number of target label categories. The Softmax layer is expressed as

p_i = exp(l_i) / Σ_{j=1}^{N} exp(l_j)

Since the output probabilities of the Softmax layer may be very close, positive samples are selected using a threshold: the class label whose probability exceeds the threshold P_th = 0.8 is assigned to the input unlabeled sample.
If all N elements of the class probability vector p finally output for an input image are smaller than 0.8, the image is judged to be a pseudo image and discarded; if there are elements greater than 0.8 in p, the class corresponding to the maximum element is selected and its label is assigned to the input image.
In this way the multi-discriminator assigns a class label to the unlabeled image data samples, yielding image data samples with predicted labels.
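The Softmax conversion and the 0.8-threshold pseudo-labeling rule of step S3 can be sketched as follows (function names are illustrative assumptions; the rule itself follows the description above):

```python
import math

def softmax(logits):
    """Convert the N-dimensional logit vector l into the class-probability vector p."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def assign_pseudo_label(logits, p_th=0.8):
    """Threshold rule of step S3: if every class probability is below p_th,
    the sample is judged a pseudo image and discarded (None is returned);
    otherwise the class with the maximum probability is assigned."""
    p = softmax(logits)
    if max(p) < p_th:
        return None  # discard as pseudo image
    return p.index(max(p))
```

With near-uniform logits every probability stays well below 0.8 and the sample is discarded, while a strongly peaked logit vector yields a confident pseudo label.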
And S4, combining the initially labeled real image data samples and the image data samples assigned predicted labels by the discriminators as the training data set of the network, and training a CNN on the basis of semi-supervised learning to realize radar image target identification. Table 2 compares the recognition accuracy of the method of the present invention with that of other recognition methods, using the images of this embodiment as test data for the trained model.
TABLE 2
[Table 2 appears as an image in the original document.]
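A minimal sketch of how the training set of step S4 could be assembled (names and the data representation are assumptions; `predict` stands for the trained discriminators' label assignment, returning None for discarded pseudo images):

```python
def build_training_set(labeled, unlabeled, predict):
    """Step S4: merge the initially labeled real samples with the unlabeled
    samples to which the trained discriminators assigned a predicted label.
    `labeled` is a list of (sample, label) pairs; `predict` maps a sample
    to a class label, or to None when the sample is discarded."""
    pseudo_labeled = [(x, y) for x in unlabeled
                      if (y := predict(x)) is not None]
    return list(labeled) + pseudo_labeled
```

The merged list would then be used to train the CNN; samples for which `predict` returns None (judged pseudo images) never enter the training set.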
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention, and the invention is not limited to the specifically described embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations remain within the scope of the invention.

Claims (2)

1. A radar image target identification method based on a multi-discriminator conditional generative adversarial network, characterized by comprising the following steps:
S1, inputting random noise and labeled image data samples into a generator of a conditional generative adversarial network to generate pseudo images;
S2, inputting the pseudo images and the real labeled image data samples respectively into two discriminators for training, wherein the first discriminator predicts the label of the input image and the second discriminator judges whether the input image is a real image or a pseudo image; the specific implementation method is as follows:
the loss function of the first discriminator is set as the cross-entropy function

L_D1 = −Σ_{i=1}^{N} x_i·log(y_i)

where N represents the total number of target label categories, x is the one-hot vector of the real label, y is the probability vector output by the discriminator over the categories, and y_i is the predicted probability of category i;
the loss function of the multi-discriminator conditional generative adversarial network is expressed as:

min_G max_D V(G, D) = E_{z∼P_data(r)}[log D(z|r)] + E_{n∼P_n(n)}[log(1 − D(G(n|r)))] − L_D1

where G denotes the generator, D denotes the discriminator, V(G, D) is the objective function, E[·] denotes mathematical expectation, n is the input random noise, P_n(n) is the distribution of the random noise, P_data(r) is the distribution of the input sample data, r is the input labeled data sample, z is a real data sample, G(n|r) is the output of the generator given random noise n conditioned on the labeled data sample, D(G(n|r)) is the discriminator's probability estimate that generated data is real, and D(z|r) is the discriminator's probability estimate that real data is real;
optimizing the generator and discriminators of the conditional generative adversarial network with the loss function until the loss function converges;
S3, inputting unlabeled image data samples into the two trained discriminators, which assign class labels to the samples to obtain image data samples with predicted labels;
and S4, combining the initially labeled real image data samples and the image data samples assigned predicted labels by the discriminators as the training data set of the network, and training a CNN on the basis of semi-supervised learning to realize radar image target identification.
2. The radar image target identification method based on a multi-discriminator conditional generative adversarial network according to claim 1, wherein step S3 is implemented as follows: unlabeled image data samples are input into the trained discriminators, which output an N-dimensional logit vector

l = [l_1, l_2, ..., l_N]

that the last Softmax layer then converts into an N-dimensional class-label probability vector:

p = [p_1, p_2, ..., p_N]

where l is the logit vector output by the multiple discriminators, p is the class-label probability vector output by the multiple discriminators, and N is the total number of target label categories; the Softmax layer is expressed as

p_i = exp(l_i) / Σ_{j=1}^{N} exp(l_j)

positive samples are selected with a threshold: the class label whose probability exceeds the threshold P_th = 0.8 is assigned to the input unlabeled sample;
if all N elements of the class probability vector p finally output for an input image are smaller than 0.8, the image is judged to be a pseudo image and discarded; if there are elements greater than 0.8 in p, the class corresponding to the maximum element is selected and its label is assigned to the input image.
CN202110783193.2A 2021-07-12 2021-07-12 Radar image target identification method for generating countermeasure network based on condition of multiple discriminators Active CN113537031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110783193.2A CN113537031B (en) 2021-07-12 2021-07-12 Radar image target identification method for generating countermeasure network based on condition of multiple discriminators

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110783193.2A CN113537031B (en) 2021-07-12 2021-07-12 Radar image target identification method for generating countermeasure network based on condition of multiple discriminators

Publications (2)

Publication Number Publication Date
CN113537031A CN113537031A (en) 2021-10-22
CN113537031B true CN113537031B (en) 2023-04-07

Family

ID=78127391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110783193.2A Active CN113537031B (en) 2021-07-12 2021-07-12 Radar image target identification method for generating countermeasure network based on condition of multiple discriminators

Country Status (1)

Country Link
CN (1) CN113537031B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897901B (en) * 2022-07-13 2022-11-01 东声(苏州)智能科技有限公司 Battery quality detection method and device based on sample expansion and electronic equipment
CN116363329B (en) * 2023-03-08 2023-11-03 广州中望龙腾软件股份有限公司 Three-dimensional image generation method and system based on CGAN and LeNet-5
CN116106833B (en) * 2023-04-12 2023-07-04 中南大学 Deep learning-based processing method and system for restraining surface layer steel bar echo
CN116563169B (en) * 2023-07-07 2023-09-05 成都理工大学 Ground penetrating radar image abnormal region enhancement method based on hybrid supervised learning
CN116681790B (en) * 2023-07-18 2024-03-22 脉得智能科技(无锡)有限公司 Training method of ultrasound contrast image generation model and image generation method

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764173A (en) * 2018-05-31 2018-11-06 西安电子科技大学 The hyperspectral image classification method of confrontation network is generated based on multiclass
CN108805188A (en) * 2018-05-29 2018-11-13 徐州工程学院 A kind of feature based recalibration generates the image classification method of confrontation network
CN109145992A (en) * 2018-08-27 2019-01-04 西安电子科技大学 Cooperation generates confrontation network and sky composes united hyperspectral image classification method
CN109766835A (en) * 2019-01-11 2019-05-17 西安电子科技大学 The SAR target identification method of confrontation network is generated based on multi-parameters optimization
CN110110745A (en) * 2019-03-29 2019-08-09 上海海事大学 Based on the semi-supervised x-ray image automatic marking for generating confrontation network
CN110689086A (en) * 2019-10-08 2020-01-14 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN111027439A (en) * 2019-12-03 2020-04-17 西北工业大学 SAR target recognition method for generating countermeasure network based on auxiliary classification
CN111037365A (en) * 2019-12-26 2020-04-21 大连理工大学 Cutter state monitoring data set enhancing method based on generative countermeasure network
CN111428758A (en) * 2020-03-06 2020-07-17 重庆邮电大学 Improved remote sensing image scene classification method based on unsupervised characterization learning
CN112232395A (en) * 2020-10-08 2021-01-15 西北工业大学 Semi-supervised image classification method for generating confrontation network based on joint training
CN112529806A (en) * 2020-12-15 2021-03-19 哈尔滨工程大学 SAR image data enhancement method based on generation of countermeasure network information maximization
CN113095218A (en) * 2021-04-09 2021-07-09 西北工业大学 Hyperspectral image target detection algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11003995B2 (en) * 2017-05-19 2021-05-11 Huawei Technologies Co., Ltd. Semi-supervised regression with generative adversarial networks

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805188A (en) * 2018-05-29 2018-11-13 徐州工程学院 A kind of feature based recalibration generates the image classification method of confrontation network
CN108764173A (en) * 2018-05-31 2018-11-06 西安电子科技大学 The hyperspectral image classification method of confrontation network is generated based on multiclass
CN109145992A (en) * 2018-08-27 2019-01-04 西安电子科技大学 Cooperation generates confrontation network and sky composes united hyperspectral image classification method
CN109766835A (en) * 2019-01-11 2019-05-17 西安电子科技大学 The SAR target identification method of confrontation network is generated based on multi-parameters optimization
CN110110745A (en) * 2019-03-29 2019-08-09 上海海事大学 Based on the semi-supervised x-ray image automatic marking for generating confrontation network
CN110689086A (en) * 2019-10-08 2020-01-14 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN111027439A (en) * 2019-12-03 2020-04-17 西北工业大学 SAR target recognition method for generating countermeasure network based on auxiliary classification
CN111037365A (en) * 2019-12-26 2020-04-21 大连理工大学 Cutter state monitoring data set enhancing method based on generative countermeasure network
CN111428758A (en) * 2020-03-06 2020-07-17 重庆邮电大学 Improved remote sensing image scene classification method based on unsupervised characterization learning
CN112232395A (en) * 2020-10-08 2021-01-15 西北工业大学 Semi-supervised image classification method for generating confrontation network based on joint training
CN112529806A (en) * 2020-12-15 2021-03-19 哈尔滨工程大学 SAR image data enhancement method based on generation of countermeasure network information maximization
CN113095218A (en) * 2021-04-09 2021-07-09 西北工业大学 Hyperspectral image target detection algorithm

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Baiying Lei et al. Skin lesion segmentation via generative adversarial networks with dual discriminators. Medical Image Analysis. 2020, vol. 64, 1-12. *
Changjie Cao et al. Cost-Sensitive Awareness-Based SAR Automatic Target Recognition for Imbalanced Data. IEEE Transactions on Geoscience and Remote Sensing. 2021, vol. 60, 1-16. *
Jiayi Ma et al. DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion. IEEE Transactions on Image Processing. 2020, vol. 29, 4980-4995. *
Miao Yang et al. Underwater image enhancement based on conditional generative adversarial network. Signal Processing: Image Communication. 2019, vol. 81, 1-9. *
刘汝佳. Research on Multi-Discriminator Generative Adversarial Networks Based on Random Dropout. China Masters' Theses Full-text Database, Information Science and Technology. 2020, I138-1259. *
范斯诺. Video Prediction and Anomaly Detection Methods Based on Deep Learning. China Masters' Theses Full-text Database, Information Science and Technology. 2021, I138-744. *

Also Published As

Publication number Publication date
CN113537031A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN113537031B (en) Radar image target identification method for generating countermeasure network based on condition of multiple discriminators
CN110472627B (en) End-to-end SAR image recognition method, device and storage medium
Gu et al. Stack-captioning: Coarse-to-fine learning for image captioning
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
CN114492574A (en) Pseudo label loss unsupervised countermeasure domain adaptive picture classification method based on Gaussian uniform mixing model
CN111126134A (en) Radar radiation source deep learning identification method based on non-fingerprint signal eliminator
CN111783841A (en) Garbage classification method, system and medium based on transfer learning and model fusion
CN111986125A (en) Method for multi-target task instance segmentation
CN111126226A (en) Radiation source individual identification method based on small sample learning and feature enhancement
CN114926693A (en) SAR image small sample identification method and device based on weighted distance
CN111986126A (en) Multi-target detection method based on improved VGG16 network
Ye et al. Recognition algorithm of emitter signals based on PCA+ CNN
Zhu Selection of multi-level deep features via spearman rank correlation for synthetic aperture radar target recognition using decision fusion
CN117237733A (en) Breast cancer full-slice image classification method combining self-supervision and weak supervision learning
Li et al. Transductive distribution calibration for few-shot learning
Tian et al. SAR object classification using the DAE with a modified triplet restriction
Liang et al. Adaptive multiple kernel fusion model using spatial-statistical information for high resolution SAR image classification
CN116630816B (en) SAR target recognition method, device, equipment and medium based on prototype comparison learning
CN109993050B (en) Synthetic aperture radar image identification method
CN116630694A (en) Target classification method and system for partial multi-label images and electronic equipment
CN116895016A (en) SAR image ship target generation and classification method
CN116340846A (en) Aliasing modulation signal identification method for multi-example multi-label learning under weak supervision
CN116030300A (en) Progressive domain self-adaptive recognition method for zero-sample SAR target recognition
Huan et al. SAR multi‐target interactive motion recognition based on convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant