CN111767861A - SAR image target identification method based on multi-discriminator generation countermeasure network - Google Patents


Info

Publication number
CN111767861A
CN111767861A
Authority
CN
China
Prior art keywords
discriminator
generator
unit
sample
training
Prior art date
Legal status
Granted
Application number
CN202010614959.XA
Other languages
Chinese (zh)
Other versions
CN111767861B (en)
Inventor
袁瑛
毛涵秋
冯玉尧
Current Assignee
Suzhou Xingzhao Defense Research Institute Co ltd
Original Assignee
Suzhou Xingzhao Defense Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Xingzhao Defense Research Institute Co ltd
Priority to CN202010614959.XA
Publication of CN111767861A
Application granted
Publication of CN111767861B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The invention relates to an SAR image target recognition method based on a multi-discriminator generative adversarial network, comprising the following steps: obtaining original real samples and generating a training sample data set based on a multi-discriminator generative adversarial network model; training a convolutional neural network with the training sample data set to obtain a trained target recognition model; and inputting an SAR image to be detected into the target recognition model, which outputs the corresponding target recognition result through feature extraction and feature matching. Compared with the prior art, the invention provides a multi-discriminator generative adversarial network model for producing a high-quality, high-stability training sample data set: the generator is updated through a training mode of joint multi-discriminator feedback, in which a dynamically adjusted selection function fuses the output results of the individual discriminators. This ensures training stability and generated-sample quality, improves the reliability of the subsequent target recognition model training, and thereby improves the accuracy of SAR image target recognition.

Description

SAR image target identification method based on multi-discriminator generation countermeasure network
Technical Field
The invention relates to the technical field of SAR image target identification, in particular to an SAR image target identification method based on a multi-discriminator generation countermeasure network.
Background
An SAR image is produced by an SAR (Synthetic Aperture Radar) system. SAR is an advanced active microwave earth observation device with a certain penetration capability: it can obtain images similar to optical photographs and can effectively detect camouflaged targets. SAR image target recognition is therefore of great significance to national economy and military applications.
At present, target recognition of SAR images is mostly based on deep-learning techniques, which place high requirements on the quantity and quality of training data; however, existing SAR target image samples are limited in number and difficult to obtain, so target recognition models are hard to train sufficiently. The recently proposed generative adversarial network (GAN) is one of the most effective data augmentation algorithms, and applying GAN to the augmentation of SAR image training samples can increase the amount of training data to a certain extent. However, the traditional GAN model has poor training robustness, which makes the quality of the generated training data unstable, affects the training of the subsequent target recognition model, and cannot guarantee the accuracy of SAR image target recognition.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an SAR image target recognition method based on a multi-discriminator generative adversarial network. By improving the traditional GAN model, a Multi-Discriminator Generative Adversarial Network (MD-GAN) is proposed to fully improve the quantity and quality of the generated training data, thereby ensuring reliable training of the subsequent target recognition model and the accuracy of SAR image target recognition.
The purpose of the invention can be realized by the following technical scheme: a SAR image target recognition method based on a multi-discriminator generation countermeasure network comprises the following steps:
s1, obtaining original real samples, and generating a training sample data set based on a multi-discriminator generative adversarial network model;
s2, training the convolutional neural network by utilizing the training sample data set to obtain a trained target recognition model;
and S3, inputting the SAR image to be detected to the target recognition model, and outputting a target recognition result corresponding to the SAR image to be detected through feature extraction and feature matching.
Further, the multi-discriminator generative adversarial network model in step S1 comprises a generator, a multi-discriminator unit and a judging unit, wherein the input of the generator is random noise and the input of the multi-discriminator unit is the original real samples; the output end of the generator is connected to the multi-discriminator unit, the multi-discriminator unit is connected to the input end of the judging unit, and the output end of the judging unit is connected to the generator and the multi-discriminator unit respectively; the generator generates false samples for the multi-discriminator unit according to the random noise;
the multi-discriminator unit is used for discriminating between the false samples and the original real samples and outputting a feedback result;
and the judging unit is used for judging whether the feedback result is accurate and updating the generator and the multi-discriminator unit respectively according to the judgment.
Further, the multi-discriminator unit comprises a plurality of parallel discriminators, the inputs of which are the false samples and the original real samples; the output ends of the discriminators are connected to a dynamic adjustment module, which is connected to the judging unit and dynamically selects among the decision results output by the discriminators to obtain the feedback result.
Further, the step S1 specifically includes the following steps:
s11, acquiring an original real sample, and inputting the original real sample to a plurality of discriminators;
s12, inputting random noise to a generator, generating corresponding false samples by the generator, and inputting the false samples to a plurality of discriminators;
s13, according to the false samples and the original real samples, the plurality of discriminators correspondingly output a plurality of decision results, each decision result being the degree of difference between the false sample and the original real sample;
s14, according to the dynamic selection function, the dynamic adjustment module performs dynamic selection processing on the plurality of judgment results to obtain feedback results through screening;
s15, the judging unit judges the accuracy of the feedback result, so that the generator and the multi-discriminator unit are updated respectively, and the process returns to step S11; the generator is trained according to the objective function of the multi-discriminator generative adversarial network, based on the feedback result of the multi-discriminator unit, and the trained generator is obtained after a preset number of iterations;
and S16, inputting random noise to the trained generator to generate a training sample data set.
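As a hedged illustration, the generation loop of steps S11-S16 can be sketched with toy linear networks in numpy; the generator, discriminators and dimensions here are assumptions for illustration only, not the networks of the invention:

```python
# Toy sketch of one joint-feedback round of steps S11-S16 with k
# parallel discriminators. Linear "networks" stand in for the real
# generator and discriminators (an assumption for illustration).
import numpy as np

rng = np.random.default_rng(0)
k, noise_dim, sample_dim = 3, 8, 8

gen_w = rng.normal(size=(noise_dim, sample_dim))          # toy generator
disc_w = [rng.normal(size=sample_dim) for _ in range(k)]  # k toy discriminators

def generate(z):
    return np.tanh(z @ gen_w)                # S12: false sample from noise

def judge(x, w):
    return 1.0 / (1.0 + np.exp(-(x @ w)))    # S13: each D_i scores a sample

def feedback(scores, lam):
    # S14: dynamic selection; softmax weights interpolate mean -> max.
    e = np.exp(lam * (scores - scores.max()))
    return float(np.sum(e / e.sum() * scores))

z = rng.normal(size=noise_dim)               # S12: input random noise
fake = generate(z)
scores = np.array([judge(fake, w) for w in disc_w])
fb = feedback(scores, lam=0.0)               # lam -> 0 behaves like mean
print(fb)
```

The feedback value `fb` is what the judging unit would evaluate before updating the generator and the discriminators (step S15).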
Further, the dynamic selection function includes a mean function and a maximum function.
Further, the dynamic selection function is specifically:
F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i · V(G, D_i),  with  ω_i = e^{λ·V(G, D_i)} / Σ_{j=1}^{k} e^{λ·V(G, D_j)}

where F_C(·) is the dynamic selection function, which can take the mean function mean(·) or the maximum function max(·); λ ∈ [0, +∞) is the dynamic adjustment parameter controlling the form taken by F_C(·): λ → 0 and λ → +∞ correspond to mean(·) and max(·), respectively.
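One possible concrete realization of such a dynamic selection function is a softmax-weighted average, which reduces to mean(·) as λ → 0 and approaches max(·) as λ grows; the numpy sketch below is an illustrative assumption, not the patent's implementation:

```python
# Softmax-weighted averaging as a dynamic selection function F_C over
# the k discriminator outputs V(G, D_i); lam interpolates mean -> max.
import numpy as np

def choose(values, lam):
    """F_C over discriminator outputs; lam in [0, +inf)."""
    v = np.asarray(values, dtype=float)
    w = np.exp(lam * (v - v.max()))   # numerically stable softmax weights
    w /= w.sum()
    return float(w @ v)

v = [0.2, 0.5, 0.9]
print(choose(v, lam=0.0))   # lam -> 0: behaves like mean(v)
print(choose(v, lam=1e3))   # large lam: approaches max(v)
```

Note the design choice of subtracting `v.max()` before exponentiating, which keeps the weights finite even for very large λ.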
Further, the objective function of the multi-discriminator generative adversarial network is:

min_G max_{D_1, ..., D_k} F_C(V(G, D_1), V(G, D_2), ..., V(G, D_k))

wherein G denotes the generator, D denotes a discriminator, and V is the value function characterizing the discriminating performance of a discriminator; V(G, D_1) is the decision result of the first discriminator in the multi-discriminator unit, i.e. the degree of difference between the false sample and the original real sample as judged by the first discriminator, and so on: V(G, D_i) is the decision result of the i-th discriminator, i.e. the degree of difference between the false sample and the original real sample as judged by the i-th discriminator;
k is the total number of discriminators in the multi-discriminator unit;
max_{D} V(G, D) means that, for a fixed generator G, the discriminator D distinguishes the false samples from the original real samples to the greatest extent;
min_{G} max_{D} V(G, D) means that, with the discriminator D fixed, a generator G is obtained that minimizes the difference between the output false samples and the original real samples.
Further, the specific process by which the generator is trained based on the feedback result of the multi-discriminator unit in step S15 is as follows:
s151, first set the dynamic adjustment parameter λ to λ → 0, i.e. the dynamic selection function adopts F_C(·) = mean(·), so that the generator is trained based on the overall average feedback result output by all the discriminators;
s152, when the feedback results of the discriminators are judged to be true by the judging unit N times in succession, set the dynamic adjustment parameter λ to λ → +∞, i.e. the dynamic selection function adopts F_C(·) = max(·), so that the generator is trained based on the feedback result output by the best-optimized discriminator until the preset number of iterations is reached and the trained generator is obtained, where N ≥ 5.
Further, when the generator is trained based on the feedback result output by the best-optimized discriminator in step S152, an arithmetic mean method is adopted to prevent the generator gradient from vanishing due to taking the maximum over the plurality of discriminators; the mathematical expression of the arithmetic mean method is:

F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i · V(G, D_i),  ω_i = e^{λ·V(G, D_i)} / Σ_{j=1}^{k} e^{λ·V(G, D_j)}

where ω_i is the arithmetic weight of the i-th discriminator in the multi-discriminator unit.
Further, the training formula when the generator generates the false samples is:

min_G (1/k) Σ_{i=1}^{k} E_{z~P_G}[log(1 - D_i(G(z)))]

where z is the input random noise, G(z) is the false sample generated by the generator, D_i(G(z)) is the probability that the i-th discriminator in the multi-discriminator unit judges the false sample to be true, 1 - D_i(G(z)) is the probability that the i-th discriminator judges the false sample to be false, E denotes expectation, and P_G is the data distribution generated by the generator;
the gradient of the generator is specifically:

∇_G (1/k) Σ_{i=1}^{k} E_{z~P_G}[log(1 - D_i(G(z)))] = (1/k) E_{z~P_G}[∇_G log P̄(z)]

where P̄(z) = Π_{i=1}^{k} (1 - D_i(G(z))) is the cumulative probability that all the discriminators in the multi-discriminator unit decide the false sample to be false. The generator gradient minimum is obtained at P̄(z) → 1, i.e. when D_i(G(z)) → 0 for every i, at which point the gradient is 0. Thus, gradient vanishing only occurs when all the discriminators decide a false sample to be false, which enables the generator to receive more positive feedback from the multiple discriminators.
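The vanishing-gradient claim can be illustrated numerically with toy one-dimensional discriminators (an assumption for demonstration, not the patent's networks): the averaged generator loss keeps a usable gradient as long as at least one discriminator still responds to the false sample:

```python
# Numeric check: with the averaged loss (1/k) * sum_i log(1 - D_i(G(z))),
# the generator gradient only collapses when every discriminator rejects
# the false sample (all D_i(G(z)) -> 0). Toy 1-D "discriminators" here
# are sigmoids with fixed weights.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def gen_loss(x, disc_w):
    # Each toy discriminator scores the scalar "sample" x.
    d = np.array([sigmoid(w * x) for w in disc_w])
    return np.mean(np.log(1.0 - d))

def grad(x, disc_w, eps=1e-6):
    # Central finite difference of the averaged generator loss.
    return (gen_loss(x + eps, disc_w) - gen_loss(x - eps, disc_w)) / (2 * eps)

all_reject = [-50.0, -50.0, -50.0]   # every D_i(x) ~ 0 for x > 0
one_accepts = [-50.0, -50.0, 5.0]    # one D_i still responds

print(abs(grad(1.0, all_reject)))    # ~0: gradient vanishes
print(abs(grad(1.0, one_accepts)))   # clearly nonzero
```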
Compared with the prior art, the invention has the following advantages:
the invention improves the structure of the traditional GAN model, changes the original single discriminator structure into a training mode of multi-discriminator combined feedback, utilizes a selection function to fuse the output results of each discriminator to complete the updating training of a generator, and dynamically adjusts the type of the selection function through parameter self-adaption, thereby effectively improving the training stability and the generated sample quality of the GAN model, realizing the high-quality and high-stability data expansion of an SAR training sample data set, better training the subsequent target recognition model and further improving the target recognition accuracy of the SAR image.
Secondly, the invention adopts a dynamically adjustable parameter to switch the type of the dynamic selection function. On the one hand, the maximum function can be used to address the non-convexity of V(G, D) caused by the deep network structure, while the arithmetic mean method avoids generator gradient vanishing; on the other hand, the mean function can effectively reduce the variance of the feedback provided by the multiple discriminators to the generator, which greatly improves the training stability of the whole MD-GAN model to a certain extent, especially when the discriminators have not yet been trained to the optimum.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of a basic structure of a conventional GAN model;
FIG. 3 is a schematic diagram of a multi-discriminator unit according to the present invention;
FIG. 4 is a schematic diagram of the basic structure of the MD-GAN model of the present invention;
FIG. 5 is a schematic diagram of a process for generating training sample data by the method of the present invention in an embodiment;
FIG. 6 is a diagram of an original real sample in an embodiment;
fig. 7 is a sample example of a training sample data set finally generated in the embodiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
As shown in fig. 1, a method for identifying an SAR image target based on a multi-arbiter generation countermeasure network includes the following steps:
s1, obtaining original real samples, and generating a training sample data set based on a multi-discriminator generative adversarial network model;
s2, training the convolutional neural network by utilizing the training sample data set to obtain a trained target recognition model;
and S3, inputting the SAR image to be detected to the target recognition model, and outputting a target recognition result corresponding to the SAR image to be detected through feature extraction and feature matching.
The invention provides a multi-discriminator generative adversarial network model. On the basis of the alternating training principle of the generator and discriminator in the traditional GAN model, the original single-discriminator structure is changed into a training mode of joint multi-discriminator feedback: a selection function (Choose Function) fuses the output results of the individual discriminators to complete the update of the generator, and the type of the selection function is dynamically adjusted through parameter self-adaptation. This avoids problems such as gradient vanishing and mode collapse during training and improves the training stability of the GAN model and the quality of the generated samples. The structure of the traditional GAN model is shown in Fig. 2: it is a deep neural network composed of two alternately trained multilayer perceptron (MLP) networks, and its basic framework comprises a generator G (Generator) and a discriminator D (Discriminator). G continuously learns the feature distribution of the real samples, aiming to convert the initially input random noise into false samples as similar as possible to the real sample data; the discriminator D judges whether an input sample image is a real sample, aiming to accurately distinguish real samples from the false samples generated by the generator G. The optimization process of the whole GAN network can be regarded as a maximin game problem, namely
min_G max_D V(G, D) = E_{x~P_data}[log D(x)] + E_{z~P_z}[log(1 - D(G(z)))]
In the alternating training, the two models G and D are optimized simultaneously, finally reaching a Nash Equilibrium between G and D: the generator G generates false samples with extremely high similarity to the real samples, and the discriminator D can no longer accurately distinguish the real samples from the false samples generated by G.
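For reference, the value function V(G, D) of the classical single-discriminator GAN can be estimated from mini-batches; the linear toy discriminator and stand-in distributions below are assumptions for illustration only:

```python
# Mini-batch estimate of the classical GAN value function
# V(G, D) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], with a linear toy
# discriminator and toy "real"/"fake" batches (all assumptions).
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def value(d_w, real, fake):
    # Empirical V(G, D): D maximizes this, G minimizes the second term.
    return (np.mean(np.log(sigmoid(real @ d_w)))
            + np.mean(np.log(1.0 - sigmoid(fake @ d_w))))

d_w = 0.1 * rng.normal(size=4)             # small weights keep logs finite
real = rng.normal(loc=2.0, size=(64, 4))   # stand-in "real samples"
fake = rng.normal(loc=0.0, size=(64, 4))   # stand-in generator output
v = value(d_w, real, fake)
print(v)
```

Both expectation terms are logs of probabilities, so the estimate is always negative; its upper bound 0 would require a perfect discriminator.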
In the MD-GAN model proposed by the present invention (shown in detail in Fig. 3 and Fig. 4), a plurality of discriminators {D_1, D_2, ..., D_k} receive in parallel the real samples and the false samples output by the generator G, and the generator G is trained based on the aggregated feedback results of all the discriminators. The objective function of MD-GAN is:
min_G max_{D_1, ..., D_k} F_C(V(G, D_1), V(G, D_2), ..., V(G, D_k))

where F_C(·) denotes the selection function (Choose Function) used to process the outputs of the plural discriminators. During MD-GAN training, each discriminator D_i maximizes its corresponding V(G, D_i), while the optimization goal of the generator G changes dynamically with the form of F_C(·), which can take the maximum function or the mean function: when F_C(·) = max(·), the generator trains synchronously with the fastest-optimizing discriminator; when F_C(·) = mean(·), the generator is trained based on the overall average output of all the discriminators.
In the MD-GAN framework, if the selection function is set to F_C(·) = max(·), then for a fixed generator G, solving the maximum of F_C(V(G, D_1), V(G, D_2), ..., V(G, D_k)) is equivalent, after random initialization, to optimizing max_i V(G, D_i) as the generator loss, which effectively alleviates the non-convexity of V(G, D) caused by the deep network structure. At the same time, during training the generator G must minimize max_i V(G, D_i), which forces G to generate high-similarity false samples that can pass the discrimination of all the discriminators, thereby enhancing the data generation capability of the network.
However, keeping the generator continuously trained in sync with the best-trained discriminator D is highly likely to hinder the learning of the generator G. In practice, optimization against max_i V(G, D_i) is usually difficult to converge, because computing the maximum over multiple discriminators makes the requirements on generator training too demanding: it is hard for the generator to produce samples that satisfy the criteria of all the discriminators (i.e., the gradient disappears). Therefore, in the MD-GAN model, the Arithmetic Mean, one of the three Pythagorean Means, is introduced to weaken the influence on the generator of taking the maximum over multiple discriminators; its mathematical expression is:

F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i · V(G, D_i)

where ω_i = e^{λ·V(G, D_i)} / Σ_{j=1}^{k} e^{λ·V(G, D_j)} represents the arithmetic weight, and λ ∈ [0, +∞) is a dynamically adjustable parameter controlling the form taken by F_C(·): λ → 0 and λ → +∞ correspond to mean(·) and max(·), respectively. Thus, when λ = 0, the training formula of the generator G after introducing the arithmetic mean can be written as:
min_G (1/k) Σ_{i=1}^{k} E_{z~P_G}[log(1 - D_i(G(z)))]

Letting P̄(z) = Π_{i=1}^{k} (1 - D_i(G(z))), the generator gradient can be expressed as:

∇_G (1/k) Σ_{i=1}^{k} E_{z~P_G}[log(1 - D_i(G(z)))] = (1/k) E_{z~P_G}[∇_G log P̄(z)]

When P̄(z) → 1, i.e. D_i(G(z)) → 0 for every i, the gradient of the generator G attains its minimum (at this point the gradient is 0). Thus, gradient vanishing is only likely to occur when all the discriminators decide that the generated samples are false, which enables the generator to receive more positive feedback from the multiple discriminators. In addition, the overall averaging operation effectively reduces the variance of the feedback provided by the multiple discriminators to the generator, greatly improving the training stability of the system to a certain extent, especially when the discriminators have not yet been trained to the optimum.
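The variance-reduction effect of the overall averaging can be checked numerically under an assumed noise model for the discriminator feedback (the distributions here are illustrative, not from the patent):

```python
# Averaging the feedback of k discriminators reduces its variance
# roughly by a factor of k when the per-discriminator noise is
# independent (an assumed noise model for illustration).
import numpy as np

rng = np.random.default_rng(7)
k, rounds = 5, 10000

# Simulated noisy feedback V(G, D_i) from k discriminators per round.
feedback = 0.5 + 0.1 * rng.standard_normal((rounds, k))

single = feedback[:, 0]            # feedback from one discriminator
averaged = feedback.mean(axis=1)   # mean over the multi-discriminator unit
print(single.var(), averaged.var())
```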
In specific application, in the initial stage the parameter λ is first set to λ → 0, i.e. the selection function adopts F_C(·) = mean(·), so that the network trains relatively smoothly; as training progresses and the ability of the discriminators gradually increases, λ is increased so that the selection function shifts toward F_C(·) = max(·) and the generator is trained more precisely.
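The λ schedule described here (and in steps S151-S152) can be sketched as follows; the accept-history input stands in for the judging unit's consecutive "true" decisions and is an assumption for illustration:

```python
# Sketch of the lambda schedule: start with lam near 0 (mean-like
# feedback) and switch permanently toward max-like feedback once the
# judging unit has accepted the feedback n_required times in a row.

def schedule(accepts, n_required=5, lam_small=0.0, lam_large=1e3):
    """Return lam for each round given a boolean accept history."""
    lams, streak, switched = [], 0, False
    for ok in accepts:
        streak = streak + 1 if ok else 0        # consecutive accepts
        switched = switched or streak >= n_required
        lams.append(lam_large if switched else lam_small)
    return lams

history = [True, True, False, True, True, True, True, True, True]
print(schedule(history))  # switches to lam_large after 5 consecutive accepts
```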
In this embodiment, the number of iterations of the MD-GAN model is set to 50, 100, 300, 400 and 500 respectively, and the corresponding generated sample data are shown in Fig. 5. At 500 iterations, the generated sample data are already very close to the original real sample data (shown in Fig. 6). Starting from limited original real sample data, this embodiment shows that with the MD-GAN model provided by the invention the number and quality of samples in the generated training sample data set (shown in Fig. 7) can be effectively expanded and improved. As can be seen from Fig. 6 and Fig. 7, the MD-GAN model provided by the invention is a GAN sample generation technique combined with dynamic joint training of multiple discriminators: it solves the problem that deep classification models cannot be sufficiently trained due to insufficient real SAR samples, and guarantees the quality of the generated samples while realizing stable training of the generative model, thereby ensuring the reliability of subsequent target recognition model training and improving the accuracy of SAR image target recognition.

Claims (10)

1. A SAR image target recognition method based on a multi-discriminator generation countermeasure network is characterized by comprising the following steps:
s1, obtaining original real samples, and generating a training sample data set based on a multi-discriminator generative adversarial network model;
s2, training the convolutional neural network by utilizing the training sample data set to obtain a trained target recognition model;
and S3, inputting the SAR image to be detected to the target recognition model, and outputting a target recognition result corresponding to the SAR image to be detected through feature extraction and feature matching.
2. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 1, wherein the multi-discriminator generative adversarial network model in step S1 comprises a generator, a multi-discriminator unit and a judging unit, wherein the input of the generator is random noise and the input of the multi-discriminator unit is the original real samples; the output end of the generator is connected to the multi-discriminator unit, the multi-discriminator unit is connected to the input end of the judging unit, and the output ends of the judging unit are connected to the generator and the multi-discriminator unit respectively; the generator generates false samples for the multi-discriminator unit according to the random noise;
the multi-discriminator unit is used for discriminating between the false samples and the original real samples and outputting a feedback result;
and the judging unit is used for judging whether the feedback result is accurate and updating the generator and the multi-discriminator unit respectively according to the judgment.
3. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 2, wherein the multi-discriminator unit comprises a plurality of parallel discriminators, the inputs of which are the false samples and the original real samples; the output ends of the discriminators are connected to a dynamic adjustment module, which is connected to the judging unit and dynamically selects among the decision results output by the discriminators to obtain the feedback result.
4. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 3, wherein step S1 specifically comprises the following steps:
s11, acquiring an original real sample, and inputting the original real sample to a plurality of discriminators;
s12, inputting random noise to a generator, generating corresponding false samples by the generator, and inputting the false samples to a plurality of discriminators;
s13, according to the false samples and the original real samples, the plurality of discriminators correspondingly output a plurality of decision results, each decision result being the degree of difference between the false sample and the original real sample;
s14, according to the dynamic selection function, the dynamic adjustment module performs dynamic selection processing on the plurality of judgment results to obtain feedback results through screening;
s15, the judging unit judges the accuracy of the feedback result, so that the generator and the multi-discriminator unit are updated respectively, and the process returns to step S11; the generator is trained according to the objective function of the multi-discriminator generative adversarial network, based on the feedback result of the multi-discriminator unit, and the trained generator is obtained after a preset number of iterations;
and S16, inputting random noise to the trained generator to generate a training sample data set.
5. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 4, wherein the dynamic selection function comprises a mean function and a maximum function.
6. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 5, wherein the dynamic selection function is specifically:

F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i · V(G, D_i),  with  ω_i = e^{λ·V(G, D_i)} / Σ_{j=1}^{k} e^{λ·V(G, D_j)}

where F_C(·) is the dynamic selection function, which can take the mean function mean(·) or the maximum function max(·); λ ∈ [0, +∞) is the dynamic adjustment parameter controlling the form taken by F_C(·): λ → 0 and λ → +∞ correspond to mean(·) and max(·), respectively.
7. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 6, wherein the objective function of the multi-discriminator generative adversarial network is:

min_G max_{D_1, ..., D_k} F_C(V(G, D_1), V(G, D_2), ..., V(G, D_k)),  i = 1, 2, ..., k

wherein G denotes the generator, D denotes a discriminator, and V is the value function characterizing the discriminating performance of a discriminator; V(G, D_1) is the decision result of the first discriminator in the multi-discriminator unit, i.e. the degree of difference between the false sample and the original real sample as judged by the first discriminator, and so on: V(G, D_i) is the decision result of the i-th discriminator, i.e. the degree of difference between the false sample and the original real sample as judged by the i-th discriminator;
k is the total number of discriminators in the multi-discriminator unit;
max_{D} V(G, D) means that, for a fixed generator G, the discriminator D distinguishes the false samples from the original real samples to the greatest extent;
min_{G} max_{D} V(G, D) means that, with the discriminator D fixed, a generator G is obtained that minimizes the difference between the output false samples and the original real samples.
8. The SAR image target recognition method based on a multi-discriminator generative adversarial network as claimed in claim 7, characterized in that the specific process of training the generator based on the feedback results of the multi-discriminator unit in step S15 is as follows:

S151, first setting the dynamic adjustment parameter λ to λ → 0, i.e. the dynamic selection function adopts $F_C(\cdot) = \mathrm{mean}(\cdot)$, so that the generator is trained on the averaged feedback output of all the discriminators;

S152, when the feedback results of the discriminators are judged to be true by the judging unit N consecutive times, setting the dynamic adjustment parameter λ to λ → +∞, i.e. the dynamic selection function adopts $F_C(\cdot) = \max(\cdot)$, so that the generator is trained on the feedback output of the best-optimized discriminator until a preset number of iterations is reached, yielding the trained generator, wherein N ≥ 5.
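The two-phase schedule of steps S151–S152 can be sketched as a simple state machine over the judging unit's history. The function name, the boolean encoding of "judged true", and the concrete large-λ value are all illustrative assumptions, not from the patent:

```python
def select_lambda(history, n=5, lam_max=1e6):
    """Schedule from S151-S152: start with lambda -> 0 (mean feedback);
    after n consecutive 'judged true' results, switch to a large lambda
    (max feedback). `history` lists the judging unit's boolean results.
    """
    streak = 0
    for judged_true in history:
        streak = streak + 1 if judged_true else 0
        if streak >= n:
            return lam_max  # F_C(.) = max(.) regime
    return 0.0              # F_C(.) = mean(.) regime

print(select_lambda([True] * 3))  # 0.0 — still in the mean regime
print(select_lambda([True] * 5))  # 1000000.0 — switched to the max regime
```

Note the streak resets on any "false" judgment, so only N consecutive successes trigger the switch, matching the claim's wording.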
9. The SAR image target recognition method based on a multi-discriminator generative adversarial network as claimed in claim 8, characterized in that in step S152, when the generator is trained based on the feedback result output by the best-optimized discriminator, an arithmetic averaging method is adopted to avoid generator gradient vanishing caused by taking the maximum over the plurality of discriminators, the mathematical expression of the adopted arithmetic averaging method being:

$$F_C\big(V(G,D_1), \dots, V(G,D_k)\big) = \sum_{i=1}^{k} \omega_i\, V(G,D_i)$$

$$\omega_i = \frac{1}{k}$$

wherein $\omega_i$ is the arithmetic weight of the i-th discriminator in the multi-discriminator unit.
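The motivation for the arithmetic weights can be seen by comparing the combination weights themselves: a hard max back-propagates through only one discriminator (zero weight on the rest), while $\omega_i = 1/k$ keeps every discriminator's feedback in the generator update. A sketch, with illustrative names:

```python
import numpy as np

def generator_feedback(values, use_mean=True):
    """Combine k discriminator value terms for the generator update.

    use_mean=True: arithmetic weights w_i = 1/k (claim 9);
    use_mean=False: hard max, so only the top discriminator gets weight.
    Returns the combined value and the weight vector.
    """
    v = np.asarray(values, float)
    k = len(v)
    if use_mean:
        weights = np.full(k, 1.0 / k)
    else:
        weights = (v == v.max()).astype(float)  # single active weight
    return float(np.dot(weights, v)), weights

_, w = generator_feedback([0.2, 0.5, 0.9], use_mean=True)
print(w)      # [1/3, 1/3, 1/3] — all discriminators contribute
_, w_max = generator_feedback([0.2, 0.5, 0.9], use_mean=False)
print(w_max)  # [0., 0., 1.] — only the best discriminator contributes
```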
10. The SAR image target recognition method based on a multi-discriminator generative adversarial network as claimed in claim 9, characterized in that the training formula when the generator generates false samples is:

$$\min_{G} \sum_{i=1}^{k} \mathbb{E}_{G(z)\sim P_G}\big[\log\big(1 - D_i(G(z))\big)\big]$$

where z is the input random noise, G(z) is the false sample generated by the generator, $D_i(G(z))$ is the probability that the i-th discriminator in the multi-discriminator unit judges the false sample to be true, $1 - D_i(G(z))$ is the probability that the i-th discriminator in the multi-discriminator unit judges the false sample to be false, E denotes expectation, and $P_G$ is the data distribution generated by the generator;

the gradient of the generator is specifically:

$$\nabla_G \sum_{i=1}^{k} \log\big(1 - D_i(G(z))\big) = -\sum_{i=1}^{k} \frac{\nabla_G D_i(G(z))}{1 - D_i(G(z))}$$

wherein $\prod_{i=1}^{k}\big(1 - D_i(G(z))\big)$ is the cumulative value of the probabilities with which all the discriminators in the multi-discriminator unit judge the false sample to be false; the minimum of the generator gradient is obtained at $\prod_{i=1}^{k}\big(1 - D_i(G(z))\big) = 1$, at which point $D_i(G(z)) = 0$ for every i; thus gradient vanishing occurs only when all the discriminators judge the false sample to be false, which enables the generator to receive more positive feedback from the multiple discriminators.
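The multi-discriminator generator loss and its per-discriminator chain-rule coefficients can be checked numerically. This sketch uses the $1/k$ arithmetic weights of claim 9; the function names are illustrative and the coefficients shown are the factors multiplying each $\nabla D_i$, not a full parameter gradient:

```python
import numpy as np

def generator_loss(d_fake, eps=1e-12):
    """Generator objective over k discriminators with weights 1/k:
    (1/k) * sum_i log(1 - D_i(G(z))), to be minimized."""
    d = np.clip(np.asarray(d_fake, float), 0.0, 1.0 - eps)
    return float(np.mean(np.log(1.0 - d)))

def generator_grad_coeffs(d_fake, eps=1e-12):
    """Chain-rule coefficients d(loss)/d(D_i) = -1 / (k * (1 - D_i)).
    Each has magnitude at least 1/k, so useful feedback from one
    discriminator survives even when the others reject the sample."""
    d = np.clip(np.asarray(d_fake, float), 0.0, 1.0 - eps)
    return -1.0 / (len(d) * (1.0 - d))

print(generator_loss([0.5, 0.5]))         # log(0.5) ≈ -0.693
print(generator_grad_coeffs([0.0, 0.5]))  # [-0.5, -1.0] for k = 2
```

With k discriminators, the saturation that stalls a single-discriminator GAN requires every $D_i(G(z))$ to collapse to 0 simultaneously, which is the intuition behind the claim's final sentence.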
CN202010614959.XA 2020-06-30 2020-06-30 SAR image target recognition method based on multi-discriminant generation countermeasure network Active CN111767861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010614959.XA CN111767861B (en) 2020-06-30 2020-06-30 SAR image target recognition method based on multi-discriminant generation countermeasure network


Publications (2)

Publication Number Publication Date
CN111767861A true CN111767861A (en) 2020-10-13
CN111767861B CN111767861B (en) 2024-03-12

Family

ID=72723527


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112415514A (en) * 2020-11-16 2021-02-26 北京环境特性研究所 Target SAR image generation method and device
CN112668529A (en) * 2020-12-31 2021-04-16 神思电子技术股份有限公司 Dish sample image enhancement identification method
CN112766348A (en) * 2021-01-12 2021-05-07 云南电网有限责任公司电力科学研究院 Method and device for generating sample data based on antagonistic neural network
CN113066049A (en) * 2021-03-10 2021-07-02 武汉大学 MEMS sensor defect type identification method and system
CN113343124A (en) * 2021-06-21 2021-09-03 中国科学技术大学 Training method, detection method and device for generating confrontation network
CN113485863A (en) * 2021-07-14 2021-10-08 北京航空航天大学 Method for generating heterogeneous unbalanced fault samples based on improved generation countermeasure network
CN113298007B (en) * 2021-06-04 2024-05-03 西北工业大学 Small sample SAR image target recognition method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108399625A (en) * 2018-02-28 2018-08-14 电子科技大学 A kind of SAR image orientation generation method generating confrontation network based on depth convolution
CN108564115A (en) * 2018-03-30 2018-09-21 西安电子科技大学 Semi-supervised polarization SAR terrain classification method based on full convolution GAN
CN109493308A (en) * 2018-11-14 2019-03-19 吉林大学 The medical image synthesis and classification method for generating confrontation network are differentiated based on condition more
CN109785258A (en) * 2019-01-10 2019-05-21 华南理工大学 A kind of facial image restorative procedure generating confrontation network based on more arbiters
CN110516561A (en) * 2019-08-05 2019-11-29 西安电子科技大学 SAR image target recognition method based on DCGAN and CNN
WO2020029356A1 (en) * 2018-08-08 2020-02-13 杰创智能科技股份有限公司 Method employing generative adversarial network for predicting face change


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Bin: "Image Restoration Optimization Algorithms" (《图像复原优化算法》), Beijing: National Defense Industry Press, pages 239-240 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant