CN111767861A - SAR image target recognition method based on a multi-discriminator generative adversarial network - Google Patents
SAR image target recognition method based on a multi-discriminator generative adversarial network
- Publication number
- CN111767861A CN111767861A CN202010614959.XA CN202010614959A CN111767861A CN 111767861 A CN111767861 A CN 111767861A CN 202010614959 A CN202010614959 A CN 202010614959A CN 111767861 A CN111767861 A CN 111767861A
- Authority
- CN
- China
- Prior art keywords
- discriminator
- generator
- unit
- sample
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
Abstract
The invention relates to a SAR image target recognition method based on a multi-discriminator generative adversarial network, comprising the following steps: obtaining original real samples and generating a training sample data set based on a multi-discriminator generative adversarial network model; training a convolutional neural network with the training sample data set to obtain a trained target recognition model; and inputting the SAR image to be detected into the target recognition model, which outputs the corresponding target recognition result through feature extraction and feature matching. Compared with the prior art, the invention provides a multi-discriminator generative adversarial network model for generating a high-quality, high-stability training sample data set. The update training of the generator is completed through joint feedback from multiple discriminators, whose output results are fused by a dynamically adjusted selection function. This ensures training stability and the quality of the generated samples, improves the reliability of the subsequent target recognition model training, and thereby improves the accuracy of SAR image target recognition.
Description
Technical Field
The invention relates to the technical field of SAR image target recognition, in particular to a SAR image target recognition method based on a multi-discriminator generative adversarial network.
Background
A SAR image is produced by a Synthetic Aperture Radar (SAR) system. SAR is an advanced active microwave earth-observation device with a certain penetration capability: it can obtain images similar to optical photographs and effectively detect camouflaged targets. SAR image target recognition is therefore of great significance to the national economy and to military applications.
At present, target recognition for SAR images mostly relies on deep learning techniques, which place high demands on the quantity and quality of training data; however, existing SAR target image samples are limited in number and difficult to obtain, so a target recognition model is hard to train sufficiently. The recently proposed generative adversarial network (GAN) is one of the most effective data augmentation algorithms, and applying GAN technology to augment SAR image training samples can increase the quantity of training data to a certain extent. However, the traditional GAN model has poor training robustness, which makes the quality of the generated training data unstable, affects the training of the subsequent target recognition model, and cannot guarantee the accuracy of SAR image target recognition.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a SAR image target recognition method based on a multi-discriminator generative adversarial network. By improving the traditional GAN model, a Multi-Discriminator Generative Adversarial Network (MD-GAN) is proposed to substantially improve the quantity and quality of the generated training data, thereby ensuring reliable training of the subsequent target recognition model and the accuracy of SAR image target recognition.
The purpose of the invention can be realized by the following technical scheme: a SAR image target recognition method based on a multi-discriminator generative adversarial network, comprising the following steps:
s1, obtaining original real samples, and generating a training sample data set based on a multi-discriminator generative adversarial network model;
s2, training the convolutional neural network by utilizing the training sample data set to obtain a trained target recognition model;
and S3, inputting the SAR image to be detected to the target recognition model, and outputting a target recognition result corresponding to the SAR image to be detected through feature extraction and feature matching.
Further, the multi-discriminator generative adversarial network model in step S1 includes a generator, a multi-discriminator unit and a judging unit, where the input of the generator is random noise, the input of the multi-discriminator unit is the original real samples, the output end of the generator is connected to the multi-discriminator unit, the multi-discriminator unit is connected to the input end of the judging unit, the output end of the judging unit is connected to the generator and the multi-discriminator unit respectively, and the generator generates false samples for the multi-discriminator unit from the random noise;
the multi-discriminator unit is used for discriminating the false samples and the original real samples and outputting feedback results;
and the judging unit is used for judging whether the feedback result is accurate or not and respectively updating the generator and the multi-discriminator unit according to the judging result.
Further, the multi-discriminator unit comprises a plurality of parallel discriminators, the inputs of each discriminator are the false samples and the original real samples, the output ends of the discriminators are all connected to a dynamic adjustment module, the dynamic adjustment module is connected to the judging unit, and the dynamic adjustment module dynamically selects among the decision results output by the discriminators to obtain the feedback result.
Further, the step S1 specifically includes the following steps:
s11, acquiring an original real sample, and inputting the original real sample to a plurality of discriminators;
s12, inputting random noise to a generator, generating corresponding false samples by the generator, and inputting the false samples to a plurality of discriminators;
s13, according to the false sample and the original real sample, a plurality of discriminators respectively output a plurality of judgment results correspondingly, wherein the judgment results are specifically the difference degree between the false sample and the original real sample;
s14, according to the dynamic selection function, the dynamic adjustment module performs dynamic selection processing on the plurality of judgment results to obtain feedback results through screening;
s15, the judging unit judges the accuracy of the feedback result, so that the generator and the multi-discriminator unit are respectively updated, and the process then returns to step S11; according to the objective function of the multi-discriminator generative adversarial network, the generator is trained based on the feedback result of the multi-discriminator unit, and the trained generator is obtained after a preset number of iterations;
and S16, inputting random noise to the trained generator to generate a training sample data set.
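Steps S11–S16 describe a closed training loop whose components are deep networks in the patent. As a purely illustrative, runnable sketch of that data flow — with a stand-in linear generator, fixed weighting factors in place of learned discriminators, and a trivial judging rule, all of which are assumptions rather than the patent's implementation — the loop can be written as:

```python
import random

def generate_training_set(real_samples, k=3, iters=200, n_out=100, seed=0):
    """Illustrative S11-S16 loop: the 'generator' is a linear map, the k
    'discriminators' are fixed weighting factors on the distance to the
    real-sample mean, selection is mean-style, and the 'judging unit'
    always accepts the feedback. Only the data flow mirrors the patent."""
    rng = random.Random(seed)
    gen_scale, gen_shift = 1.0, 0.0                 # stand-in generator parameters
    target = sum(real_samples) / len(real_samples)  # S11: real-sample statistic
    for _ in range(iters):
        z = rng.gauss(0.0, 1.0)                     # S12: random noise
        fake = gen_scale * z + gen_shift            # S12: false sample
        # S13: each 'discriminator' scores the difference from the real data
        diffs = [abs(fake - target) * w for w in (0.8, 1.0, 1.2)][:k]
        feedback = sum(diffs) / len(diffs)          # S14: mean-style selection
        # S15: judging unit accepts; the generator is nudged toward the data
        gen_shift += 0.05 * (target - fake) * min(feedback, 1.0)
    # S16: feed fresh noise through the 'trained' generator
    return [gen_scale * rng.gauss(0.0, 1.0) + gen_shift for _ in range(n_out)]

samples = generate_training_set([2.0, 2.2, 1.8])
```

After the loop, the stand-in generator's output distribution has drifted toward the real-sample statistics, which is the role the generated training sample data set plays in step S2.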
Further, the dynamic selection function includes a mean function and a maximum function.
Further, the dynamic selection function is specifically:

F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i(λ) · V(G, D_i)

wherein F_C(·) is the dynamic selection function, which can take the mean function mean(·) or the maximum function max(·); λ ∈ [0, +∞) is a dynamic adjustment parameter controlling the choice of F_C(·), with the weights ω_i(λ) becoming uniform as λ → 0 (corresponding to mean(·)) and concentrating on the largest V(G, D_i) as λ → +∞ (corresponding to max(·)).
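The patent's exact weight formula is not reproduced in this text; one common choice that satisfies the two stated limits (λ → 0 gives the mean, λ → +∞ gives the maximum) is a softmax-weighted mean. The sketch below uses that assumed form:

```python
import math

def choose(values, lam):
    """Softmax-weighted mean: an assumed concrete form of F_C.
    lam -> 0 reduces to the plain mean; lam -> +inf approaches the max."""
    weights = [math.exp(lam * v) for v in values]
    total = sum(weights)
    return sum((w / total) * v for w, v in zip(weights, values))

scores = [0.2, 0.5, 0.9]              # hypothetical V(G, D_i) values
mean_like = choose(scores, lam=0.0)   # equals mean(scores)
max_like = choose(scores, lam=50.0)   # approaches max(scores)
```

With λ = 0 every discriminator contributes equally; as λ grows, the weight mass shifts onto the best-scoring discriminator, matching the mean(·)/max(·) limits stated above.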
Further, the objective function of the multi-discriminator generative adversarial network is:

min_G max_{D_i} F_C(V(G, D_1), V(G, D_2), ..., V(G, D_k)), i = 1, 2, ..., k

wherein G denotes the generator, D denotes a discriminator, and V is a merit function representing the discriminating performance of a discriminator; V(G, D_1) is the decision result of the first discriminator in the multi-discriminator unit, i.e. the degree of difference between the false samples and the original real samples as judged by the first discriminator, and so on, so that V(G, D_i) is the decision result of the i-th discriminator in the multi-discriminator unit, i.e. the degree of difference between the false samples and the original real samples as judged by the i-th discriminator;

k is the total number of discriminators in the multi-discriminator unit;

max_{D_i} means that, for the fixed generator G, the discriminators can maximally distinguish the false samples from the original real samples, and min_G means that, with the discriminators fixed, a generator G is obtained that minimizes the difference between the output false samples and the original real samples.
Further, the specific process by which the generator is trained based on the feedback result of the multi-discriminator unit in step S15 is as follows:

s151, first set the dynamic adjustment parameter λ as λ → 0, i.e. the dynamic selection function adopts F_C(·) = mean(·), so that the generator is trained based on the overall average feedback result output by all the discriminators;

s152, when the judging unit judges the feedback result of the discriminators to be true N consecutive times, set the dynamic adjustment parameter λ as λ → +∞, i.e. the dynamic selection function adopts F_C(·) = max(·), so that the generator is trained based on the feedback result output by the best-optimized discriminator until the preset number of iterations is reached and the trained generator is obtained, where N ≥ 5.
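The two-stage schedule of S151/S152 can be sketched as a small helper that tracks consecutive acceptances by the judging unit and switches λ once the threshold N is reached (the concrete value λ = 50 standing in for λ → +∞ is an assumption):

```python
def lambda_schedule(judgements, n=5, lam_max=50.0):
    """Sketch of S151/S152: lam starts at 0 (mean-style feedback) and jumps
    to a large value (max-style feedback) once the judging unit has judged
    the feedback to be true n >= 5 times in a row; it then stays there for
    the remaining iterations, per S152. `judgements` is one bool per step."""
    lam = 0.0
    streak = 0
    history = []
    for ok in judgements:
        streak = streak + 1 if ok else 0
        if streak >= n:
            lam = lam_max
        history.append(lam)
    return history

schedule = lambda_schedule([True] * 5 + [False, True])
```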
Further, when the generator is trained based on the feedback result output by the best-optimized discriminator in step S152, an arithmetic mean method is specifically adopted to prevent the generator gradient from vanishing as a result of solving the maximum over the plurality of discriminators, where the mathematical expression of the adopted arithmetic mean method is:

F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i · V(G, D_i)

wherein ω_i is the arithmetic weight of the i-th discriminator in the multi-discriminator unit.
Further, the training formula when the generator generates false samples is:

L_G = (1/k) · Σ_{i=1}^{k} E_{z~P_G}[log(1 − D_i(G(z)))]

where z is the input random noise, G(z) is the false sample generated by the generator, D_i(G(z)) is the probability value with which the i-th discriminator in the multi-discriminator unit judges the false sample to be true, 1 − D_i(G(z)) is the probability value with which the i-th discriminator in the multi-discriminator unit judges the false sample to be false, E is the expectation, and P_G is the generated data distribution of the generator;

the gradient of the generator is specifically:

∇L_G = (1/k) · Σ_{i=1}^{k} E_{z~P_G}[∇ log(1 − D_i(G(z)))]

wherein the generator gradient reaches its minimum when the cumulative value of the probabilities with which all the discriminators in the multi-discriminator unit judge the false sample to be false reaches its maximum, i.e. when D_i(G(z)) = 0 for every i. Thus, gradient vanishing occurs only when all the discriminators judge a false sample to be false, which enables the generator to receive more positive feedback from the multiple discriminators.
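The claim that the gradient vanishes only when every discriminator rejects the sample can be checked numerically. Assuming sigmoid-output discriminators (an assumption for illustration), the generator-loss gradient with respect to each discriminator's input logit works out to −D_i(G(z))/k, so the total gradient magnitude is small only when all D_i are near zero:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def generator_grad_wrt_logits(logits):
    """For L_G = (1/k) * sum_i log(1 - D_i) with D_i = sigmoid(s_i),
    the chain rule gives dL_G/ds_i = -D_i / k: each term vanishes only
    when that discriminator's output D_i is near zero."""
    k = len(logits)
    return [-sigmoid(s) / k for s in logits]

all_reject = generator_grad_wrt_logits([-20.0, -20.0, -20.0])  # every D_i ~ 0
one_unsure = generator_grad_wrt_logits([-20.0, -20.0, 0.0])    # one D_i = 0.5
```

A single undecided discriminator (D_i = 0.5) keeps the total gradient away from zero, which is the "more positive feedback" effect the description attributes to the multi-discriminator design.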
Compared with the prior art, the invention has the following advantages:
the invention improves the structure of the traditional GAN model, changes the original single discriminator structure into a training mode of multi-discriminator combined feedback, utilizes a selection function to fuse the output results of each discriminator to complete the updating training of a generator, and dynamically adjusts the type of the selection function through parameter self-adaption, thereby effectively improving the training stability and the generated sample quality of the GAN model, realizing the high-quality and high-stability data expansion of an SAR training sample data set, better training the subsequent target recognition model and further improving the target recognition accuracy of the SAR image.
Secondly, the invention adopts a dynamically adjustable parameter to realize type conversion of the dynamic selection function. On the one hand, the maximum function can be used to alleviate the non-convexity of V(G, D) caused by the deep network structure, while the arithmetic mean method is combined to avoid gradient vanishing in the generator; on the other hand, the mean function can effectively reduce the variance of the feedback provided by the multiple discriminators to the generator, which greatly improves the training stability of the whole MD-GAN model to a certain extent, especially when the discriminators have not been trained to optimality.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of a basic structure of a conventional GAN model;
FIG. 3 is a schematic diagram of a multi-discriminator unit according to the present invention;
FIG. 4 is a schematic diagram of the basic structure of the MD-GAN model of the present invention;
FIG. 5 is a schematic diagram of a process for generating training sample data by the method of the present invention in an embodiment;
FIG. 6 is a diagram of an original real sample in an embodiment;
fig. 7 is a sample example of a training sample data set finally generated in the embodiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
As shown in fig. 1, a SAR image target recognition method based on a multi-discriminator generative adversarial network includes the following steps:
s1, obtaining original real samples, and generating a training sample data set based on a multi-discriminator generative adversarial network model;
s2, training the convolutional neural network by utilizing the training sample data set to obtain a trained target recognition model;
and S3, inputting the SAR image to be detected to the target recognition model, and outputting a target recognition result corresponding to the SAR image to be detected through feature extraction and feature matching.
The invention provides a multi-discriminator generative adversarial network model. On the basis of the alternating training principle of the traditional GAN generator and discriminator, the original single-discriminator structure is changed into a training mode of joint multi-discriminator feedback: the generator is updated by fusing the output result of each discriminator through a selection function (Choose Function), whose type is dynamically adjusted through parameter self-adaptation. This avoids problems such as gradient vanishing and mode collapse during training and improves both the training stability of the GAN model and the quality of the generated samples. The structure of the traditional GAN model, shown in figure 2, is a deep neural network consisting of two alternately trained multilayer perceptron (MLP) networks. Its basic framework comprises a generator G and a discriminator D. G continuously learns the feature distribution of the real samples, aiming to convert the initially input random noise into false samples as similar as possible to the real sample data; the discriminator D judges whether an input sample image is a real sample, in order to accurately distinguish the real samples from the false samples generated by the generator G. The optimization of the whole GAN network can be regarded as a minimax game problem, namely

min_G max_D V(D, G) = E_{x~P_data(x)}[log D(x)] + E_{z~P_z(z)}[log(1 − D(G(z)))]
In the alternate training, the two models G and D are optimized simultaneously, and finally the Nash Equilibrium (Nash Equilibrium) between G and D is achieved, namely the generator G generates a false sample with extremely high similarity to a real sample, and the discriminator D cannot accurately distinguish the real sample from the false sample generated by G.
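The minimax value function of the classic GAN described above can be estimated empirically from finite samples. The sketch below uses a toy one-dimensional sigmoid discriminator and hypothetical sample lists (both assumptions for illustration); note that a constant discriminator D = 1/2 — the discriminator at Nash equilibrium — yields V = −2 log 2:

```python
import math

def value_fn(discriminator, real, fake):
    """Empirical estimate of V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
    from finite sample lists; `discriminator` maps a scalar to (0, 1)."""
    term_real = sum(math.log(discriminator(x)) for x in real) / len(real)
    term_fake = sum(math.log(1.0 - discriminator(x)) for x in fake) / len(fake)
    return term_real + term_fake

d = lambda x: 1.0 / (1.0 + math.exp(-4.0 * x))  # toy sigmoid discriminator
real = [1.0, 1.2, 0.8]                          # hypothetical real samples
fake = [-1.0, -0.9, -1.1]                       # hypothetical generated samples
v_sharp = value_fn(d, real, fake)               # D separates the two sets well
v_nash = value_fn(lambda x: 0.5, real, fake)    # equilibrium D: V = -2 ln 2
```

A discriminator that cleanly separates the two sample sets achieves a higher value V than the equilibrium discriminator, which is exactly what the alternating optimization trades off.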
In the MD-GAN model proposed by the present invention (shown in detail in figure 3 and figure 4), a plurality of discriminators {D_1, D_2, ..., D_k} receive in parallel the real samples and the false samples output by the generator G, and the generator G is trained based on a summary of the feedback results of all the discriminators. The objective function of MD-GAN is:

min_G max_{D_i} F_C(V(G, D_1), V(G, D_2), ..., V(G, D_k)), i = 1, 2, ..., k

wherein F_C(·) represents a selection function for processing the outputs of the plurality of discriminators. During the training of MD-GAN, each discriminator D_i maximizes its corresponding V(G, D_i), and the optimization goal of the generator G changes dynamically with the form of F_C(·), which can take a maximum or mean function: when F_C(·) = max(·), the generator trains synchronously with the fastest-optimizing discriminator; when F_C(·) = mean(·), the generator is trained based on the overall average output of all the discriminators.
In the MD-GAN framework, if the selection function is set to F_C(·) = max(·), then for the fixed generator G, solving the maximum of F_C(V(G, D_1), V(G, D_2), ..., V(G, D_k)) amounts to optimizing max_i V(G, D_i) as the generator loss after random initialization, which can effectively alleviate the non-convexity of V(G, D) caused by the deep network structure. At the same time, the requirement that the generator G minimize max_i V(G, D_i) during training forces G to generate high-similarity false samples that can pass the discrimination of all the discriminators, thereby enhancing the data generation capability of the network.
However, continuously training in sync with the best-trained discriminator is very likely to hinder the learning of the generator G. In practice, the optimization of max_i V(G, D_i) is usually difficult to converge, because computing the maximum over multiple discriminators makes the requirement on generator training too demanding: it is difficult for the generator to produce samples that satisfy the discrimination criteria of all the discriminators (i.e., the gradient vanishes). Therefore, in the MD-GAN model, the Arithmetic Mean, one of the three Pythagorean Means, is introduced to weaken the influence on the generator of solving the maximum over multiple discriminators. Its mathematical expression is:

F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i(λ) · V(G, D_i)

In the formula, ω_i(λ) represents the arithmetic weight and λ ∈ [0, +∞) is a dynamically adjustable parameter controlling the choice of F_C(·): λ → 0 corresponds to mean(·) and λ → +∞ corresponds to max(·). Thus, when λ = 0, the training formula of the generator G after introducing the arithmetic mean can be written as:

L_G = (1/k) · Σ_{i=1}^{k} E_{z~P_G}[log(1 − D_i(G(z)))]

The gradient minimum of the generator G is attained when D_i(G(z)) = 0 for every i. Thus, gradient vanishing can occur only when all the discriminators judge the generated samples to be false, which enables the generator to receive more positive feedback from the multiple discriminators. In addition, the operation of taking the overall mean effectively reduces the variance of the feedback that the multiple discriminators provide to the generator, which greatly improves the training stability of the system to a certain extent, especially when the discriminators have not yet been trained to optimality.
In specific application, in the initial stage the parameter λ is first set to λ → 0, i.e. the selection function adopts F_C(·) = mean(·), to train the network relatively smoothly; as training progresses and the ability of the discriminators gradually increases, λ is then increased to shift the selection function towards F_C(·) = max(·), so as to train the generator more precisely.
In this embodiment, the iteration counts of the MD-GAN model are set to 50, 100, 300, 400 and 500 respectively, and the corresponding generated sample data are shown in fig. 5. When the iteration count is 500, the generated sample data are already very close to the original real sample data (as shown in fig. 6). Starting from limited original real sample data, this embodiment, after adopting the MD-GAN model provided by the present invention, effectively expands the number and improves the quality of samples in the generated training sample data set (as shown in fig. 7). As can be seen from fig. 6 and fig. 7, the MD-GAN model provided by the present invention is a GAN sample generation technology combined with dynamic joint training of multiple discriminators. It solves the problem that a deep classification model cannot be sufficiently trained due to insufficient real SAR samples, and ensures the quality of the generated samples while realizing stable training of the generative model, thereby ensuring the accuracy of subsequent target recognition model training and improving the accuracy of SAR image target recognition.
Claims (10)
1. A SAR image target recognition method based on a multi-discriminator generative adversarial network, characterized by comprising the following steps:
s1, obtaining original real samples, and generating a training sample data set based on a multi-discriminator generative adversarial network model;
s2, training the convolutional neural network by utilizing the training sample data set to obtain a trained target recognition model;
and S3, inputting the SAR image to be detected to the target recognition model, and outputting a target recognition result corresponding to the SAR image to be detected through feature extraction and feature matching.
2. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 1, characterized in that the multi-discriminator generative adversarial network model in step S1 includes a generator, a multi-discriminator unit and a judging unit, wherein the input of the generator is random noise, the input of the multi-discriminator unit is the original real samples, the output end of the generator is connected to the multi-discriminator unit, the multi-discriminator unit is connected to the input end of the judging unit, the output end of the judging unit is connected to the generator and the multi-discriminator unit respectively, and the generator generates false samples for the multi-discriminator unit from the random noise;
the multi-discriminator unit is used for discriminating the false samples and the original real samples and outputting feedback results;
and the judging unit is used for judging whether the feedback result is accurate or not and respectively updating the generator and the multi-discriminator unit according to the judging result.
3. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 2, characterized in that the multi-discriminator unit comprises a plurality of parallel discriminators, the inputs of each discriminator are the false samples and the original real samples, the output ends of the discriminators are all connected to a dynamic adjustment module, the dynamic adjustment module is connected to the judging unit, and the dynamic adjustment module dynamically selects among the decision results output by the discriminators to obtain the feedback result.
4. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 3, characterized in that the step S1 specifically includes the following steps:
s11, acquiring an original real sample, and inputting the original real sample to a plurality of discriminators;
s12, inputting random noise to a generator, generating corresponding false samples by the generator, and inputting the false samples to a plurality of discriminators;
s13, according to the false sample and the original real sample, a plurality of discriminators respectively output a plurality of judgment results correspondingly, wherein the judgment results are specifically the difference degree between the false sample and the original real sample;
s14, according to the dynamic selection function, the dynamic adjustment module performs dynamic selection processing on the plurality of judgment results to obtain feedback results through screening;
s15, the judging unit judges the accuracy of the feedback result, so that the generator and the multi-discriminator unit are respectively updated, and the process then returns to step S11; according to the objective function of the multi-discriminator generative adversarial network, the generator is trained based on the feedback result of the multi-discriminator unit, and the trained generator is obtained after a preset number of iterations;
and S16, inputting random noise to the trained generator to generate a training sample data set.
5. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 4, characterized in that the dynamic selection function includes a mean function and a maximum function.
6. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 5, characterized in that the dynamic selection function is specifically:

F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i(λ) · V(G, D_i)

wherein F_C(·) is the dynamic selection function, which can take the mean function mean(·) or the maximum function max(·); λ ∈ [0, +∞) is a dynamic adjustment parameter controlling the choice of F_C(·), with the weights ω_i(λ) becoming uniform as λ → 0 (corresponding to mean(·)) and concentrating on the largest V(G, D_i) as λ → +∞ (corresponding to max(·)).
7. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 6, characterized in that the objective function of the multi-discriminator generative adversarial network is:

min_G max_{D_i} F_C(V(G, D_1), V(G, D_2), ..., V(G, D_k)), i = 1, 2, ..., k

wherein G denotes the generator, D denotes a discriminator, and V is a merit function representing the discriminating performance of a discriminator; V(G, D_1) is the decision result of the first discriminator in the multi-discriminator unit, i.e. the degree of difference between the false samples and the original real samples as judged by the first discriminator, and so on, so that V(G, D_i) is the decision result of the i-th discriminator in the multi-discriminator unit, i.e. the degree of difference between the false samples and the original real samples as judged by the i-th discriminator;

k is the total number of discriminators in the multi-discriminator unit;

max_{D_i} means that, for the fixed generator G, the discriminators can maximally distinguish the false samples from the original real samples, and min_G means that, with the discriminators fixed, a generator G is obtained that minimizes the difference between the output false samples and the original real samples.
8. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 7, characterized in that the specific process by which the generator is trained based on the feedback result of the multi-discriminator unit in step S15 is as follows:

S151, first set the dynamic adjustment parameter λ as λ → 0, i.e. the dynamic selection function adopts F_C(·) = mean(·), so that the generator is trained based on the overall average feedback result output by all the discriminators;

S152, when the judging unit judges the feedback result of the discriminators to be true N consecutive times, set the dynamic adjustment parameter λ as λ → +∞, i.e. the dynamic selection function adopts F_C(·) = max(·), so that the generator is trained based on the feedback result output by the best-optimized discriminator until the preset number of iterations is reached and the trained generator is obtained, where N ≥ 5.
9. The SAR image target recognition method based on a multi-discriminator generative adversarial network according to claim 8, characterized in that, when the generator is trained based on the feedback result output by the best-optimized discriminator in step S152, an arithmetic mean method is specifically adopted to prevent the generator gradient from vanishing as a result of solving the maximum over the plurality of discriminators, where the mathematical expression of the adopted arithmetic mean method is:

F_C(V(G, D_1), ..., V(G, D_k)) = Σ_{i=1}^{k} ω_i · V(G, D_i)

wherein ω_i is the arithmetic weight of the i-th discriminator in the multi-discriminator unit.
10. The SAR image target recognition method based on the multi-discriminator generative adversarial network of claim 9, wherein the training objective when the generator generates false samples is:

min_G E_{z~P_G} [ Σ_{i=1}^{k} log(1 − D_i(G(z))) ],

where z is the input random noise, G(z) is the false sample generated by the generator, D_i(G(z)) is the probability that the i-th discriminator in the multi-discriminator unit judges the false sample to be true, 1 − D_i(G(z)) is the probability that the i-th discriminator judges the false sample to be false, E denotes expectation, and P_G is the data distribution generated by the generator;
the gradient of the generator is specifically:

∇ = E_{z~P_G} [ ∇ log Π_{i=1}^{k} (1 − D_i(G(z))) ],

where Π_{i=1}^{k} (1 − D_i(G(z))) is the cumulative product of the probabilities with which all discriminators in the multi-discriminator unit judge the false sample to be false; the generator gradient attains its minimum when Π_{i=1}^{k} (1 − D_i(G(z))) = 1, at which point D_i(G(z)) = 0 for every i; thus gradient vanishing occurs only when all discriminators judge the false sample to be false, which enables the generator to receive more positive feedback from the multiple discriminators.
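The gradient-vanishing condition stated in this claim can be checked numerically. The sketch below is illustrative only and assumes a logit parameterization, where the gradient of log(1 − sigmoid(x_i)) with respect to the discriminator logit x_i is −D_i(G(z)), so the summed gradient is zero only when every discriminator outputs 0:

```python
import numpy as np

def generator_loss(d_fake):
    """Sum_i log(1 - D_i(G(z))): the generator's training objective for one z."""
    d = np.asarray(d_fake, dtype=float)
    return float(np.sum(np.log(1.0 - d + 1e-8)))

def logit_gradient(d_fake):
    """d/dx_i of log(1 - sigmoid(x_i)) is -D_i, so the per-discriminator
    gradient vanishes only where D_i(G(z)) = 0 (fake judged as false)."""
    return -np.asarray(d_fake, dtype=float)

all_false = [0.0, 0.0, 0.0]   # every discriminator judges the fake as false
mixed     = [0.0, 0.0, 0.6]   # one discriminator is partly fooled
assert np.allclose(logit_gradient(all_false), 0.0)   # only here does it vanish
assert not np.allclose(logit_gradient(mixed), 0.0)   # any fooled D_i restores signal
assert generator_loss(all_false) > generator_loss(mixed)
```

With a single discriminator the gradient dies as soon as that one discriminator rejects the fake; with k discriminators it dies only when all k reject it, which is the extra positive feedback the claim describes.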
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010614959.XA CN111767861B (en) | 2020-06-30 | 2020-06-30 | SAR image target recognition method based on multi-discriminant generation countermeasure network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010614959.XA CN111767861B (en) | 2020-06-30 | 2020-06-30 | SAR image target recognition method based on multi-discriminant generation countermeasure network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111767861A true CN111767861A (en) | 2020-10-13 |
CN111767861B CN111767861B (en) | 2024-03-12 |
Family
ID=72723527
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010614959.XA Active CN111767861B (en) | 2020-06-30 | 2020-06-30 | SAR image target recognition method based on multi-discriminant generation countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111767861B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112415514A (en) * | 2020-11-16 | 2021-02-26 | 北京环境特性研究所 | Target SAR image generation method and device |
CN112668529A (en) * | 2020-12-31 | 2021-04-16 | 神思电子技术股份有限公司 | Dish sample image enhancement identification method |
CN112766348A (en) * | 2021-01-12 | 2021-05-07 | 云南电网有限责任公司电力科学研究院 | Method and device for generating sample data based on antagonistic neural network |
CN113066049A (en) * | 2021-03-10 | 2021-07-02 | 武汉大学 | MEMS sensor defect type identification method and system |
CN113343124A (en) * | 2021-06-21 | 2021-09-03 | 中国科学技术大学 | Training method, detection method and device for generating confrontation network |
CN113485863A (en) * | 2021-07-14 | 2021-10-08 | 北京航空航天大学 | Method for generating heterogeneous unbalanced fault samples based on improved generation countermeasure network |
CN113298007B (en) * | 2021-06-04 | 2024-05-03 | 西北工业大学 | Small sample SAR image target recognition method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108399625A (en) * | 2018-02-28 | 2018-08-14 | 电子科技大学 | A kind of SAR image orientation generation method generating confrontation network based on depth convolution |
CN108564115A (en) * | 2018-03-30 | 2018-09-21 | 西安电子科技大学 | Semi-supervised polarization SAR terrain classification method based on full convolution GAN |
CN109493308A (en) * | 2018-11-14 | 2019-03-19 | 吉林大学 | The medical image synthesis and classification method for generating confrontation network are differentiated based on condition more |
CN109785258A (en) * | 2019-01-10 | 2019-05-21 | 华南理工大学 | A kind of facial image restorative procedure generating confrontation network based on more arbiters |
CN110516561A (en) * | 2019-08-05 | 2019-11-29 | 西安电子科技大学 | SAR image target recognition method based on DCGAN and CNN |
WO2020029356A1 (en) * | 2018-08-08 | 2020-02-13 | 杰创智能科技股份有限公司 | Method employing generative adversarial network for predicting face change |
2020
- 2020-06-30 CN CN202010614959.XA patent/CN111767861B/en active Active
Non-Patent Citations (1)
Title |
---|
Zhang Bin: "Image Restoration Optimization Algorithms", Beijing: National Defense Industry Press, pages: 239-240 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112415514A (en) * | 2020-11-16 | 2021-02-26 | 北京环境特性研究所 | Target SAR image generation method and device |
CN112415514B (en) * | 2020-11-16 | 2023-05-02 | 北京环境特性研究所 | Target SAR image generation method and device |
CN112668529A (en) * | 2020-12-31 | 2021-04-16 | 神思电子技术股份有限公司 | Dish sample image enhancement identification method |
CN112766348A (en) * | 2021-01-12 | 2021-05-07 | 云南电网有限责任公司电力科学研究院 | Method and device for generating sample data based on antagonistic neural network |
CN113066049A (en) * | 2021-03-10 | 2021-07-02 | 武汉大学 | MEMS sensor defect type identification method and system |
CN113298007B (en) * | 2021-06-04 | 2024-05-03 | 西北工业大学 | Small sample SAR image target recognition method |
CN113343124A (en) * | 2021-06-21 | 2021-09-03 | 中国科学技术大学 | Training method, detection method and device for generating confrontation network |
CN113343124B (en) * | 2021-06-21 | 2023-10-24 | 中国科学技术大学 | Training method, detecting method and device for generating countermeasure network |
CN113485863A (en) * | 2021-07-14 | 2021-10-08 | 北京航空航天大学 | Method for generating heterogeneous unbalanced fault samples based on improved generation countermeasure network |
Also Published As
Publication number | Publication date |
---|---|
CN111767861B (en) | 2024-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111767861A (en) | SAR image target identification method based on multi-discriminator generation countermeasure network | |
CN110163110B (en) | Pedestrian re-recognition method based on transfer learning and depth feature fusion | |
CN107194336B (en) | Polarized SAR image classification method based on semi-supervised depth distance measurement network | |
CN109711254B (en) | Image processing method and device based on countermeasure generation network | |
CN111563275B (en) | Data desensitization method based on generation countermeasure network | |
CN111814871A (en) | Image classification method based on reliable weight optimal transmission | |
CN113705526B (en) | Hyperspectral remote sensing image classification method | |
CN109063724B (en) | Enhanced generation type countermeasure network and target sample identification method | |
US11741572B2 (en) | Method and system for directed transfer of cross-domain data based on high-resolution remote sensing images | |
CN113326731A (en) | Cross-domain pedestrian re-identification algorithm based on momentum network guidance | |
WO2008016109A1 (en) | Learning data set optimization method for signal identification device and signal identification device capable of optimizing the learning data set | |
CN111736125A (en) | Radar target identification method based on attention mechanism and bidirectional stacked cyclic neural network | |
CN108550163A (en) | Moving target detecting method in a kind of complex background scene | |
US20230111287A1 (en) | Learning proxy mixtures for few-shot classification | |
CN111580097A (en) | Radar target identification method based on single-layer bidirectional cyclic neural network | |
CN114186672A (en) | Efficient high-precision training algorithm for impulse neural network | |
CN110096976A (en) | Human behavior micro-Doppler classification method based on sparse migration network | |
CN113505855A (en) | Training method for anti-attack model | |
CN111832404A (en) | Small sample remote sensing ground feature classification method and system based on feature generation network | |
CN111596276A (en) | Radar HRRP target identification method based on spectrogram transformation and attention mechanism recurrent neural network | |
CN113011523A (en) | Unsupervised depth field adaptation method based on distributed countermeasure | |
CN110427804B (en) | Iris identity verification method based on secondary transfer learning | |
Tun et al. | Federated learning with intermediate representation regularization | |
CN115481685A (en) | Radiation source individual open set identification method based on prototype network | |
CN112308089A (en) | Attention mechanism-based capsule network multi-feature extraction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||