CN109871902A - SAR small sample identification method based on a super-resolution adversarial generation cascade network - Google Patents

SAR small sample identification method based on a super-resolution adversarial generation cascade network

Info

Publication number
CN109871902A
Authority
CN
China
Prior art keywords
network
resolution
image
super
sar
Prior art date
Legal status
Granted
Application number
CN201910177075.XA
Other languages
Chinese (zh)
Other versions
CN109871902B (en)
Inventor
关键
孙建国
袁野
龙云飞
吴嘉恒
林尤添
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University
Priority to CN201910177075.XA
Publication of CN109871902A
Application granted
Publication of CN109871902B
Active legal status
Anticipated expiration legal status

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of radar data processing, and in particular relates to a SAR small sample identification method based on a super-resolution adversarial generation cascade network. To address the problems that SAR target images have low resolution and therefore indistinct target features, are strongly affected by the environment, and have easily confused data samples, a deep-learning-based super-resolution network is proposed to magnify low-resolution SAR small-sample images, so that the classification network can extract more features. Unlike traditional super-resolution methods, image super-resolution performed with a GAN can effectively extract image features and generate realistic high-resolution images that are not over-smoothed. Aimed at the low resolution, blurred features and easily confused samples of SAR small-sample images, a GAN super-resolution model tailored to the inherent characteristics of SAR images is built. A super-resolution model with a magnification factor of 4 is realized, enlarging the pixel count of the original image to 16 times the original; in this way more content and features can be supplied to the classifier.

Description

SAR small sample identification method based on a super-resolution adversarial generation cascade network
Technical Field
The invention belongs to the field of radar data processing, and particularly relates to a SAR small sample identification method based on a super-resolution adversarial generation cascade network.
Background
Synthetic Aperture Radar (SAR) is an active microwave imaging sensor. By transmitting broadband signals and combining them with the synthetic aperture technique, SAR can simultaneously obtain two-dimensional high-resolution images in both the range and azimuth directions. Compared with traditional optical remote sensing and hyperspectral remote sensing, SAR has all-weather, day-and-night imaging capability and a certain penetrating ability, and the obtained images reflect the microwave scattering characteristics of targets, making SAR an important technical means for acquiring ground-object information. SAR has been widely used in civilian applications and is an important technical means for tasks such as general surveys of natural resources and natural disaster monitoring.
As SAR technology has matured, it has been widely applied in fields such as natural disaster monitoring and has high research value. Through classification and identification of SAR image targets, target information can be acquired quickly and effectively. A SAR image reflects the backscattering intensity of electromagnetic waves and carries both geometric and electromagnetic characteristics; it is a high-resolution radar image. SAR image interpretation is therefore a complex task, and manual interpretation of SAR images consumes a great deal of resources, which shows that efficient algorithms are needed in the field of SAR automatic target recognition.
SAR target recognition refers to analyzing the target's scattered echo signal, extracting signal features and target attribute features from it, and determining the type and attributes of the target. A SAR image is not as intuitive as an optical image and its edges are not as easy to detect, so the acquired target information must be processed by a computer and then compared and measured against known target feature lines in a database to achieve automatic target recognition. In recent decades, SAR target recognition technology has achieved many theoretical results, but there is still a large gap from practical application.
This is because, although SAR images often have very high resolution, the spatial scale covered by SAR is huge, so the target image produced after target segmentation of a high-resolution SAR image is extremely small and of low resolution. Moreover, owing to factors such as complex environments and the speckle noise present in SAR, when targets in the image need to be identified, conventional methods generally cannot be used: the SAR image acquired in a complex environment must first be denoised, and the target must be segmented from the high-resolution SAR image using a target segmentation technique before the corresponding target can be identified.
At present, deep learning has achieved remarkable results in many fields, which is accelerating its application in more complex scenarios. Related problems in computer vision have advanced greatly with the support of deep learning. Deep learning is a machine learning technique based on neural networks; a classical deep learning network is the deep Convolutional Neural Network (CNN), whose structure gives the model strong feature extraction and classification capabilities, so that detection and recognition of targets under general conditions can be realized well. SAR image analysis and processing based on deep learning has also become a current research hotspot, and many studies attempt to use CNNs to solve the target identification problem under the low-resolution conditions of SAR small-sample recognition. Although such methods work well in some fields, many shortcomings remain, because in SAR small-sample recognition the target features are blurred and the targets are therefore easily confused.
The generative adversarial network (GAN) is one of the most promising methods of recent years for unsupervised learning on complex distributions. A generative adversarial network comprises two basic modules: a generative model G and a discriminative model D. The input to the model is a random Gaussian white-noise signal z; the noise signal is mapped to a new data space by the generative model G to obtain the generated data G(z). The discriminative network D then outputs a probability value for the real data x and for the generated data G(z) respectively, representing the confidence with which D judges whether its input is real data, and thereby evaluating the quality of the data generated by G.
The invention innovatively provides a SAR small sample identification method based on a super-resolution adversarial generation cascade network. The invention designs a super-resolution generative adversarial network (SRGAN), in which a generative adversarial network model is used to improve the resolution of the images in SAR small-sample identification, and an Inception-ResNet-V2 deep neural network adapted to the characteristics of SAR images is used for target recognition, so that the accuracy of SAR small-sample identification is improved while the sample quality is improved.
Disclosure of Invention
The invention aims to solve the problems of low image resolution and low identification accuracy in SAR small-sample identification. The invention provides a SAR small sample identification method based on a super-resolution adversarial generation cascade network.
A SAR small sample identification method based on a super-resolution adversarial generation cascade network comprises the following steps:
step 1: pre-train the SRGAN using a large database of unlabeled SAR images;
step 2: pre-train the Inception target recognition network using an existing target recognition database;
step 3: after the super-resolution network and the classification network are cascaded, train the whole network again using the labeled data;
step 4: pass a new SAR target sample through the network to obtain the recognition result.
In step 2, the Inception target recognition network is pre-trained using an existing target recognition database; a generator function {G(z)} is trained to produce the HR output image for a specified LR input, and the Inception network is trained to serve as the perception network for extracting high-level features from the final HR image. The method comprises the following steps:
step 1.1: input the low-resolution image LR into the generative network to obtain the generated high-resolution image SR;
step 1.2: input the high-resolution image SR and the original real image HR together into the discriminative network to obtain the accuracy with which the discriminative network distinguishes real from generated image samples, i.e., the accuracy of distinguishing the original real image HR from the generated high-resolution image SR;
step 1.3: feed the result back to the generative model;
step 1.4: the generative model network and the discriminative model network form an adversarial pair.
In step 1.2, the high-resolution image SR and the original real image HR are input together into the discriminative network; the super-resolution network is trained using a pixel-based MSE loss, computed as:

$$l_{MSE}^{SR}=\frac{1}{r^{2}WH}\sum_{x=1}^{rW}\sum_{y=1}^{rH}\left(I_{x,y}^{HR}-G_{\theta_G}\left(I^{LR}\right)_{x,y}\right)^{2}$$

where r is the magnification factor, W is the width of the image in pixels, H is the height of the image in pixels, $I_{x,y}^{HR}$ is the pixel value at the (x, y) coordinate of the original image, and $G_{\theta_G}(I^{LR})_{x,y}$ is the pixel value at the (x, y) coordinate of the generated image.

In step 1.3, when feeding back to the generative model, an adversarial loss is introduced into the generative model, and a discriminative network is trained at the same time using the adversarial loss. Besides the discriminative model loss, the adversarial loss is defined based on the probability of the discriminative network over all training samples, and is expressed as:

$$l_{Gen}^{SR}=\sum_{n=1}^{N}-\log D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)$$

where $D_{\theta_D}(G_{\theta_G}(I^{LR}))$ is the probability that the super-resolved image $G_{\theta_G}(I^{LR})$ is judged to be a real image by the adversarial network.

In step 1.4, the generative model network and the discriminative model network form an adversarial pair; a discriminator loss is produced when the discriminator makes its judgment, and this loss is calculated as:

$$l_{D}=-\log D_{\theta_D}\left(I^{HR}\right)-\log\left(1-D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)\right)$$

where $D_{\theta_D}(G_{\theta_G}(I^{LR}))$ is the probability that the super-resolved image $G_{\theta_G}(I^{LR})$ is judged to be a real image by the adversarial network, and $D_{\theta_D}(I^{HR})$ is the probability that the original image $I^{HR}$ is judged to be a real image.
In step 2, the Inception target recognition network is pre-trained using an existing target recognition database, and the perceptual loss is computed using the Inception loss, expressed as:

$$l_{Inception/i,j}^{SR}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\left(\phi_{i,j}\left(I^{HR}\right)_{x,y}-\phi_{i,j}\left(G_{\theta_G}\left(I^{LR}\right)\right)_{x,y}\right)^{2}$$

where $W_{i,j}$ and $H_{i,j}$ respectively denote the dimensions of the corresponding feature map in the Inception network, $\phi_{i,j}$ denotes the sample features extracted after the i-th and before the j-th Inception module, $I^{HR}$ denotes the high-resolution picture, $I^{LR}$ denotes the low-resolution picture, and (x, y) indexes the pixel coordinates on the feature map.
In step 3, after the super-resolution network and the classification network are cascaded, the whole network is trained again using the labeled data. The Inception-based perception network takes the output of the intermediate layer as its input, and the target recognition network takes the super-resolution picture output by the super-resolution generative adversarial network as its input; the super-resolution generative adversarial network and the Inception-ResNet-V2 network are used together. The Inception-ResNet-V2 network contains three Inception structures, namely Inception-A, Inception-B and Inception-C, and consists of a Stem layer, 4 Inception-A layers, a Reduction-A layer, 7 Inception-B layers, a Reduction-B layer and 3 Inception-C layers, finally passing through an aggregation layer, i.e., an average pooling layer, and Softmax to output the results, while Dropout is used in the network. The cross-entropy identification loss is expressed as:

$$L_{id}=-\sum_{m}q(m)\log p(m)=-\log p(y)$$

where p(m) denotes the probability that the label output by the model is m; since y is the true ID label, q(y) = 1 and q(m) = 0 for all m ≠ y. Finally, all loss functions are merged into one, expressed as the equally weighted sum:

$$L_{total}=l_{MSE}^{SR}+l_{Inception}^{SR}+l_{Gen}^{SR}+L_{id}$$
the invention has the beneficial effects that:
the anti-network and the inclusion-ReNet-V2 network are generated by utilizing the super-resolution, and the network structure and the loss function of the anti-network and the inclusion-ReNet-V2 network are optimized, so that the accuracy of SAR small sample identification is improved while the sample quality is improved.
Drawings
Fig. 1 shows the general technical route of the present invention.
Fig. 2(a) shows the super-resolution generative model network according to the present invention.
Fig. 2(b) is a schematic diagram of the super-resolution discriminative model network according to the present invention.
Fig. 3 is the initial Inception structure.
Fig. 4 is a diagram of the Inception-ResNet-V2 network architecture.
Fig. 5(a) is a schematic view of the Inception-A network structure.
Fig. 5(b) is a schematic view of the Inception-B network structure.
Fig. 5(c) is a schematic view of the Inception-C network structure.
Fig. 6 is a flowchart of the SAR small sample identification method based on a super-resolution adversarial generation cascade network.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention aims to solve the problems of low image resolution and low identification accuracy in SAR small-sample identification. The invention provides a SAR small sample identification method based on a super-resolution adversarial generation cascade network. The specific implementation scheme is as follows:
(1) Pre-train the SRGAN using a large database of unlabeled SAR images; the specific implementation steps are as follows:
① The low-resolution image (LR) is input into the generative network to obtain the generated high-resolution image (SR). The structure of the generative model follows the ResNet network structure; compared with previously proposed models, the generative model G proposed by the invention has a deeper network structure, using 16 residual blocks of identical structure. Each block first uses two convolutional layers with 3 × 3 kernels and 64 feature maps, and a Parametric ReLU is used as the activation function after each convolutional layer.
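As an illustration of this generator structure, the following is a minimal PyTorch-style sketch rather than the patented implementation: the 16 residual blocks, 3 × 3 kernels, 64 feature maps and Parametric ReLU activations follow the description above, while the single input channel, the 9 × 9 head/tail convolutions and the two pixel-shuffle upsampling stages (giving the 4× magnification mentioned in the abstract) are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One of the 16 identical residual blocks: two 3x3 convs with 64
    feature maps, each followed by a Parametric ReLU, plus a skip connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.PReLU(),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """SRGAN-style generator G: maps a low-resolution SAR patch to a 4x
    super-resolved image (16x the original pixel count)."""
    def __init__(self, num_blocks: int = 16, channels: int = 64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, channels, 9, padding=4), nn.PReLU())
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        # Two pixel-shuffle stages give the assumed 4x magnification.
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
        )
        self.tail = nn.Conv2d(channels, 1, 9, padding=4)

    def forward(self, lr):
        feat = self.head(lr)
        feat = feat + self.blocks(feat)      # global residual connection
        return self.tail(self.upsample(feat))
```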
② The high-resolution image (SR) and the original real image (HR) are then input together into the discriminative network to obtain the accuracy with which the network distinguishes real from generated image samples. The network structure of the discriminative model D contains 8 convolutional layers, each using a 3 × 3 kernel; batch normalization is applied after each convolutional layer to prevent vanishing gradients and accelerate model convergence, and LeakyReLU (α = 0.2; alternatively, α can be set as a variable learned through the back-propagation algorithm) is used as the activation function. After a feature map of size 512 is obtained, the probability of the sample class is obtained through two fully connected layers followed by a Sigmoid activation function, i.e., the probability that the sample is a real image rather than a generated one. The pixel-based MSE loss is calculated as follows:
$$l_{MSE}^{SR}=\frac{1}{r^{2}WH}\sum_{x=1}^{rW}\sum_{y=1}^{rH}\left(I_{x,y}^{HR}-G_{\theta_G}\left(I^{LR}\right)_{x,y}\right)^{2}$$

where r is the magnification factor, W and H are the width and height pixel values of the image, $I_{x,y}^{HR}$ denotes the pixel value at the (x, y) coordinate of the original image, and $G_{\theta_G}(I^{LR})_{x,y}$ denotes the pixel value at the (x, y) coordinate of the generated image.
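A corresponding sketch of the discriminative network D and the pixel-based MSE loss, again only for illustration: the 8 convolutional layers with 3 × 3 kernels, batch normalization, LeakyReLU(0.2), two fully connected layers and a Sigmoid output follow the text, whereas the channel schedule, strides and single-channel input are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch, stride):
    """3x3 convolution + batch normalization + LeakyReLU(0.2)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class Discriminator(nn.Module):
    """8 convolutional layers with 3x3 kernels and BN, then two fully
    connected layers and a Sigmoid giving P(sample is a real HR image)."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        widths = [64, 64, 128, 128, 256, 256, 512, 512]   # assumed channel schedule
        layers, prev = [], in_ch
        for i, w in enumerate(widths):
            layers.append(conv_block(prev, w, stride=1 if i % 2 == 0 else 2))
            prev = w
        self.features = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(512, 1024), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1024, 1), nn.Sigmoid(),
        )

    def forward(self, img):
        return self.classifier(self.features(img))

def pixel_mse_loss(sr, hr):
    """Pixel-wise MSE between the generated SR image and the real HR image,
    i.e. the l_MSE term averaged over all rW x rH pixels."""
    return F.mse_loss(sr, hr)
```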
This result is fed back to the generative model to improve its image-generation capability.
Adversarial loss: in addition to the image content loss described above, the invention adds a generative (adversarial) loss component. Besides the discriminative model loss, the adversarial loss is defined based on the probability of the discriminative network over all training samples. It is expressed as:

$$l_{Gen}^{SR}=\sum_{n=1}^{N}-\log D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)$$

where $D_{\theta_D}(G_{\theta_G}(I^{LR}))$ is the probability that the super-resolved image $G_{\theta_G}(I^{LR})$ is judged to be a real image by the adversarial network. For better gradient descent behaviour, the invention minimizes $-\log D_{\theta_D}(G_{\theta_G}(I^{LR}))$ instead of $\log\left(1-D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)\right)$.
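A minimal sketch of this adversarial (generative) loss in the −log D(G(I^LR)) form noted above; `disc` refers to the illustrative discriminator sketched earlier, not to a module defined by the patent.

```python
import torch

def generator_adversarial_loss(disc, sr_images, eps: float = 1e-8):
    """Adversarial loss for G: minimize -log D(G(I_LR)) over the batch,
    which replaces log(1 - D(G(I_LR))) for better gradient behaviour."""
    d_sr = disc(sr_images)                # probability each SR image is judged real
    return -torch.log(d_sr + eps).sum()   # summed over the samples in the batch
```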
③ The generative model network and the discriminative model network form an adversarial pair and promote each other. The discriminator loss produced when the discriminator makes its judgment is:

$$l_{D}=-\log D_{\theta_D}\left(I^{HR}\right)-\log\left(1-D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)\right)$$

When the real image $I^{HR}$ is judged as positive and the super-resolved image $G_{\theta_G}(I^{LR})$ as negative, the loss of D is to be minimized. The total loss is then obtained by summing this loss over all training samples:

$$l_{D}^{total}=\sum_{n=1}^{N}\left[-\log D_{\theta_D}\left(I^{HR}\right)-\log\left(1-D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)\right)\right]$$

Here $D_{\theta_D}(G_{\theta_G}(I^{LR}))$ is the probability that the super-resolved image $G_{\theta_G}(I^{LR})$ is judged to be a real image by the adversarial network, and $D_{\theta_D}(I^{HR})$ is the probability that the original image $I^{HR}$ is judged to be a real image. For better gradient descent behaviour, the invention minimizes $-\log D_{\theta_D}(G_{\theta_G}(I^{LR}))$ instead of $\log\left(1-D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)\right)$.
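Correspondingly, a minimal sketch of the discriminator loss, with real HR images treated as positives and super-resolved images as negatives; the batch sum stands in for the sum over training samples.

```python
import torch

def discriminator_loss(disc, hr_images, sr_images, eps: float = 1e-8):
    """Discriminator loss: -log D(I_HR) - log(1 - D(G(I_LR))), summed over the batch."""
    d_real = disc(hr_images)              # probability a real HR image is judged real
    d_fake = disc(sr_images.detach())     # probability a generated SR image is judged real
    return (-torch.log(d_real + eps) - torch.log(1.0 - d_fake + eps)).sum()
```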
The generative model G is used to generate a high-resolution image (SR) from a SAR small-sample image, while the discriminative model D is used to evaluate the visual quality of the images generated by G.
(2) Pre-train the Inception target recognition network using an existing target recognition database. The concrete network model is shown in Fig. 3.
The Inception loss is exploited in an attempt to achieve perceptual similarity. The perceptual loss is as follows:

$$l_{Inception/i,j}^{SR}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\left(\phi_{i,j}\left(I^{HR}\right)_{x,y}-\phi_{i,j}\left(G_{\theta_G}\left(I^{LR}\right)\right)_{x,y}\right)^{2}$$

where $W_{i,j}$ and $H_{i,j}$ respectively denote the dimensions of the corresponding feature map in the Inception network, $\phi_{i,j}$ denotes the sample features extracted after the i-th and before the j-th Inception module, $I^{HR}$ denotes the high-resolution picture, $I^{LR}$ denotes the low-resolution picture, and (x, y) indexes the pixel coordinates on the feature map.
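A sketch of how such an Inception-feature perceptual loss could be computed; `inception_features` stands for the truncated Inception network φ_{i,j}, and which layer the features are taken from is an assumption here.

```python
import torch
import torch.nn.functional as F

def perceptual_loss(inception_features, sr_images, hr_images):
    """Perceptual loss: MSE between Inception feature maps phi(I_HR) and
    phi(G(I_LR)), normalized by the feature-map size W_{i,j} * H_{i,j}."""
    with torch.no_grad():
        feat_hr = inception_features(hr_images)     # phi_{i,j}(I_HR), fixed target
    feat_sr = inception_features(sr_images)         # phi_{i,j}(G(I_LR))
    return F.mse_loss(feat_sr, feat_hr)             # mean over W_{i,j} x H_{i,j} (and channels)
```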
(3) After the super-resolution network and the classification network are cascaded, the whole network is trained again using the labeled data. The super-resolution generative adversarial network and the Inception-ResNet-V2 network are used together; the concrete network model is shown in Fig. 4. The Inception-ResNet-V2 network comprises three Inception structures, namely Inception-A, Inception-B and Inception-C, as shown in Fig. 5(a), Fig. 5(b) and Fig. 5(c).
Compared with the initial version of the Inception structure, Inception-ResNet-V2 mainly differs as follows:
a) a 1 × 1 convolutional layer is added in front of the convolutional layers in each Inception structure, thereby reducing the weight size and the feature dimension;
b) two 3 × 3 convolutional layers are used to replace the 5 × 5 convolutional layer of the original Inception, thereby increasing the network depth;
c) convolution factorization is introduced, decomposing a 7 × 7 convolution kernel into a 1 × 7 and a 7 × 1 kernel, which further increases the network depth and the non-linearity of the network;
d) residual connections are used for each convolutional structure in the Inception structure, which accelerates training, improves convergence speed and increases accuracy.
The Inception-ResNet-V2 network consists of a Stem layer, 4 Inception-A layers, a Reduction-A layer, 7 Inception-B layers, a Reduction-B layer and 3 Inception-C layers, and finally outputs the results through an average pooling layer and Softmax, while Dropout is used in the network to reduce the risk of overfitting.
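A structural sketch of this layer stacking; the internal composition of the Stem, Inception and Reduction blocks follows the published Inception-ResNet-V2 design and is abstracted behind factory arguments here, and the default feature dimension 1536 is taken from the standard design rather than from the text.

```python
import torch.nn as nn

def build_inception_resnet_v2(make_stem, make_inc_a, make_red_a, make_inc_b,
                              make_red_b, make_inc_c, num_classes: int,
                              feat_dim: int = 1536, dropout: float = 0.2):
    """Skeleton of the classifier: Stem -> 4x Inception-A -> Reduction-A ->
    7x Inception-B -> Reduction-B -> 3x Inception-C -> average pooling ->
    Dropout -> fully connected layer (Softmax is applied in the loss)."""
    return nn.Sequential(
        make_stem(),
        *[make_inc_a() for _ in range(4)],
        make_red_a(),
        *[make_inc_b() for _ in range(7)],
        make_red_b(),
        *[make_inc_c() for _ in range(3)],
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Dropout(dropout),
        nn.Linear(feat_dim, num_classes),
    )
```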
Given an original image $I^{HR}$ or a super-resolved image I, the output of the object recognition network is denoted $f(I)$. Here m is the ID of the object, so the probability of each ID label m is calculated as the Softmax of the network output:

$$p(m\mid I)=\frac{\exp\left(f_{m}(I)\right)}{\sum_{k}\exp\left(f_{k}(I)\right)}$$
to simplify this equation, the present invention ignores the association between m and I. The cross entropy loss of the authentication loss is calculated as:
where p (m) represents the probability when the label output by the model is m, and y is a true ID label, q (y) is 1, and q (m) is 0 for all m ≠ y. In this case, minimizing the recognition loss is equivalent to maximizing the probability of being assigned to the true sample class.
Finally, this cross-entropy loss is used when training the Inception-based target recognition network for target discrimination, and all the loss functions are merged into one, expressed as:

$$L_{total}=l_{MSE}^{SR}+l_{Inception}^{SR}+l_{Gen}^{SR}+L_{id}$$

For convenience, the invention balances the weights between these losses by giving them equal weight.
(4) A new SAR target sample is passed through the trained network to obtain the recognition result.
According to the invention, the SRGAN is pre-trained using a large database of unlabeled SAR images; the Inception-based target recognition network is then pre-trained using an existing target recognition database; and after the super-resolution network and the classification network are cascaded, the whole network is trained again using labeled data. The Inception-based perception network takes the output of the intermediate layer as its input, and the target recognition network takes the super-resolution picture output by the super-resolution generative adversarial network as its input.
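A compact sketch of this final cascaded fine-tuning stage as a training loop; the pre-trained modules, data loader and optimizer settings are assumptions, and the helper losses are the illustrative ones defined earlier.

```python
import torch

def finetune_cascade(generator, discriminator, classifier, inception_features,
                     labeled_loader, epochs: int = 10, lr: float = 1e-4):
    """Stage 3: after pre-training the SRGAN on unlabeled SAR images and the
    Inception classifier on an existing recognition database, cascade them and
    fine-tune the whole network on the labeled SAR small-sample data."""
    g_opt = torch.optim.Adam(list(generator.parameters()) + list(classifier.parameters()), lr=lr)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=lr)
    for _ in range(epochs):
        for lr_img, hr_img, label in labeled_loader:
            # Update the discriminator on real HR vs. generated SR images.
            d_opt.zero_grad()
            d_loss = discriminator_loss(discriminator, hr_img, generator(lr_img))
            d_loss.backward()
            d_opt.step()
            # Update the generator and classifier with the combined loss.
            g_opt.zero_grad()
            loss = total_loss(generator, discriminator, classifier,
                              inception_features, lr_img, hr_img, label)
            loss.backward()
            g_opt.step()
    return generator, classifier
```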
The invention uses the super-resolution generative adversarial network together with the Inception-ResNet-V2 network and optimizes their network structures and loss functions, improving the accuracy of SAR small-sample identification while improving the sample quality.

Claims (7)

1. A SAR small sample identification method based on a super-resolution adversarial generation cascade network, characterized by comprising the following steps:
step 1: pre-training the SRGAN using a large database of unlabeled SAR images;
step 2: pre-training an Inception target recognition network using an existing target recognition database;
step 3: after the super-resolution network and the classification network are cascaded, training the whole network again using labeled data;
step 4: passing a new SAR target sample through the network to obtain a recognition result.
2. The SAR small sample identification method based on a super-resolution adversarial generation cascade network as claimed in claim 1, characterized in that in step 2 the Inception target recognition network is pre-trained using the existing target recognition database, a generator function {G(z)} is trained to generate the HR output image for a specified LR input, and the Inception network is trained to serve as the perception network for extracting high-level features from the final HR image, comprising the following steps:
step 1.1: inputting the low-resolution image LR into the generative network to obtain a generated high-resolution image SR;
step 1.2: inputting the high-resolution image SR and the original real image HR together into the discriminative network to obtain the accuracy with which the discriminative network distinguishes real from generated image samples, i.e., the accuracy of distinguishing the original real image HR from the generated high-resolution image SR;
step 1.3: feeding the result back to the generative model;
step 1.4: the generative model network and the discriminative model network forming an adversarial pair.
3. The SAR small sample identification method based on a super-resolution adversarial generation cascade network as claimed in claim 2, characterized in that in step 1.2 the high-resolution image SR and the original real image HR are input together into the discriminative network, wherein the super-resolution network is trained using a pixel-based MSE loss, computed as:

$$l_{MSE}^{SR}=\frac{1}{r^{2}WH}\sum_{x=1}^{rW}\sum_{y=1}^{rH}\left(I_{x,y}^{HR}-G_{\theta_G}\left(I^{LR}\right)_{x,y}\right)^{2}$$

where r is the magnification factor, W is the width of the image in pixels, H is the height of the image in pixels, $I_{x,y}^{HR}$ is the pixel value at the (x, y) coordinate of the original image, and $G_{\theta_G}(I^{LR})_{x,y}$ is the pixel value at the (x, y) coordinate of the generated image.
4. The SAR small sample identification method based on a super-resolution adversarial generation cascade network, characterized in that the feedback of step 1.3 to the generative model introduces an adversarial loss into the generative model, and a discriminative network is trained at the same time using the adversarial loss; besides the discriminative model loss, the adversarial loss is defined based on the probability of the discriminative network over all training samples, and is expressed as:

$$l_{Gen}^{SR}=\sum_{n=1}^{N}-\log D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)$$

where $D_{\theta_D}(G_{\theta_G}(I^{LR}))$ is the probability that the super-resolved image $G_{\theta_G}(I^{LR})$ is judged to be a real image by the adversarial network.
5. The SAR small sample identification method based on a super-resolution adversarial generation cascade network as claimed in claim 2, characterized in that in step 1.4 the generative model network and the discriminative model network form an adversarial pair, wherein a discriminator loss is produced when the discriminator makes its judgment, and the loss is calculated as:

$$l_{D}=-\log D_{\theta_D}\left(I^{HR}\right)-\log\left(1-D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)\right)$$

where $D_{\theta_D}(G_{\theta_G}(I^{LR}))$ is the probability that the super-resolved image $G_{\theta_G}(I^{LR})$ is judged to be a real image by the adversarial network, and $D_{\theta_D}(I^{HR})$ is the probability that the original image $I^{HR}$ is judged to be a real image.
6. The SAR small sample identification method based on a super-resolution adversarial generation cascade network as claimed in claim 1, characterized in that in step 2 the Inception target recognition network is pre-trained using the existing target recognition database, and the perceptual loss is computed using the Inception loss, expressed as:

$$l_{Inception/i,j}^{SR}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\left(\phi_{i,j}\left(I^{HR}\right)_{x,y}-\phi_{i,j}\left(G_{\theta_G}\left(I^{LR}\right)\right)_{x,y}\right)^{2}$$

where $W_{i,j}$ and $H_{i,j}$ respectively denote the dimensions of the corresponding feature map in the Inception network, $\phi_{i,j}$ denotes the sample features extracted after the i-th and before the j-th Inception module, $I^{HR}$ denotes the high-resolution picture, $I^{LR}$ denotes the low-resolution picture, and (x, y) indexes the pixel coordinates on the feature map.
7. The SAR small sample identification method based on a super-resolution adversarial generation cascade network as claimed in claim 1, characterized in that in step 3, after the super-resolution network and the classification network are cascaded, the whole network is trained again using the labeled data, wherein the Inception-based perception network takes the output of the intermediate layer as its input, the target recognition network takes the super-resolution picture output by the super-resolution generative adversarial network as its input, and the super-resolution generative adversarial network and the Inception-ResNet-V2 network are used together; the Inception-ResNet-V2 network comprises three Inception structures, namely Inception-A, Inception-B and Inception-C, and consists of a Stem layer, 4 Inception-A layers, a Reduction-A layer, 7 Inception-B layers, a Reduction-B layer and 3 Inception-C layers, finally passing through an aggregation layer, i.e., an average pooling layer, and Softmax to output the results, while Dropout is used in the network; the cross-entropy identification loss is expressed as:

$$L_{id}=-\sum_{m}q(m)\log p(m)=-\log p(y)$$

where p(m) denotes the probability that the label output by the model is m; since y is the true ID label, q(y) = 1 and q(m) = 0 for all m ≠ y; finally, all loss functions are merged into one, expressed as:

$$L_{total}=l_{MSE}^{SR}+l_{Inception}^{SR}+l_{Gen}^{SR}+L_{id}$$