CN109636742A - Mode conversion method for SAR images and visible-light images based on a generative adversarial network - Google Patents

Mode conversion method for SAR images and visible-light images based on a generative adversarial network Download PDF

Info

Publication number
CN109636742A
CN109636742A CN201811405188.2A
Authority
CN
China
Prior art keywords
image
sar image
visible images
input
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811405188.2A
Other languages
Chinese (zh)
Other versions
CN109636742B (en
Inventor
张瑞峰
刘长卫
李晖晖
郭雷
吴东庆
翟庆刚
汤剑
冯和军
杨岗军
韩太初
胡树正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PLA Air Force Research Institute
Northwestern Polytechnical University
Original Assignee
PLA Air Force Research Institute
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PLA Air Force Research Institute, Northwestern Polytechnical University
Priority to CN201811405188.2A priority Critical patent/CN109636742B/en
Publication of CN109636742A publication Critical patent/CN109636742A/en
Application granted granted Critical
Publication of CN109636742B publication Critical patent/CN109636742B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06T5/92
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image

Abstract

A mode conversion method based on a generative adversarial network for converting SAR images into visible-light images. First, the feature vector of a satellite image of the same location is extracted as the prior information of the SAR image; the prior information and the SAR image are then input together into a generator, which generates a visible-light image containing the targets of the SAR image. Next, a discriminator of the generative adversarial network is trained, using L_GAN(G_AB, D, A, B) = E_{b~B}[log D(b)] + E_{a~A}[log(1 - D(G_AB(a)))] as the discrimination loss. Finally, the trained generative adversarial network is checked for mode collapse, i.e., the error in which different SAR image inputs cause the generator to output largely the same visible-light image. A second generator is trained at the same time, and a generation loss is used to compare the feature similarity of the two images: L_GAN(G_AB, G_BA, A, B) = E_{a~A}[||G_BA(G_AB(a)) - a||_1]. When network training is complete, the curves of the discrimination loss and the generation loss level off: the discrimination loss no longer increases and the generation loss no longer decreases.

Description

Mode conversion method for SAR images and visible-light images based on a generative adversarial network
Technical field
The invention belongs to the field of image interpretation with deep learning and relates to a mode conversion method for SAR images and visible-light images based on a generative adversarial network.
Background art
Since 1978, the emergence of synthetic aperture radar (SAR) has triggered a massive revolution in radar technology. Its unrivaled all-weather, day-and-night capabilities, among many other advantages, and the resulting broad application prospects have attracted enormous attention from the radar research community. Successful SAR-related research constitutes a major theme of the current wave of technological change, and SAR systems of different wavebands, different polarizations, and even different resolutions continue to emerge. Without doubt, this great transformation has influenced every field of civilian and military application.
With the improvement of resolution, the data volume of SAR has increased dramatically, and information processing and application research based on manual work (such as target recognition) face many difficulties. First, over large areas, the task of detecting and identifying ground objects in SAR images by manual interpretation far exceeds the limits of rapid human judgment, so the resulting subjective errors and misinterpretations are inevitable. Second, the special imaging mechanism of SAR makes images very sensitive to the target's azimuth: a large difference in azimuth angle can produce an entirely different SAR image, which further increases the visual difference between SAR images and optical images and raises the difficulty of image interpretation. Third, with the continual improvement of SAR sensor resolution and the diversification of sensor modes, wavebands, and polarization modes, the target information in SAR images has grown explosively: targets that used to appear as point targets in single-channel, single-polarization, low-resolution images have become extended targets with rich fine detail and scattering features. On the one hand, this makes finer interpretation and identification of ground-object information possible; on the other hand, the variety and instability of ground-object features greatly increase. Traditional information-processing methods therefore no longer meet the needs of practical applications; the relevant key technologies must be tackled to accelerate data processing and improve the precision of information extraction.
Based on this, the present invention proposes a method based on a generative adversarial network that converts SAR images into an image mode with visible-light characteristics. Its main advantages include the following. First, when generating pictures with the CycleGAN network within the generative adversarial framework, we use not only the SAR image of the source image domain as input: we also extract the semantic features of a satellite image of the same location as prior information for the SAR image and input this prior information into the generator as a condition, so that the generated image not only has the style of visible light but also contains target information that cannot be seen in the SAR image. Second, to prevent mode collapse during the training of the generative adversarial network, we train two generators: the first generator produces the required visible-light image with targets from the SAR image, and the second generator translates the generated visible-light image back into a SAR image. During training we compute the generation loss between the original SAR image and the generated SAR image, and by continually reducing this generation loss we avoid the mode-collapse problem that is common when training generative adversarial networks.
Summary of the invention
Technical problems to be solved
When performing image processing on SAR images, conventional methods are on the one hand rather sensitive to models and demand high image quality, whereas SAR images have relatively low resolution and blurry quality, so when an image does not fit the model the results may be unsatisfactory. On the other hand, because phenomena such as the reflection, scattering, transmission, absorption, and radiation of targets are insufficiently understood, it is difficult to establish semi-empirical formulas or mathematical models between the characteristic signals of SAR images and the targets. For this reason, we propose a method based on a generative adversarial network that converts SAR images into an image mode with visible-light characteristics, so as to achieve target recognition and detection.
Technical solution
The basic idea of the invention is as follows: using the deep-learning method of generative adversarial networks (Generative Adversarial Network, GAN), an unsupervised learning network, CycleGAN, is trained to realize the conversion of low-resolution SAR images into the visible-light image mode. CycleGAN takes visible-light images as the target image domain and SAR images as the source image domain, learns a mapping from the source image domain to the target image domain, and realizes the conversion of source images into target images. CycleGAN comprises two generators, G1 and G2, and one discriminator D. Generator G1 realizes the conversion from the source image domain to the target image domain (i.e., SAR images are converted into visible-light-mode images); generator G2 realizes the conversion from the target domain to the source image domain (i.e., visible-light-mode images are converted into SAR images); the discriminator D is used to judge whether an input picture is a true visible-light image.
The method is characterized by the following steps:
Step 1: obtain the prior information of the SAR image. A neural-network-based method extracts the feature vector of a satellite image of the same location as the prior information of the SAR image, thereby making the targets in the visible-light image generated by the generative adversarial network clearer.
(1) Feature extraction from the satellite image: features are extracted by convolutional layers and compressed by pooling layers.
(a) Feature extraction by convolution: suppose the input feature map F_in of a convolutional layer has parameters W_in × H_in × C_in, where W_in denotes the width of the input feature map, H_in its height, and C_in its number of channels. The convolution parameters of the layer are K, S, P, and Stride: K denotes the number of convolution kernels, S the width and height of each kernel, P the zero-padding applied to the input feature map (for example, P = 1 means padding one ring of zeros around the input feature map), and Stride the sliding step of the kernel over the input feature map. The output feature map F_out of the convolutional layer then has parameters W_out × H_out × C_out, where W_out denotes the width of the output feature map, H_out its height, and C_out its number of channels, computed as follows:

W_out = (W_in + 2P - S)/Stride + 1,  H_out = (H_in + 2P - S)/Stride + 1,  C_out = K
At present, the size of the convolution kernel is generally 3 × 3 with P = 1 and Stride = 1, which guarantees that the sizes of the input feature map and the output feature map are the same.
(b) Feature compression by pooling: a max pooling layer is generally used, i.e., when down-sampling the feature map, the maximum value in each 2 × 2 grid is selected and passed to the output feature map. In the pooling operation, the numbers of channels of the input and output feature layers are unchanged, and the size of the output feature map is half the size of the input feature map.
Through the convolution and pooling operations described above, feature extraction from the satellite image can be completed.
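The size arithmetic of the convolution and pooling operations above can be sketched in a few lines of Python (an illustrative sketch, not part of the patented method; the output-size formula is the standard one, consistent with the statement that a 3 × 3 kernel with P = 1 and Stride = 1 preserves the input size):

```python
def conv_out(w_in, s, p, stride):
    """Output width/height of a convolution: (W_in + 2P - S) // Stride + 1."""
    return (w_in + 2 * p - s) // stride + 1

def pool_out(w_in):
    """2x2 max pooling halves the spatial size; the channel count is unchanged."""
    return w_in // 2

# A 3x3 kernel with P=1 and Stride=1 preserves a 256x256 input:
print(conv_out(256, s=3, p=1, stride=1))  # 256
# One 2x2 max-pooling step then halves it:
print(pool_out(256))  # 128
```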
Step 2: the generator generates a visible-light image. The designed generator has two input interfaces: the first interface takes the SAR image, and the second interface takes the feature vector of the satellite image extracted in Step 1. Then, through the action of an encoder, a converter, and a decoder, a visible-light image is generated.
(1) encoder
The SAR image is input into the encoder, which extracts the feature information of the SAR image and represents it as a feature vector. The encoder consists of three convolutional layers: a 7*7 convolutional layer with 32 filters and stride 1, a 3*3 convolutional layer with 64 filters and stride 2, and a 3*3 convolutional layer with 128 filters and stride 2. We input a SAR image of size [256, 256, 3] into the designed encoder; convolution kernels of different sizes in the encoder move over the input image and extract features, yielding a feature vector of size [64, 64, 256].
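As a quick sanity check on the encoder shapes, the spatial sizes can be traced through the three stated strides. The "same"-style padding assumed below (output = ceil(input / stride)) is our assumption for illustration, since the text does not state the padding of each layer:

```python
import math

def same_out(w_in, stride):
    """Spatial output size under 'same'-style padding: ceil(W_in / stride)."""
    return math.ceil(w_in / stride)

w = 256                    # the input SAR image is [256, 256, 3]
for stride in (1, 2, 2):   # 7x7/stride 1, 3x3/stride 2, 3x3/stride 2
    w = same_out(w, stride)
print(w)  # 64 -- consistent with the stated [64, 64, 256] encoder output
```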
(2) converter
The role of the converter is to combine the different nearby features of the SAR image and then, based on these features, decide how to convert them into the feature vector of an image in the target domain (a visible-light image). Since Step 1 has obtained the feature vector of the satellite image as the prior information of the SAR image, and the encoder has obtained the feature vector of the SAR image, the two different feature vectors must first be fused before they can serve as the feature input of the decoder. The converter is composed of several residual blocks; the purpose of a residual block is to ensure that the input data information of an earlier network layer acts directly on later network layers, so that the deviation between the corresponding output (the feature vector of the visible-light image) and the original input is reduced.
(3) decoder
Taking the feature vector of the target-domain image (the visible-light image) obtained by the converter as input, the decoder restores low-level features from the feature vector. The decoding process is exactly the reverse of the encoding process, and the entire decoding process uses transposed convolutional layers. Finally, these low-level features are converted to obtain an image in the target image domain, i.e., a visible-light image.
Step 3: the discriminator judges the visible-light image. The picture output by the generator is input into the trained discriminator D, which generates a score d. The closer the output is to an image in the target domain (i.e., a visible-light image), the closer the value of d is to 1; otherwise the value of d is closer to 0. The discriminator D thus judges whether the generated image is a visible-light image. The judgment of D is made by computing the discrimination loss.
The discrimination loss is:
L_GAN(G_AB, D, A, B) = E_{b~B}[log D(b)] + E_{a~A}[log(1 - D(G_AB(a)))]   (2)
where A is the source image domain (SAR images), B is the target image domain (visible-light images), a is a SAR image in the source image domain, b is a visible-light image in the target image domain, G_AB is the generator from source image domain A to target image domain B, and D is the discriminator. The training process makes the discrimination loss L_GAN(G_AB, D, A, B) as small as possible.
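A minimal Python sketch of the discrimination loss in formula (2), with the expectations estimated by batch means of discriminator scores (an illustrative sketch only, not the training code, which operates on full images):

```python
import math

def discrimination_loss(d_real, d_fake):
    """Batch-mean estimate of formula (2):
    E_b[log D(b)] + E_a[log(1 - D(G_AB(a)))]."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# A perfect discriminator (D(b)=1 on real, D(G_AB(a))=0 on fake) gives 0,
# the largest possible value; an undecided one (all scores 0.5) gives
# 2*log(0.5) ~= -1.386.
print(discrimination_loss([1.0], [0.0]))  # 0.0
print(discrimination_loss([0.5], [0.5]))  # -1.3862943611198906
```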
Step 4: verify the feature similarity of the generated picture. Since the discriminator D can only judge whether the picture produced by the generator has the visible-light image style, it cannot properly discriminate the target features in the picture. To prevent mode collapse, i.e., the generator acquiring a memory, another generator is trained at the same time, which converts the visible-light image generated by the generator of Step 2 back into a SAR image; the network architectures of the two generators are identical. The feature similarity of the generated picture is verified by computing the generation loss.
The generation loss is:
L_GAN(G_AB, G_BA, A, B) = E_{a~A}[||G_BA(G_AB(a)) - a||_1]   (3)
The generation loss measures the distance between the two SAR images (the ||·||_1 term is the L1 norm, i.e., the sum of the absolute differences between corresponding elements), where A is the source image domain (SAR images), B is the target image domain (visible-light images), G_AB is the generator from source image domain A to target image domain B, and G_BA is the generator from target image domain B to source image domain A. a is the SAR image in the source image domain, and G_BA(G_AB(a)) is the generated SAR image. During training, L_GAN(G_AB, G_BA, A, B) is made as small as possible.
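A minimal Python sketch of the generation loss in formula (3) on a toy flattened array (illustration only; the training code operates on [256, 256, 3] images):

```python
def generation_loss(a, a_cycled):
    """||G_BA(G_AB(a)) - a||_1: sum of absolute pixel differences between
    the original SAR image `a` and its reconstruction `a_cycled`."""
    return sum(abs(x - y) for x, y in zip(a_cycled, a))

a = [0.0] * 16                 # toy flattened 4x4 "SAR image"
print(generation_loss(a, a))   # 0.0 -- a perfect reconstruction
b = [x + 0.5 for x in a]       # every pixel off by 0.5
print(generation_loss(a, b))   # 8.0 -- 16 pixels, each off by 0.5
```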
Beneficial effects
The present invention proposes a mode conversion method that converts SAR images into visible-light images based on a generative adversarial network. The method effectively solves the problem that, because SAR images have relatively low resolution and blurry image quality, traditional model-based methods cannot effectively detect and identify the targets in SAR images. It retains the advantages of SAR images while making effective use of existing image-processing methods, reducing the limitations caused by SAR image-quality problems. The research cost is low, the method has considerable research value, and it plays a highly important role in the national economy and in the military field.
Brief description of the drawings
Fig. 1: overall framework of the method of the present invention.
Fig. 2(a): network structure of the generator in the generative adversarial network.
Fig. 2(b): network structure of the discriminator in the generative adversarial network.
Specific embodiment
The invention will now be further described in conjunction with an embodiment and with Fig. 1, Fig. 2(a), and Fig. 2(b):
The hardware environment of the experiments: CPU: Intel Xeon series; memory: 8 GB; hard disk: 500 GB mechanical hard disk; discrete graphics card: NVIDIA GeForce GTX 1080Ti, 11 GB. The system environment is Ubuntu 16.04; the software environment is Python 3.6 with TensorFlow-GPU. Experiments on measured data were carried out for the mode conversion method between SAR images and visible-light images. We first obtained aerial SAR images taken during military aircraft flights (about 5000 images of size 256*256), and then, using satellites, obtained satellite images of the same locations (5000 images of size 256*256). These were input into the network we built. The network was trained iteratively for 200,000 iterations with a base learning rate of 0.0002, the learning rate changing every 100,000 iterations; the Adam optimizer was used during training, and the model was saved every 10,000 iterations. Through actual testing, we found that the generated images not only have the properties of visible-light images but also display well the target properties that cannot be seen in the original SAR images.
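The stated training schedule (200,000 iterations, base learning rate 0.0002, rate changed every 100,000 iterations, model saved every 10,000 iterations) can be sketched as below. The text does not specify how the learning rate changes at the 100,000-iteration mark, so the halving used here is purely an assumption for illustration:

```python
# Illustrative sketch of the stated training schedule; the halving of the
# learning rate every 100,000 iterations is an assumption, not stated.
BASE_LR, CHANGE_EVERY, SAVE_EVERY = 0.0002, 100_000, 10_000

def learning_rate(step):
    return BASE_LR * 0.5 ** (step // CHANGE_EVERY)  # assumed halving

def should_save(step):
    return step > 0 and step % SAVE_EVERY == 0

print(learning_rate(0))     # 0.0002
print(should_save(10_000))  # True
print(should_save(10_001))  # False
```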
The specific implementation of the present invention is as follows:
Step 1: obtain the prior information of the SAR image. A neural-network-based method extracts the feature vector of a satellite image of the same location as the prior information of the aerial SAR image, thereby making the targets in the visible-light image generated by the generative adversarial network clearer.
(1) Feature extraction from the satellite image: features are extracted by convolutional layers and compressed by pooling layers.
(a) Feature extraction by convolution: in the example, the input feature map (i.e., the satellite image) F_in of the convolutional layer we design has parameters 256*256*3: 256 denotes the width of the input feature map, 256 its height, and 3 its number of channels. The convolution parameters of the layer are K, S, P, and Stride: K denotes the number of convolution kernels, S the width and height of each kernel, P the zero-padding applied to the input feature map (for example, P = 1 means padding one ring of zeros around the input feature map), and Stride the sliding step of the kernel over the input feature map. In the example, the convolution parameters of the convolutional layer we use are 64, 3, 1, and 1, respectively. The output feature map F_out of the convolutional layer then has parameters 256*256*64: 256 denotes the width of the output feature map, 256 its height, and 64 its number of channels, computed as follows:

W_out = (W_in + 2P - S)/Stride + 1,  H_out = (H_in + 2P - S)/Stride + 1,  C_out = K

where W_in, H_in, and C_in are the parameters of the input feature map, and W_out, H_out, and C_out are the parameters of the output feature map obtained after each convolutional layer.
(b) Feature compression by pooling: we use a max pooling layer to pool the output feature map obtained after convolution; i.e., when down-sampling the feature map, the maximum value in each 2 × 2 grid is selected and passed to the output feature map. In the pooling operation, the numbers of channels of the input and output feature layers are unchanged, and the size of the output feature map is half the size of the input feature map. In the experiment we apply pooling only after the first, third, and fourth convolutional layers.
Through the convolution and pooling operations described above, feature extraction from the image can be completed, yielding the feature vector of the satellite image of the same location; the size of this vector is [256, 256, 64].
Step 2: train the first generator to generate visible-light images. The generator we design has two input interfaces: the first interface takes the SAR image, which in the experiment has size 256*256*3; the second interface takes the feature vector of the satellite image extracted in Step 1. Then, through the action of the encoder, converter, and decoder, a visible-light image is generated.
(1) encoder
The SAR image is input into the encoder, which extracts the feature information of the SAR image and represents it as a feature vector. The encoder consists of three convolutional layers: a 7*7 convolutional layer with 32 filters and stride 1, a 3*3 convolutional layer with 64 filters and stride 2, and a 3*3 convolutional layer with 128 filters and stride 2. In the experiment, the output scale of the first convolution module is 256*256*64, the output scale of the second convolution module is 128*128*128, and the output scale of the third convolution module is 64*64*256. That is, we input a SAR image of size [256, 256, 3] into the designed encoder; convolution kernels of different sizes in the encoder move over the input image and extract features, finally yielding a feature vector of size [64, 64, 256].
(2) converter
The role of the converter is to combine the different nearby features of the SAR image and then, based on these features, decide how to convert them into the feature vector of an image in the target domain (a visible-light image). Since Step 1 has obtained the prior information of the SAR image, i.e., the feature vector of the satellite image, and the encoder has obtained the feature vector of the SAR image, the two different feature vectors must first be fused and then used as the feature input of the decoder. The converter is composed of several residual blocks; the purpose of a residual block is to ensure that the input data information of an earlier network layer acts directly on later network layers, so that the deviation between the corresponding output (the feature vector of the visible-light image) and the original input is reduced. We use 9 residual blocks in the experiment; each residual block consists of two 3*3 convolutional layers with 256 filters and stride 1, and the output scale of the 9th residual block is 64*64*256.
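The skip connection that gives a residual block its property of passing an earlier layer's information directly to later layers can be illustrated with a toy sketch (the inner function F below is a placeholder standing in for the block's two 3*3 convolutions, not an implementation of them):

```python
# Toy residual block: output = input + F(input). The skip connection is what
# lets an earlier layer's information act directly on later layers, and it
# forces the output shape to match the input shape.
def residual_block(x, f):
    return [xi + fi for xi, fi in zip(x, f(x))]

x = [1.0] * 8                                           # stand-in feature vector
y = residual_block(x, lambda t: [0.25 * v for v in t])  # placeholder F
print(len(y) == len(x))  # True: a residual block preserves the shape
print(y[0])              # 1.25 = 1.0 + F(1.0)
```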
(3) decoder
Taking the feature vector of the target-domain image (the visible-light image) obtained by the converter as input, the decoder restores low-level features from the feature vector. The decoding process is exactly the reverse of the encoding process, and the entire decoding process uses transposed convolutional layers. Finally, these low-level features are converted to obtain an image in the target image domain, i.e., a visible-light image. In the experiment we define three transposed-convolution modules; each module receives the output of the previous module as input, and the first transposed-convolution module receives the output of the 9th residual block of the converter as input. The first transposed-convolution module consists of a 3*3 convolutional layer with 128 filters and stride 2, with output scale 128*128*128; the second transposed-convolution module consists of a 3*3 convolutional layer with 64 filters and stride 2, with output scale 256*256*64; the third transposed-convolution module consists of a 7*7 convolutional layer with 3 filters and stride 1, with output scale 256*256*3. Finally, the generated output is activated by the tanh function.
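The spatial sizes of the three transposed-convolution modules can be traced as below; the rule output = input × stride assumes "same"-style padding, which is our assumption for illustration since the text does not state the padding:

```python
def deconv_out(w_in, stride):
    """Spatial output of a transposed convolution under 'same'-style
    padding: W_in * stride (an assumption; the text states no padding)."""
    return w_in * stride

w, sizes = 64, []            # the converter output is [64, 64, 256]
for stride in (2, 2, 1):     # 3x3/stride 2 -> 3x3/stride 2 -> 7x7/stride 1
    w = deconv_out(w, stride)
    sizes.append(w)
print(sizes)  # [128, 256, 256] -- matches the stated module output scales
```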
Step 3: train the discriminator to judge visible-light images. The picture output by the generator is input into the trained discriminator D, which generates a score d. The closer the output is to a target-domain image (i.e., a visible-light image), the closer the value of d is to 1; otherwise the value of d is closer to 0. The discriminator D thus judges whether the generated image is a visible-light image. The judgment of D is made by computing the discrimination loss.
The discrimination loss is:
L_GAN(G_AB, D, A, B) = E_{b~B}[log D(b)] + E_{a~A}[log(1 - D(G_AB(a)))]   (5)
where A is the source image domain (SAR images), B is the target image domain (visible-light images), a is a SAR image in the source image domain, b is a visible-light image in the target image domain, G_AB is the generator from source image domain A to target image domain B, and D is the discriminator. The training process makes the discrimination loss L_GAN(G_AB, D, A, B) as small as possible.
In the experiment we designed 5 convolution modules for the discriminator; the last convolution module is followed by a sigmoid layer that constrains the output to the interval from 0 to 1. The first convolution module consists of a 4*4 convolutional layer with 64 filters and stride 2, with output scale 128*128*64; the second convolution module consists of a 4*4 convolutional layer with 128 filters and stride 2, with output scale 64*64*128; the third convolution module consists of a 4*4 convolutional layer with 256 filters and stride 2, with output scale 32*32*256; the fourth convolution module consists of a 4*4 convolutional layer with 512 filters and stride 1, with output scale 32*32*512; the fifth convolution module consists of a 4*4 convolutional layer with 1 filter and stride 1, with output scale 32*32*1. We input the output of the fifth convolution module into the sigmoid layer; after activation by the sigmoid function, the final output is obtained.
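The spatial sizes of the five discriminator modules can be traced in the same way; the "same"-style padding assumed here (output = ceil(input / stride)) is our assumption for illustration, chosen because it reproduces the stated output scales:

```python
import math

def same_out(w_in, stride):
    """Spatial output size under 'same'-style padding: ceil(W_in / stride)."""
    return math.ceil(w_in / stride)

w, trace = 256, []
for stride in (2, 2, 2, 1, 1):   # the five 4x4 modules' strides
    w = same_out(w, stride)
    trace.append(w)
print(trace)  # [128, 64, 32, 32, 32] -- the final 32x32x1 map feeds sigmoid
```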
Step 4: train the second generator and verify the feature similarity of the generated picture. Since the discriminator D can only judge whether the picture produced by the generator has the visible-light style, it cannot properly discriminate the target features in the picture. To prevent mode collapse, i.e., the generator acquiring a memory, another generator is trained at the same time, which converts the visible-light image generated by the generator in Step 2 back into a SAR image. The feature similarity of the generated picture is verified by computing the generation loss.
The generation loss is:
L_GAN(G_AB, G_BA, A, B) = E_{a~A}[||G_BA(G_AB(a)) - a||_1]   (6)
The generation loss measures the distance between the two SAR images (the ||·||_1 term is the L1 norm, i.e., the sum of the absolute differences between corresponding elements), where A is the source image domain (SAR images), B is the target image domain (visible-light images), G_AB is the generator from source image domain A to target image domain B, and G_BA is the generator from target image domain B to source image domain A. a is the original SAR image, and G_BA(G_AB(a)) is the generated SAR image. During training, L_GAN(G_AB, G_BA, A, B) is made as small as possible.
In the experiment, the two generators we designed have identical network architectures (detailed in Step 2). During the experiment, we recorded the logs of three generation losses. The first computes the generation loss of producing a visible-light image from a SAR image, denoted by L_GAN(G_AB, A, B) = E_{a~A}[||G_AB(a) - a||_1]; the second computes the generation loss of reconstructing the SAR image from the visible-light image, denoted by L_GAN(G_AB, G_BA, A, B) = E_{a~A}[||G_BA(G_AB(a)) - G_AB(a)||_1]; the third is the generation loss of the entire generator, denoted by L_GAN(G_AB, G_BA, A, B) = E_{a~A}[||G_BA(G_AB(a)) - a||_1]. Here A is the source image domain (SAR images), B is the target image domain (visible-light images), G_AB is the generator from source image domain A to target image domain B, and G_BA is the generator from target image domain B to source image domain A. a is the original SAR image, G_AB(a) is the generated visible-light image, and G_BA(G_AB(a)) is the generated SAR image.
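The three logged losses can be illustrated with toy constant arrays (the values below are placeholders, not real network outputs; real logging would average the L1 norm over [256, 256, 3] image batches):

```python
def mean_l1(x, y):
    """Mean absolute difference, standing in for E[|| . ||_1]."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

# Toy constant "images" (placeholders, not real network outputs):
a      = [0.25]  * 16   # original SAR image a
g_ab_a = [0.5]   * 16   # G_AB(a), the generated visible-light image
cycled = [0.375] * 16   # G_BA(G_AB(a)), the regenerated SAR image

loss_forward = mean_l1(g_ab_a, a)       # first logged loss
loss_back    = mean_l1(cycled, g_ab_a)  # second logged loss
loss_cycle   = mean_l1(cycled, a)       # third logged loss, formula (6)
print(loss_forward, loss_back, loss_cycle)  # 0.25 0.125 0.125
```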

Claims (10)

1. A mode conversion method for SAR images and visible-light images based on a generative adversarial network, characterized by comprising the following steps:
Step 1: obtain the prior information of the SAR image:
a neural-network-based method extracts the feature vector of a satellite image of the same location as the prior information of the SAR image, thereby making the targets in the visible-light image generated by the generative adversarial network clearer;
Step 2: the generator generates a visible-light image:
the designed generator has two input interfaces: the first interface takes the SAR image, and the second interface takes the feature vector of the satellite image extracted in Step 1; then, through the action of an encoder, a converter, and a decoder, a visible-light image is generated;
Step 3: the discriminator judges the visible-light image:
the picture output by the generator is input into the trained discriminator D, which generates a score d; the closer the output is to an image in the target domain, the closer the value of d is to 1; otherwise the value of d is closer to 0; the discriminator D thus judges whether the generated image is a visible-light image; the judgment of the discriminator D is made by computing a discrimination loss;
Step 4: verify the feature similarity of the generated picture: since the discriminator D can only judge whether the picture produced by the generator has the visible-light image style, it cannot properly discriminate the target features in the picture; to prevent mode collapse, i.e., the generator acquiring a memory, another generator is trained at the same time, which converts the visible-light image generated by the generator of Step 2 back into a SAR image, the network architectures of the two generators being identical; the feature similarity of the generated picture is verified by computing a generation loss.
2. The mode conversion method for SAR images and visible-light images based on a generative adversarial network according to claim 1, characterized in that: the feature extraction of the satellite image extracts features through convolutional layers and compresses the features through pooling layers;
feature extraction by convolution: suppose the input feature map F_in of a convolutional layer has parameters W_in × H_in × C_in, where W_in denotes the width of the input feature map, H_in its height, and C_in its number of channels; the convolution parameters of the layer are K, S, P, and Stride, where K denotes the number of convolution kernels, S the width and height of each kernel, P the zero-padding applied to the input feature map (P = 1 means padding one ring of zeros around the input feature map), and Stride the sliding step of the kernel over the input feature map; the output feature map F_out of the convolutional layer then has parameters W_out × H_out × C_out, where W_out denotes the width of the output feature map, H_out its height, and C_out its number of channels, computed as follows:

W_out = (W_in + 2P - S)/Stride + 1,  H_out = (H_in + 2P - S)/Stride + 1,  C_out = K
Pooling layer compresses features: a max pooling layer is used, i.e., when down-sampling the feature map, the largest value in each 2 × 2 grid is transferred to the output feature map.
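The convolution size arithmetic and 2 × 2 max pooling of claim 2 can be illustrated with a short sketch (an editorial illustration; the function names and toy data below are not part of the patent):

```python
import numpy as np

def conv_output_size(w_in, h_in, k, s, p, stride):
    """Output dimensions of a convolutional layer:
    W_out = (W_in - S + 2P) / Stride + 1 (same for the height); C_out = K."""
    w_out = (w_in - s + 2 * p) // stride + 1
    h_out = (h_in - s + 2 * p) // stride + 1
    return w_out, h_out, k

def max_pool_2x2(feat):
    """2x2 max pooling: transfer the largest value of each 2x2 grid
    to the output feature map, halving the spatial size."""
    h, w = feat.shape
    return feat.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A 3x3 kernel with P=1, Stride=1 preserves the feature-map size (claim 3):
print(conv_output_size(256, 256, k=64, s=3, p=1, stride=1))  # (256, 256, 64)

# Pooling halves the size while leaving the channel count untouched (claim 4):
feat = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(feat))  # [[ 5.  7.]  [13. 15.]]
```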
3. The mode conversion method of a SAR image and visible light image based on a countermeasure generation network according to claim 2, characterized in that: the size of the convolution kernel is 3 × 3, P = 1, Stride = 1, which keeps the sizes of the input feature map and the output feature map consistent.
4. The mode conversion method of a SAR image and visible light image based on a countermeasure generation network according to claim 2, characterized in that: in the pooling operation, the numbers of channels of the input and output feature layers are unchanged, and the size of the output feature map is half the size of the input feature map.
5. The mode conversion method of a SAR image and visible light image based on a countermeasure generation network according to claim 1, characterized in that: the SAR image is input into an encoder, which extracts the characteristic information of the SAR image and represents it as a feature vector.
6. The mode conversion method of a SAR image and visible light image based on a countermeasure generation network according to claim 1 or 5, characterized in that: the encoder consists of three convolutional layers, namely a 7*7 convolutional layer with 32 filters and stride 1, a 3*3 convolutional layer with 64 filters and stride 2, and a 3*3 convolutional layer with 128 filters and stride 2; a SAR image of size [256, 256, 3] is input into the designed encoder, in which convolution kernels of different sizes slide over the input image and extract features, yielding a feature vector of size [64, 64, 256].
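The shape propagation through the three encoder layers of claim 6 can be checked with a small sketch. The padding values below are assumptions, since the claim does not state them; note that with 128 filters the layer stack itself produces a [64, 64, 128] tensor, and the [64, 64, 256] feature vector stated in the claim is consistent with fusing it with the prior feature vector described in claim 7 (my reading, not stated explicitly):

```python
def conv_shape(shape, kernel, filters, stride, pad):
    """Spatial size after a convolution: (W - K + 2P) // Stride + 1."""
    w, h, _ = shape
    return ((w - kernel + 2 * pad) // stride + 1,
            (h - kernel + 2 * pad) // stride + 1,
            filters)

shape = (256, 256, 3)                    # input SAR image
shape = conv_shape(shape, 7, 32, 1, 3)   # 7x7, 32 filters, stride 1 -> (256, 256, 32)
shape = conv_shape(shape, 3, 64, 2, 1)   # 3x3, 64 filters, stride 2 -> (128, 128, 64)
shape = conv_shape(shape, 3, 128, 2, 1)  # 3x3, 128 filters, stride 2 -> (64, 64, 128)
print(shape)  # (64, 64, 128)
```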
7. The mode conversion method of a SAR image and visible light image based on a countermeasure generation network according to claim 1, characterized in that: since step 1 obtained the feature vector of the satellite image as prior information for the SAR image and the encoder obtained the feature vector of the SAR image, the two different feature vectors must first be fused before they can serve as the feature input of the decoder; the decoder is composed of several residual blocks, a residual block ensuring that the information of the input data of the previous network layer acts directly on subsequent network layers, so that the deviation between the corresponding output and the original input is reduced.
8. The mode conversion method of a SAR image and visible light image based on a countermeasure generation network according to claim 1, characterized in that: the feature vector of the image in the target domain obtained by the converter is taken as input; the decoder receives this input and recovers low-level features from the feature vector, the decoding process being exactly the opposite of the encoding process, with the entire decoding process using transposed convolutional layers; finally, these low-level features are converted to obtain an image in the target image domain, i.e., a visible light image.
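The transposed convolutions of the decoder reverse the encoder's shape arithmetic. A sketch follows; the layer parameters below are assumptions chosen to mirror the encoder of claim 6, not values stated in the patent:

```python
def conv_transpose_shape(shape, kernel, filters, stride, pad, out_pad=0):
    """Spatial size after a transposed convolution:
    W_out = (W_in - 1) * Stride - 2P + K + output_padding."""
    w, h, _ = shape
    return ((w - 1) * stride - 2 * pad + kernel + out_pad,
            (h - 1) * stride - 2 * pad + kernel + out_pad,
            filters)

shape = (64, 64, 128)                                # feature vector from the converter
shape = conv_transpose_shape(shape, 3, 64, 2, 1, 1)  # -> (128, 128, 64)
shape = conv_transpose_shape(shape, 3, 32, 2, 1, 1)  # -> (256, 256, 32)
shape = conv_transpose_shape(shape, 7, 3, 1, 3)      # -> (256, 256, 3), the visible image
print(shape)  # (256, 256, 3)
```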
9. The mode conversion method of a SAR image and visible light image based on a countermeasure generation network according to claim 1, characterized in that: the discrimination loss is:
L_GAN(G_AB, D, A, B) = E_{b~B}[log D(b)] + E_{a~A}[log(1 − D(G_AB(a)))] (2)
where A is the source image domain, B is the target image domain, a is a SAR image in the source image domain, b is a visible light image in the target image domain, G_AB is the generator from source image domain A to target image domain B, and D is the discriminator; the training process makes the discrimination loss L_GAN(G_AB, D, A, B) as small as possible.
10. The mode conversion method of a SAR image and visible light image based on a countermeasure generation network according to claim 1, characterized in that: the generation loss is:
L_GAN(G_AB, G_BA, A, B) = E_{a~A}[||G_BA(G_AB(a)) − a||_1] (3)
The generation loss computes the L1 distance between the two SAR images, where A is the source image domain, B is the target image domain, G_AB is the generator from source image domain A to target image domain B, and G_BA is the generator from target image domain B to source image domain A; a is the SAR image in the source image domain, and G_BA(G_AB(a)) is the generated SAR image; during training, L_GAN(G_AB, G_BA, A, B) is made as small as possible.
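The discrimination loss (2) and generation loss (3) can be sketched numerically (an editorial sketch; the function names and toy arrays are illustrative, not part of the patent):

```python
import numpy as np

def discrimination_loss(d_real, d_fake, eps=1e-8):
    """Eq. (2): E_b[log D(b)] + E_a[log(1 - D(G_AB(a)))], averaged over a batch."""
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

def generation_loss(a, a_cycled):
    """Eq. (3): E_a[ || G_BA(G_AB(a)) - a ||_1 ], the cycle-consistency term."""
    return np.mean(np.abs(a_cycled - a))

a = np.random.rand(4, 8, 8)    # batch of toy "SAR images"
print(generation_loss(a, a))   # 0.0: a perfect cycle gives zero generation loss
d_real = np.array([0.9, 0.8])  # discriminator scores near 1 on real visible images
d_fake = np.array([0.1, 0.2])  # scores near 0 on generated images
print(discrimination_loss(d_real, d_fake))
```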
CN201811405188.2A 2018-11-23 2018-11-23 Mode conversion method of SAR image and visible light image based on countermeasure generation network Active CN109636742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811405188.2A CN109636742B (en) 2018-11-23 2018-11-23 Mode conversion method of SAR image and visible light image based on countermeasure generation network

Publications (2)

Publication Number Publication Date
CN109636742A true CN109636742A (en) 2019-04-16
CN109636742B CN109636742B (en) 2020-09-22

Family

ID=66069278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811405188.2A Active CN109636742B (en) 2018-11-23 2018-11-23 Mode conversion method of SAR image and visible light image based on countermeasure generation network

Country Status (1)

Country Link
CN (1) CN109636742B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101405435B1 (en) * 2012-12-14 2014-06-11 한국항공우주연구원 Method and apparatus for blending high resolution image
CN105809194A (en) * 2016-03-08 2016-07-27 华中师范大学 Method for translating SAR image into optical image
CN108510532A (en) * 2018-03-30 2018-09-07 西安电子科技大学 Optical and SAR image registration method based on deep convolutional GAN
CN108564606A (en) * 2018-03-30 2018-09-21 西安电子科技大学 Heterogeneous image block matching method based on image transformation
CN108717698A (en) * 2018-05-28 2018-10-30 深圳市唯特视科技有限公司 High-quality image generation method based on deep convolutional generative adversarial network

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188667A (en) * 2019-05-28 2019-08-30 复旦大学 Face rectification method based on three-party confrontation generation network
CN110188667B (en) * 2019-05-28 2020-10-30 复旦大学 Face rectification method based on three-party confrontation generation network
CN110197517A (en) * 2019-06-11 2019-09-03 常熟理工学院 SAR image coloring method based on multi-domain cycle consistency countermeasure generation network
CN110197517B (en) * 2019-06-11 2023-01-31 常熟理工学院 SAR image coloring method based on multi-domain cycle consistency countermeasure generation network
CN110210574A (en) * 2019-06-13 2019-09-06 中国科学院自动化研究所 Synthetic aperture radar image interpretation method, target identification device and equipment
CN110210574B (en) * 2019-06-13 2022-02-18 中国科学院自动化研究所 Synthetic aperture radar image interpretation method, target identification device and equipment
CN110472627A (en) * 2019-07-02 2019-11-19 五邑大学 End-to-end SAR image recognition method, device and storage medium
CN110472627B (en) * 2019-07-02 2022-11-08 五邑大学 End-to-end SAR image recognition method, device and storage medium
CN110363163A (en) * 2019-07-18 2019-10-22 电子科技大学 SAR target image generation method with controllable azimuth angle
CN110363163B (en) * 2019-07-18 2021-07-13 电子科技大学 SAR target image generation method with controllable azimuth angle
US11657475B2 (en) * 2019-07-22 2023-05-23 Raytheon Company Machine learned registration and multi-modal regression
US20210027417A1 (en) * 2019-07-22 2021-01-28 Raytheon Company Machine learned registration and multi-modal regression
GB2595122A (en) * 2019-08-13 2021-11-17 Univ Of Hertfordshire Higher Education Corporation Method and apparatus
GB2586245B (en) * 2019-08-13 2021-09-22 Univ Of Hertfordshire Higher Education Corporation Method and apparatus
GB2595122B (en) * 2019-08-13 2022-08-24 Univ Of Hertfordshire Higher Education Corporation Method and apparatus
GB2586245A (en) * 2019-08-13 2021-02-17 Univ Of Hertfordshire Higher Education Corporation Method and apparatus
CN111047525A (en) * 2019-11-18 2020-04-21 宁波大学 Method for translating SAR remote sensing image into optical remote sensing image
CN112330562B (en) * 2020-11-09 2022-11-15 中国人民解放军海军航空大学 Heterogeneous remote sensing image transformation method and system
CN112330562A (en) * 2020-11-09 2021-02-05 中国人民解放军海军航空大学 Heterogeneous remote sensing image transformation method and system
WO2022120661A1 (en) * 2020-12-09 2022-06-16 深圳先进技术研究院 Priori-guided network for multi-task medical image synthesis
US11915401B2 (en) 2020-12-09 2024-02-27 Shenzhen Institutes Of Advanced Technology Apriori guidance network for multitask medical image synthesis
CN112668621A (en) * 2020-12-22 2021-04-16 南京航空航天大学 Image quality evaluation method and system based on cross-source image translation
CN112668621B (en) * 2020-12-22 2023-04-18 南京航空航天大学 Image quality evaluation method and system based on cross-source image translation
CN113361508A (en) * 2021-08-11 2021-09-07 四川省人工智能研究院(宜宾) Cross-view-angle geographic positioning method based on unmanned aerial vehicle-satellite
CN114202679A (en) * 2021-12-01 2022-03-18 昆明理工大学 Automatic labeling method for heterogeneous remote sensing image based on GAN network
CN115186814B (en) * 2022-07-25 2024-02-13 南京慧尔视智能科技有限公司 Training method, training device, electronic equipment and storage medium of countermeasure generation network
CN115186814A (en) * 2022-07-25 2022-10-14 南京慧尔视智能科技有限公司 Training method and device for confrontation generation network, electronic equipment and storage medium
CN117611644A (en) * 2024-01-23 2024-02-27 南京航空航天大学 Method, device, medium and equipment for converting visible light image into SAR image

Also Published As

Publication number Publication date
CN109636742B (en) 2020-09-22

Similar Documents

Publication Publication Date Title
CN109636742A (en) The SAR image of network and the mode conversion method of visible images are generated based on confrontation
CN109492556B (en) Synthetic aperture radar target identification method for small sample residual error learning
CN106355151B (en) A kind of three-dimensional S AR images steganalysis method based on depth confidence network
CN110363215B (en) Method for converting SAR image into optical image based on generating type countermeasure network
CN111814875B (en) Ship sample expansion method in infrared image based on pattern generation countermeasure network
CN111062880B (en) Underwater image real-time enhancement method based on condition generation countermeasure network
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN108229404A (en) A kind of radar echo signal target identification method based on deep learning
CN108509910A (en) Deep learning gesture identification method based on fmcw radar signal
CN105869166B (en) A kind of human motion recognition method and system based on binocular vision
CN111160268A (en) Multi-angle SAR target recognition method based on multi-task learning
CN109948467A (en) Method, apparatus, computer equipment and the storage medium of recognition of face
CN108681725A (en) A kind of weighting sparse representation face identification method
CN107316004A (en) Space Target Recognition based on deep learning
CN109886135A (en) A kind of low resolution face identification method, device and storage medium
CN109410149A (en) A kind of CNN denoising method extracted based on Concurrent Feature
CN108230280A (en) Image speckle noise minimizing technology based on tensor model and compressive sensing theory
CN110096976A (en) Human behavior micro-Doppler classification method based on sparse migration network
CN112163998A (en) Single-image super-resolution analysis method matched with natural degradation conditions
CN112686826A (en) Marine search and rescue method in severe weather environment
CN115731597A (en) Automatic segmentation and restoration management platform and method for mask image of face mask
Yang et al. Research on digital camouflage pattern generation algorithm based on adversarial autoencoder network
Sun et al. Remote sensing images dehazing algorithm based on cascade generative adversarial networks
CN112446835A (en) Image recovery method, image recovery network training method, device and storage medium
CN109344916A (en) A kind of microwave imaging of field-programmable deep learning and target identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant