NL2032891B1 - ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN - Google Patents
ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN
- Publication number
- NL2032891B1 (application NL2032891A)
- Authority
- NL
- Netherlands
- Prior art keywords
- self
- image
- cyclegan
- scansar
- attention mechanism
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 37
- 230000007246 mechanism Effects 0.000 title claims abstract description 26
- 230000000694 effects Effects 0.000 title abstract description 37
- 230000005764 inhibitory process Effects 0.000 title description 2
- 238000012549 training Methods 0.000 claims abstract description 16
- 238000003062 neural network model Methods 0.000 claims abstract description 10
- 238000013528 artificial neural network Methods 0.000 claims description 10
- 238000012360 testing method Methods 0.000 claims description 6
- 238000006243 chemical reaction Methods 0.000 claims description 5
- 230000001629 suppression Effects 0.000 abstract description 3
- 230000006870 function Effects 0.000 description 9
- 230000008569 process Effects 0.000 description 9
- 238000010586 diagram Methods 0.000 description 8
- 238000012545 processing Methods 0.000 description 7
- 230000002159 abnormal effect Effects 0.000 description 6
- 230000004913 activation Effects 0.000 description 5
- 238000003384 imaging method Methods 0.000 description 3
- 238000003672 processing method Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 238000012805 post-processing Methods 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000000737 periodic effect Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
- G01S13/904—SAR modes
- G01S13/9056—Scan SAR mode
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/417—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
ABSTRACT — A ScanSAR image scallop effect suppression method based on a self-attention mechanism and CycleGAN. The method comprises the following steps: S1, constructing a ScanSAR image data set; S2, constructing an adversarial generative network model; S3, inputting the data set into the constructed neural network model for training; and S4, inputting the ScanSAR image with the scallop effect into the network model trained in step S3. According to the ScanSAR image scallop effect suppression method based on the self-attention mechanism and CycleGAN, the scallop effect of the ScanSAR image is processed. On the basis of CycleGAN, a novel cycle-consistent adversarial generative network model with long-range dependence is formed by combining a self-attention mechanism. The method can more effectively eliminate the scallop-effect fringe phenomenon of the image, so that the image quality is obviously improved.
Description
ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN
The invention belongs to the technical field of image processing, and in particular relates to a method for suppressing the scallop effect of ScanSAR images based on a self-attention mechanism and CycleGAN.
Background Art
Synthetic Aperture Radar (SAR) is an active space microwave remote sensing technology, and Scanning Synthetic Aperture Radar (ScanSAR) is one of its important working modes: by scanning multiple sub-swaths, it achieves a larger mapping swath. Due to the scanning mechanism of ScanSAR, its system function is time-varying, and the intensity of the received echo signal changes periodically with the azimuth and range positions, resulting in serious inhomogeneity of ScanSAR images. Scalloping is one of the main causes of this inhomogeneity and is an inherent phenomenon of the ScanSAR mode. After the imaging process, since the accumulated energy is higher in the middle part and lower at the edge, the phenomenon appears as alternating bright and dark stripes. The existence of the scallop effect greatly reduces image quality and increases the difficulty of image interpretation. Because this problem is caused by system characteristics, it is very difficult to overcome in hardware, so it is necessary to find a solution in the software algorithm.
The existing methods for removing the scalloping effect can be divided into two categories. The first category is applied to the processing of SAR raw signals, that is, it is usually incorporated into the level-0 SAR processing chain. For example, Bamler removes the scalloping effect by constructing a weighting function related to the azimuth position; this function can accurately compensate the periodic fluctuation of ScanSAR echo intensity in the azimuth direction, correcting the scalloping effect while keeping the local signal-to-noise ratio constant in azimuth. Shimada uses the JERS-1 imaging satellite to perform radiometric and geometric calibration, accurately calculates the satellite's directional pattern and energy variation law, and performs more accurate energy fluctuation corrections on this basis.
The second category consists of methods based entirely on image post-processing. Romeiser proposed a Fourier-domain adaptive filter, a computationally and logically complex process applied to C-band ScanSAR images for wind field estimation: the Doppler center is found through multiple iterations, and the energy fluctuations are gradually corrected. M. Iqbal proposed a Kalman-filter-based method for removing the scallop effect, which achieved good stripe removal in both simulation and real-data experiments. Sorrentino extended the idea to the frequency domain and separates the ScanSAR fringe information through the wavelet transform.
The biggest problem of the first category is that it requires prior information about the radar sensor as support and cannot correct the problem image alone; when the SAR raw signal or the sensor's prior information cannot be obtained, this category of methods is unavailable. The second category is based on post-processing of the image: a series of operations on the image in the spatial or frequency domain requires a great deal of logical derivation and calculation. As a result, the stability of scallop-effect removal for images from different platforms and different imaging modes is not high.
The invention provides a method for suppressing the scallop effect of ScanSAR images based on a self-attention mechanism and CycleGAN, comprising the following steps:
S1: Crop the ScanSAR image to construct a dataset;
S2: Based on the CycleGAN model combined with the self- attention mechanism, a cycle-consistent adversarial generative network model with long-distance dependence is constructed;
S3: Input the prepared training data set into the neural network model constructed in step 2 for training;
S4: Input the ScanSAR image with the scallop effect into the network model trained in step 3, and then the streak phenomenon of the scallop effect can be eliminated.
Further, in S1 the ScanSAR image is cropped to construct a data set: the image is cropped into sub-images of size 512×512, and the sub-images are classified according to whether the scallop effect is present. The sub-images with the scallop effect form the X-class image data set, and the normal sub-images form the Y-class image data set; the constructed data set is divided into a training set and a test set at a ratio of 9:1.
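As an illustration, the cropping and 9:1 split described above can be sketched as follows. The tile stride, image size, and random seed are illustrative assumptions, not values from the patent (which only specifies the 512×512 tile size, optional overlap, and the 9:1 ratio):

```python
import numpy as np

def crop_tiles(image, tile=512, stride=256):
    """Crop an image into tile x tile sub-images; stride < tile
    allows the partial overlap mentioned in the patent."""
    h, w = image.shape
    tiles = []
    for r in range(0, h - tile + 1, stride):
        for c in range(0, w - tile + 1, stride):
            tiles.append(image[r:r + tile, c:c + tile])
    return tiles

def split_9_to_1(items, seed=0):
    """Shuffle and split a list 9:1 into training and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))
    cut = int(round(0.9 * len(items)))
    return [items[i] for i in idx[:cut]], [items[i] for i in idx[cut:]]

img = np.zeros((1024, 1536), dtype=np.float32)  # stand-in for a ScanSAR scene
tiles = crop_tiles(img)
train, test = split_9_to_1(tiles)
```

The tiles would then be sorted into X-class (scalloped) and Y-class (normal) sets by visual screening before the split.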
Further, the S2: based on the CycleGAN model combined with the self-attention mechanism, constructing a cycle-consistent adversarial generative network model with long-distance dependencies, includes the following steps:
S301: constructing the generator neural network;
S302: constructing the discriminator neural network.
Further, the generator neural network includes: an encoding part, a first self-attention module, an image conversion part, a second self-attention module, and a decoding part.
Further, the neural network model of the discriminator has 5 convolution layers with an input channel of 1: the number of filters in the first convolution layer is 13 and the step size is 2; the number of filters in the second convolution layer is 26 and the step size is 2; the number of filters in the third convolution layer is 52 and the step size is 2; the number of filters in the fourth convolution layer is 104 and the step size is 1; the number of filters in the fifth convolution layer is 1 and the step size is 1; the padding is 1 for all layers.
Further, the CycleGAN network learning strategy of the neural network model is:
G_X*, G_Y* = arg min_{G_X, G_Y} max_{D_X, D_Y} L(G_X, G_Y, D_X, D_Y)
The advantages of the present invention are: the present invention provides the method for suppressing the scallop effect of the ScanSAR image based on the self-attention mechanism and CycleGAN, and processes the scallop effect of the ScanSAR image. On the basis of CycleGAN, combined with the self-attention mechanism, a novel cycle-consistent adversarial generative network model with long-range dependencies is formed. The network model can learn the characteristics of images and process images with the scalloping effect using only image data. Compared with traditional scallop effect processing methods, the present invention can more effectively eliminate the image scallop effect streak phenomenon, so that the image quality is obviously improved. And once the model is trained, it can be reused many times, which simplifies the process of image processing.
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
Figure 1 is a flow chart of the steps of the ScanSAR image scallop effect processing method of the present invention.
Figure 2 shows the structure of the self-attention module.
Figure 3 shows the generator structure combined with the self-attention mechanism.
Figure 4 is a structure diagram of the discriminator.
FIG. 5 is a structural diagram of the improved CycleGAN network of the present invention.
Figure 6 is a schematic diagram of abnormal image cropping.
Figure 7 is a schematic diagram of normal image cropping.
Figure 8 is a schematic diagram of an abnormal image dataset.
Figure 9 is a schematic diagram of a normal image dataset.
Figure 10 shows the data flow diagram of CycleGAN training.
Figure 11 is the original test image.
Figure 12 is a graph of the test results.
In order to further illustrate the technical means and effects adopted by the present invention to achieve the predetermined purpose, the specific embodiments, structural features and effects of the present invention are described in detail below with reference to the accompanying drawings and examples.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In the description of the present invention, it should be understood that terms such as "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "aligned", "overlapping", "bottom", "inner" and "outer" indicate an orientation or positional relationship based on that shown in the drawings. They are used only for convenience in describing the present invention and simplifying the description; they do not indicate or imply that the referred device or element must have a particular orientation or be constructed and operated in a particular orientation, and are not to be construed as limiting the invention.
The terms "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined with "first" and "second" may expressly or implicitly include one or more of the features. In the description of the present invention, unless otherwise specified, the meaning of "multiple" is two or more.
Example 1
This embodiment provides a method for suppressing the scallop effect in ScanSAR images based on the self-attention mechanism and
CycleGAN as shown in FIG. 1 to FIG. 12, including the following steps:
S1: Crop the ScanSAR image to construct a dataset;
S2: Based on the CycleGAN model combined with the self-attention mechanism, a cycle-consistent adversarial generative network model with long-distance dependence is constructed;
S3: Input the prepared training data set into the neural network model constructed in step 2 for training;
S4: Input the ScanSAR image with the scallop effect into the network model trained in step 3, and then the streak phenomenon of the scallop effect can be eliminated.
Further, in S1 the ScanSAR image is cropped to construct a data set: 18 Gaofen-3 (GF-3) ScanSAR images are used, of which 8 are images with the scallop effect and 10 are normal ScanSAR images. They are all cropped into several sub-images of size 512×512; the sub-images are allowed to partially overlap, as shown in Figures 6 and 7. 400 abnormal sub-images with the scallop effect were screened out as X-class images, as shown in Figure 8; 480 normal images were screened out as Y-class images, as shown in Figure 9. Then the constructed data set is divided into a training set and a test set at a ratio of 9:1.
Further, the S2: based on the CycleGAN model combined with the self-attention mechanism, constructing a cycle-consistent adversarial generative network model with long-distance dependencies, includes the following steps:
S301: constructing the generator neural network; S302: constructing the discriminator neural network.
Further, as shown in FIG. 3, the generator neural network includes: an encoding part, a first self-attention module, an image conversion part, a second self-attention module, and a decoding part.
The encoding part is composed of three convolution modules; each convolution module includes, from front to back, a convolution layer, instance normalization and a ReLU activation function. The number of convolution kernels in the three convolution modules increases from 32 to 128, ensuring that the network can extract enough features.
As shown in Figure 2, the first self-attention module passes the advanced feature map from the first part through a hidden convolutional layer to obtain the initial convolution feature maps x, and then obtains two feature spaces f = W_f·x and g = W_g·x through 1×1 convolution kernels (W_f and W_g are weight matrices). f and g are then used to calculate the attention:

s_ij = f(x_i)^T g(x_j),   β_{j,i} = exp(s_ij) / Σ_{i=1..N} exp(s_ij)   (1)

where β_{j,i} represents the attention paid to the i-th location when generating the j-th location, and the result is normalized by softmax to obtain an attention map. The attention map is then used, together with a third feature space h = W_h·x obtained by another 1×1 convolution kernel, to calculate the output of the attention layer:

o_j = Σ_{i=1..N} β_{j,i} h(x_i)   (2)
The above three weight matrices W_f, W_g and W_h are learned through training. Finally, the output of the attention layer is combined with x to get the final output y_i = γ·o_i + x_i, where the learnable scalar γ is initially set to 0, so that the model can start from simple local features and gradually learn global ones. The self-attention feature maps output by the self-attention module combine the deep structural features of the image and are input to the subsequent network of the generator to complete the generation task.
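The attention computation above can be sketched in NumPy on a flattened feature map. The dimensions and random weights are illustrative assumptions; the 1×1 convolutions act as per-position matrix multiplications:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, Cq = 16, 8, 4                        # N positions, C channels, reduced dim
x = rng.standard_normal((C, N))            # flattened feature map x
W_f = rng.standard_normal((Cq, C)) * 0.1   # weight matrices of the 1x1 convs
W_g = rng.standard_normal((Cq, C)) * 0.1
W_h = rng.standard_normal((C, C)) * 0.1

f, g, h = W_f @ x, W_g @ x, W_h @ x        # feature spaces f, g, h
s = f.T @ g                                # s_ij = f(x_i)^T g(x_j)
beta = np.exp(s - s.max(axis=0, keepdims=True))
beta /= beta.sum(axis=0, keepdims=True)    # softmax over i -> attention map
o = h @ beta                               # o_j = sum_i beta_{j,i} h(x_i)
gamma = 0.0                                # learnable scalar, initialized to 0
y = gamma * o + x                          # y_i = gamma * o_i + x_i
```

With γ = 0 the module is initially an identity mapping, which is exactly why the model can "start from simple local features": attention only influences the output as γ grows during training.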
The image conversion part, composed of 7 residual modules, completes the conversion between the different categories of images through the extracted features. The residual modules do not change the size of the feature map, the input and output channels are both 128, and the activation function is LeakyReLU.
The second self-attention module is the same as the first self-attention module, and highlights the global features in the transformed feature map to improve the quality of image restora- tion.
Decoding part: there are 3 deconvolution layers and 1 convolution layer, and the input channel is 52. The number of filters in the first deconvolution layer is 26, with a step size of 2 and a padding of 1; the number of filters in the second deconvolution layer is 13, with a step size of 2 and a padding of 1; the number of filters in the convolution layer is 1, with a padding of 0 and a kernel size of 7×7. The activation function used by the first two deconvolution layers is LeakyReLU, and the activation function used by the last convolution layer is Tanh.
As shown in Figure 4, the constructed discriminator neural network uses the patch strategy of the discriminator in PatchGAN: the input image is cropped into several 70×70 sub-images, and the sub-images are input into the discriminator's convolutional neural network. The neural network model of the discriminator has 5 convolutional layers with an input channel of 1: the number of filters in the first convolutional layer is 13 and the step size is 2; the number of filters in the second convolutional layer is 26 and the step size is 2; the number of filters in the third convolutional layer is 52 and the step size is 2; the number of filters in the fourth convolutional layer is 104 and the step size is 1; the number of filters in the fifth convolutional layer is 1 and the step size is 1; the padding is 1 for all layers. The activation functions used are all LeakyReLU, and the discriminator finally outputs a prediction map with 1 channel.
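The spatial size of that prediction map can be traced layer by layer with the standard convolution output formula. Note the 4×4 kernel size below is an assumption borrowed from the usual PatchGAN discriminator; the patent states only the filter counts, strides, and padding:

```python
def conv_out(n, kernel, stride, pad):
    """Spatial size after one conv layer (floor division, as in most frameworks)."""
    return (n + 2 * pad - kernel) // stride + 1

# (filters, stride) per layer, from the text; kernel=4 and the 70x70 input
# patch size give the trace of feature-map sizes through the discriminator.
layers = [(13, 2), (26, 2), (52, 2), (104, 1), (1, 1)]
size, trace = 70, []
for _, stride in layers:
    size = conv_out(size, kernel=4, stride=stride, pad=1)
    trace.append(size)
```

Under these assumptions, a 70×70 patch shrinks through sizes 35, 17, 8, 7, 6, so the single-channel prediction map scores overlapping receptive fields rather than the whole patch at once.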
As shown in Figure 5, according to the CycleGAN theoretical model, a neural network model is constructed, as follows:
There are two generators, G_X and G_Y, in the CycleGAN model, forming a ring network, with a discriminator D_X and D_Y on each side. G_X and D_Y constitute one GAN, and G_Y and D_X constitute the other. The two GANs are symmetrical, and together these two symmetrical GANs form the ring network of CycleGAN.
In the model there are X-class and Y-class images. An image x generates a fake Y-class image through the generator G_X, that is, G_X(x), and the discriminator D_Y determines whether the image G_X(x) generated by the generator is true or false. In the same way, y generates a fake X-class image through the generator G_Y, namely G_Y(y), and the discriminator D_X determines whether the image G_Y(y) generated by the generator is true or false.
Therefore, the adversarial loss for G_X and its discriminator D_Y is:
L_GAN(G_X, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 − D_Y(G_X(x)))]   (3)
The adversarial loss for G_Y and its discriminator D_X is:
L_GAN(G_Y, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 − D_X(G_Y(y)))]   (4)
Here E_{x~p_data(x)}[f(x)] represents the expectation of f(x) when the random variable x follows the probability distribution p_data(x).
The adversarial losses of the generators and their discriminators are the same as in the original GAN; in addition, CycleGAN introduces cycle consistency, of which Figure 5 shows a schematic diagram. Cycle consistency means that, in the ideal case, after an X-class image is translated into a Y-class image by the generator G_X, that Y-class image should be translated back into the original X-class image by the generator G_Y, that is, the following formula is satisfied:
G_Y(G_X(x)) ≈ x,   G_X(G_Y(y)) ≈ y   (5)
Satisfying cycle consistency prevents all X-class images from being mapped to the same Y-class image, thus ensuring the robustness of the CycleGAN model. Therefore, the cycle-consistency loss of the CycleGAN model is:
L_cyc(G_X, G_Y) = E_{x~p_data(x)}[||G_Y(G_X(x)) − x||_1] + E_{y~p_data(y)}[||G_X(G_Y(y)) − y||_1]   (6)
The complete CycleGAN objective is as follows, where the λ coefficient controls the relative importance of the two kinds of objectives:
L(G_X, G_Y, D_X, D_Y) = L_GAN(G_X, D_Y, X, Y) + L_GAN(G_Y, D_X, Y, X) + λ·L_cyc(G_X, G_Y)   (7)
So the strategy of CycleGAN network learning is:
G_X*, G_Y* = arg min_{G_X, G_Y} max_{D_X, D_Y} L(G_X, G_Y, D_X, D_Y)   (8)
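The losses of eqs. (3)–(7) can be sketched numerically; this is a minimal NumPy illustration of the formulas, not the patent's implementation, with λ = 5 taken from the training settings described below:

```python
import numpy as np

def gan_loss(d_real, d_fake):
    """E[log D(real)] + E[log(1 - D(fake))], as in eqs. (3)/(4)."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def cycle_loss(x, cyc_x, y, cyc_y):
    """L1 cycle-consistency loss of eq. (6)."""
    return np.mean(np.abs(cyc_x - x)) + np.mean(np.abs(cyc_y - y))

def total_loss(adv_xy, adv_yx, cyc, lam=5.0):
    """Full objective of eq. (7); the patent sets lambda = 5."""
    return adv_xy + adv_yx + lam * cyc

x = np.ones((4, 4)); y = np.zeros((4, 4))
perfect_cycle = cycle_loss(x, x.copy(), y, y.copy())  # exact reconstruction
```

When the cycle reconstructions match the originals exactly, the cycle term vanishes and only the adversarial terms remain, which is the ideal state of eq. (5).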
After the network model of this method is built, the learning parameters are set as follows: the batch size is 4; the initial learning rate is 0.0002; the learning rate is kept at 0.0002 for the first 5000 iterations and decays linearly thereafter. The optimizer is the Adam algorithm, and λ is 5. During training, an appropriate number of images is selected to build each sample batch: a small batch of samples is used instead of a single image, so that each batch contains rich sample features while keeping a suitable distance between different samples in feature space, avoiding model collapse. The two time-scale update rule (TTUR) is used to balance the learning rates of the discriminative and generative networks so that both the generator and the discriminator converge stably, alleviating the slow learning problem of the regularized discriminator.
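The learning-rate schedule described above (constant for 5000 iterations, then linear decay) can be sketched as follows. The total iteration count and the larger TTUR discriminator rate are illustrative assumptions; the patent specifies only the 0.0002 base rate and the 5000-iteration hold:

```python
def lr_at(iteration, base=2e-4, hold=5000, total=15000):
    """Constant for the first `hold` iterations, then linear decay to 0 at `total`
    (total=15000 is an assumed endpoint for illustration)."""
    if iteration < hold:
        return base
    return base * (total - iteration) / (total - hold)

# TTUR: under the two time-scale update rule the discriminator typically
# uses a larger base rate than the generator (4e-4 here is an assumption).
g_lr = lr_at(0)
d_lr = lr_at(0, base=4e-4)
```

At iteration 10000 the generator rate has halved to 1e-4, reaching 0 at the assumed endpoint, while the discriminator follows the same shape from its own base rate.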
The data flow in the model during training is shown in Figure 10.
(1) The abnormal image (X class) real A is input into the generator G_X; after passing through the generator network, the generated image fake B is obtained.
(2) fake B is input into the discriminator D_Y, which determines its category: if it is judged to belong to the Y class, it outputs 1, otherwise 0.
(3) fake B is input into the generator G_Y; after passing through the generator network, the reconstructed image cyc A is obtained.
(4) The normal image (Y class) real B is input into the generator G_Y; after passing through the generator network, the generated image fake A is obtained.
(5) fake A is input into the discriminator D_X, which determines its category: if it is judged to belong to the X class, it outputs 1, otherwise 0.
(6) fake A is input into the generator G_X; after passing through the generator network, the reconstructed image cyc B is obtained.
(7) The generative adversarial loss and cycle-consistency loss of the model are calculated, the network parameters are updated according to the obtained losses, and finally the differences between fake B and real B, fake A and real A, cyc A and real A, and cyc B and real B are minimized.
In order to verify the effect of this method on the scallop effect of ScanSAR images, the network was trained for nearly 15,000 epochs until it converged to good weights, and the abnormal images of the test set were input into the network, as shown in Figure 11; the network output is shown in Figure 12. The results clearly show that the network can effectively suppress the fringe phenomenon caused by the scallop effect in the abnormal images: the overall gray level of the image becomes basically consistent, the brightness is moderate, and the image quality is significantly improved, which has great practical significance.
In summary, the scallop effect suppression method of ScanSAR images based on the self-attention mechanism and CycleGAN is used to process the scallop effect of ScanSAR images. On the basis of
CycleGAN, combined with the self-attention mechanism, a novel cycle-consistent adversarial generative network model with long-range dependencies is formed. The network model can learn the characteristics of images and process images with the scalloping effect using only image data. Compared with traditional scallop effect processing methods, the present invention can more effectively eliminate the image scallop effect streak phenomenon, so that the image quality is obviously improved. And once the model is trained, it can be reused many times, which simplifies the process of image processing.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention cannot be considered limited to these descriptions.
For those of ordinary skill in the technical field of the present invention, without departing from the concept of the present invention, some simple deductions or substitutions can be made, which should be regarded as belonging to the protection scope of the present invention.
Claims (6)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110999708.2A CN113822895B (en) | 2021-08-29 | 2021-08-29 | ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN |
Publications (2)
Publication Number | Publication Date |
---|---|
NL2032891A NL2032891A (en) | 2022-09-26 |
NL2032891B1 true NL2032891B1 (en) | 2023-10-09 |
Family
ID=78923435
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
NL2032891A NL2032891B1 (en) | 2021-08-29 | 2022-08-29 | ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113822895B (en) |
NL (1) | NL2032891B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115546848B (en) * | 2022-10-26 | 2024-02-02 | 南京航空航天大学 | Challenge generation network training method, cross-equipment palmprint recognition method and system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109544442B (en) * | 2018-11-12 | 2023-05-23 | 南京邮电大学 | Image local style migration method of double-countermeasure-based generation type countermeasure network |
CN109978165A (en) * | 2019-04-04 | 2019-07-05 | 重庆大学 | A kind of generation confrontation network method merged from attention mechanism |
CN110533665B (en) * | 2019-09-03 | 2022-04-05 | 北京航空航天大学 | SAR image processing method for inhibiting scallop effect and sub-band splicing effect |
US11625812B2 (en) * | 2019-11-01 | 2023-04-11 | Microsoft Technology Licensing, Llc | Recovering occluded image data using machine learning |
CN111429340A (en) * | 2020-03-25 | 2020-07-17 | 山东大学 | Cyclic image translation method based on self-attention mechanism |
CN112232156B (en) * | 2020-09-30 | 2022-08-16 | 河海大学 | Remote sensing scene classification method based on multi-head attention generation countermeasure network |
CN112561838B (en) * | 2020-12-02 | 2024-01-30 | 西安电子科技大学 | Image enhancement method based on residual self-attention and generation of countermeasure network |
- 2021-08-29 CN CN202110999708.2A patent/CN113822895B/en active Active
- 2022-08-29 NL NL2032891A patent/NL2032891B1/en active
Also Published As
Publication number | Publication date |
---|---|
CN113822895B (en) | 2024-08-02 |
NL2032891A (en) | 2022-09-26 |
CN113822895A (en) | 2021-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Canchumuni et al. | Recent developments combining ensemble smoother and deep generative networks for facies history matching | |
CN113156439B (en) | SAR wind field and sea wave joint inversion method and system based on data driving | |
CN112446419A (en) | Time-space neural network radar echo extrapolation forecasting method based on attention mechanism | |
CN109389058A (en) | Sea clutter and noise signal classification method and system | |
CN108447041B (en) | Multi-source image fusion method based on reinforcement learning | |
CN108734171A (en) | A kind of SAR remote sensing image ocean floating raft recognition methods of depth collaboration sparse coding network | |
NL2032891B1 (en) | ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN | |
CN100370486C (en) | Typhoon center positioning method based on embedded type concealed Markov model and cross entropy | |
CN111144234A (en) | Video SAR target detection method based on deep learning | |
CN113989100B (en) | Infrared texture sample expansion method based on style generation countermeasure network | |
CN111784581B (en) | SAR image super-resolution reconstruction method based on self-normalization generation countermeasure network | |
CN109060838A (en) | A kind of product surface scratch detection method based on machine vision | |
CN114611608A (en) | Sea surface height numerical value prediction deviation correction method based on deep learning model | |
CN112883908A (en) | Space-frequency characteristic consistency-based SAR image-to-optical image mapping method | |
CN117422619A (en) | Training method of image reconstruction model, image reconstruction method, device and equipment | |
CN116071664A (en) | SAR image ship detection method based on improved CenterNet network | |
CN112446256A (en) | Vegetation type identification method based on deep ISA data fusion | |
CN112989940B (en) | Raft culture area extraction method based on high-resolution third satellite SAR image | |
CN112529828A (en) | Reference data non-sensitive remote sensing image space-time fusion model construction method | |
CN117114984A (en) | Remote sensing image super-resolution reconstruction method based on generation countermeasure network | |
CN117115669A (en) | Object-level ground object sample self-adaptive generation method and system with double-condition quality constraint | |
CN114764880B (en) | Multi-component GAN reconstructed remote sensing image scene classification method | |
CN115540832A (en) | Satellite altimetry submarine topography correction method and system based on VGGNet | |
CN114859317A (en) | Radar target self-adaptive reverse truncation intelligent identification method | |
CN112767292A (en) | Geographical weighting spatial mixed decomposition method for space-time fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PD | Change of ownership |
Owner name: SHAANXI TOWER STAR-X AEROSPACE TECHNOLOGY CO., LTD.; CN Free format text: DETAILS ASSIGNMENT: CHANGE OF OWNER(S), ASSIGNMENT; FORMER OWNER NAME: SHAANXI NORMAL UNIVERSITY Effective date: 20240904 |