NL2032891B1 - ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN


Info

Publication number
NL2032891B1
Authority
NL
Netherlands
Prior art keywords
self
image
cyclegan
scansar
attention mechanism
Application number
NL2032891A
Other languages
Dutch (nl)
Other versions
NL2032891A (en
Inventor
Han Xinyu
Fang Ziyue
Ma Yuchen
Guo Rong
Dong Liren
Wang Biyu
Zheng Lingfeng
Sun Zengguo
Original Assignee
Univ Shaanxi Normal
Application filed by Univ Shaanxi Normal filed Critical Univ Shaanxi Normal
Publication of NL2032891A publication Critical patent/NL2032891A/en
Application granted granted Critical
Publication of NL2032891B1 publication Critical patent/NL2032891B1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/904SAR modes
    • G01S13/9056Scan SAR mode
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A ScanSAR image scallop effect suppression method based on a self-attention mechanism and a CycleGAN. The method comprises the following steps: S1, constructing a ScanSAR image data set; S2, constructing an adversarial generative network model; S3, inputting the data set into the constructed neural network model for training; and S4, inputting the ScanSAR image with the scallop effect into the network model trained in step S3. According to the ScanSAR image scallop effect suppression method based on the self-attention mechanism and the CycleGAN, the scallop effect of the ScanSAR image is processed. On the basis of the CycleGAN, a novel cycle-consistent adversarial generative network model with long-distance dependence is formed by combining a self-attention mechanism. The method can more effectively eliminate the scallop effect fringe phenomenon of the image, so that the image quality is obviously improved.

Description

ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a method for suppressing the scallop effect of ScanSAR images based on a self-attention mechanism and CycleGAN.
Background technique
Synthetic Aperture Radar (SAR) is an active space microwave remote sensing technology. The Scanning Synthetic Aperture Radar (ScanSAR) mode is one of the important working modes of SAR: it scans multiple sub-swaths to obtain a larger mapping swath. Due to the scanning mechanism of ScanSAR, its system function is time-varying, and the intensity of the received echo signal changes periodically with the azimuth and range positions, resulting in serious inhomogeneity of ScanSAR images. Scalloping is one of the main causes of this inhomogeneity and is an inherent phenomenon of the ScanSAR mode. After the imaging process, since the accumulated energy is higher in the middle part and lower at the edge, the phenomenon appears as bright and dark stripes. The existence of the scallop effect greatly reduces the quality of the image and increases the difficulty of image interpretation. Because this problem is caused by system characteristics, it is very difficult to overcome in hardware, so it is necessary to find a solution in the software algorithm.
The existing methods for removing the scalloping effect can be divided into two categories. The first category is applied to the processing of SAR raw signals, that is, it is usually incorporated into the level-0 SAR processing chain. For example, Bamler removes the scalloping effect by constructing a weighting function related to the azimuth position; this function accurately compensates the periodic fluctuation of the ScanSAR echo intensity in the azimuth direction, correcting the scalloping effect while keeping the local signal-to-noise ratio constant in azimuth. Shimada uses the JERS-1 imaging satellite to perform radiometric and geometric calibration, accurately calculates the satellite's directional pattern and energy variation law, and performs more accurate energy fluctuation corrections on this basis.
The second category consists of methods based entirely on image post-processing. For example, Romeiser proposed a Fourier-domain adaptive filter, a computationally and logically complex process applied to C-band ScanSAR images for wind field estimation, in which the Doppler center is found through multiple iterations and the energy fluctuations are gradually corrected. M. Iqbal proposed a scallop-effect removal method based on the Kalman filter, which achieved good de-striping results in both simulated and real data experiments. Sorrentino extended the idea to the frequency domain and separated the ScanSAR fringe information through the wavelet transform.
The biggest problem of the first category of methods is that it requires prior data of the radar sensor as support and cannot correct the problem image alone; when the raw SAR signal or the sensor's prior data cannot be obtained, this category of method is unavailable. The second category is based on post-processing of the image; the series of operations on the image in the spatial or frequency domain requires a great deal of logical derivation and calculation, so the stability of scallop-effect removal across images from different platforms and different imaging modes is not high.
Summary of the invention
The invention provides a method for suppressing the scallop effect of ScanSAR images based on a self-attention mechanism and CycleGAN, comprising the following steps:
S1: Crop the ScanSAR image to construct a dataset;
S2: Based on the CycleGAN model combined with the self- attention mechanism, a cycle-consistent adversarial generative network model with long-distance dependence is constructed;
S3: Input the prepared training data set into the neural network model constructed in step S2 for training;
S4: Input the ScanSAR image with the scallop effect into the network model trained in step 3, and then the streak phenomenon of the scallop effect can be eliminated.
Further, in S1, the ScanSAR image is cropped to construct a data set. The construction of the data set consists of cropping the image into sub-images with a size of 512*512 and classifying the sub-images according to whether they exhibit the scallop effect: the sub-images with the scallop effect form the x-class image data set, and the normal sub-images form the y-class image data set. The constructed data set is divided into a training set and a test set according to the ratio of 9:1.
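A minimal sketch of this dataset-construction step is given below, assuming grayscale ScanSAR images readable with Pillow; the directory names, file extension, and overlap stride are placeholders for illustration, not values fixed by the invention:

```python
import random
from pathlib import Path

import numpy as np
from PIL import Image

def crop_to_subimages(image_path, size=512, stride=512):
    """Crop a large ScanSAR image into size*size sub-images.

    A stride smaller than `size` lets neighbouring sub-images
    overlap, as the embodiment allows.
    """
    img = np.asarray(Image.open(image_path).convert("L"))
    h, w = img.shape
    subs = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            subs.append(img[top:top + size, left:left + size])
    return subs

def split_train_test(samples, ratio=0.9, seed=0):
    """Divide one class of sub-images into training and test sets (9:1)."""
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * ratio)
    return samples[:cut], samples[cut:]

# Hypothetical layout: 'scalloped/' holds x-class source images,
# 'normal/' holds y-class source images.
x_subs = [s for p in Path("scalloped").glob("*.tif") for s in crop_to_subimages(p)]
y_subs = [s for p in Path("normal").glob("*.tif") for s in crop_to_subimages(p)]
x_train, x_test = split_train_test(x_subs)
y_train, y_test = split_train_test(y_subs)
```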
Further, in S2, based on the CycleGAN model combined with the self-attention mechanism, a cycle-consistent adversarial generative network model with long-distance dependencies is constructed, including the following steps:
S301: constructing the generator neural network;
S302: constructing the discriminator neural network.
Further, the generator neural network includes: an encoding part, a first self-attention module, an image conversion part, a second self-attention module, and a decoding part.
Further, the neural network model of the discriminator has 5 convolutional layers, and the input channel is 1: the number of filters in the first convolutional layer is 13, and the step size is 2; the number of filters in the second convolutional layer is 26, and the step size is 2; the number of filters in the third convolutional layer is 52, and the step size is 2; the number of filters in the fourth convolutional layer is 104, and the step size is 1; the number of filters in the fifth convolutional layer is 1, and the step size is 1; the padding strategy is 1 throughout.
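The stated layer configuration can be expressed as the following PyTorch sketch; the 4x4 kernel size and the LeakyReLU slope of 0.2 are assumptions (common PatchGAN defaults), since the description fixes only the filter counts, step sizes, and padding:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Five convolutional layers: filters 13/26/52/104/1,
    strides 2/2/2/1/1, padding 1 everywhere, LeakyReLU activations."""

    def __init__(self, in_channels=1):
        super().__init__()
        cfg = [(13, 2), (26, 2), (52, 2), (104, 1), (1, 1)]
        layers, prev = [], in_channels
        for filters, stride in cfg:
            layers.append(nn.Conv2d(prev, filters, kernel_size=4,
                                    stride=stride, padding=1))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            prev = filters
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Output is a 1-channel prediction map (PatchGAN style).
        return self.net(x)
```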
Further, the CycleGAN network learning strategy of the neural network model is:
G_X^*, G_Y^* = \arg\min_{G_X, G_Y} \max_{D_X, D_Y} L(G_X, G_Y, D_X, D_Y) \qquad (8)
The advantages of the present invention are as follows: the present invention provides the method for suppressing the scallop effect of the ScanSAR image based on the self-attention mechanism and CycleGAN, and processes the scallop effect of the ScanSAR image. On the basis of CycleGAN, combined with the self-attention mechanism, a novel cycle-consistent adversarial generative network model with long-range dependencies is formed. The network model can learn the characteristics of images and process images with the scalloping effect by using image data alone. Compared with traditional scallop effect processing methods, the present invention can more effectively eliminate the image scallop effect streak phenomenon, so that the image quality is obviously improved. Moreover, once the model is trained, it can be reused many times, which simplifies the image processing workflow.
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
Brief description of the drawings
Figure 1 is a flow chart of the steps of the ScanSAR image scallop effect processing method of the present invention.
Figure 2 shows the structure of the self-attention module.
Figure 3 shows the generator structure combined with the self-attention mechanism.
Figure 4 is a structure diagram of the discriminator.
FIG. 5 is a structural diagram of the improved CycleGAN network of the present invention.
Figure 6 is a schematic diagram of abnormal image cropping.
Figure 7 is a schematic diagram of normal image cropping.
Figure 8 is a schematic diagram of an abnormal image dataset.
Figure 9 is a schematic diagram of a normal image dataset.
Figure 10 shows the data flow diagram of CycleGAN training.
Figure 11 is the original test image.
Figure 12 is a graph of the test results.
Detailed description of the embodiments
In order to further illustrate the technical means and effects adopted by the present invention to achieve the predetermined purpose, the specific embodiments, structural features and effects of the present invention are described in detail below with reference to the accompanying drawings and examples.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.
In the description of the present invention, it should be understood that the orientation or positional relationship indicated by the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "aligned", "overlapping", "bottom", "inner", "outer", etc. is based on the orientation or positional relationship shown in the drawings, is only for the convenience of describing the present invention and simplifying the description, rather than indicating or implying that the referred device or element must have a particular orientation or be constructed and operated in a particular orientation, and is not to be construed as a limitation of the invention.
The terms "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of tech- nical features indicated. Thus, the features defined with "first" and "second" may expressly or implicitly include one or more of the features; in the description of the present invention, unless otherwise specified, the meaning of "multiple" is two or more.
Example 1
This embodiment provides a method for suppressing the scallop effect in ScanSAR images based on the self-attention mechanism and CycleGAN, as shown in FIG. 1 to FIG. 12, including the following steps:
S1: Crop the ScanSAR image to construct a dataset;
S2: Based on the CycleGAN model combined with the self-attention mechanism, a cycle-consistent adversarial generative network model with long-distance dependence is constructed;
S3: Input the prepared training data set into the neural network model constructed in step S2 for training;
S4: Input the ScanSAR image with the scallop effect into the network model trained in step 3, and then the streak phenomenon of the scallop effect can be eliminated.
Further, in S1, the ScanSAR image is cropped to construct a data set. Eighteen Gaofen-3 (GF-3) ScanSAR images are used, of which 8 are images with the scallop effect and 10 are normal ScanSAR images. All of them are cropped into several sub-images with a size of 512*512, and partial overlap between sub-images is allowed, as shown in Figures 6 and 7. 400 abnormal sub-images with the scallop effect were screened out as X-class images, as shown in Figure 8; 480 normal sub-images were screened out as Y-class images, as shown in Figure 9. The constructed data set is then divided into a training set and a test set according to the ratio of 9:1.
Further, in S2, based on the CycleGAN model combined with the self-attention mechanism, a cycle-consistent adversarial generative network model with long-distance dependencies is constructed, including the following steps:
S301: constructing the generator neural network; S302: constructing the discriminator neural network.
Further, as shown in FIG. 3, the generator neural network includes: an encoding part, a first self-attention module, an image conversion part, a second self-attention module, and a decoding part.
The coding part is composed of three convolution modules; from front to back, each convolution module includes a convolutional network layer, instance normalization, and a ReLU activation function. The number of convolution kernels in the convolutional layers of the three modules increases from 32 to 128, ensuring that the network can extract enough features.
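A sketch of this coding part under stated and assumed parameters: the 32-to-128 filter progression, instance normalization, and ReLU come from the description, while the kernel sizes and the two stride-2 downsampling steps are assumptions borrowed from the standard CycleGAN encoder:

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel, stride, pad):
    """One encoder module: convolution -> InstanceNorm -> ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride, pad),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

encoder = nn.Sequential(
    conv_block(1, 32, kernel=7, stride=1, pad=3),    # 32 filters
    conv_block(32, 64, kernel=3, stride=2, pad=1),   # downsample, 64 filters
    conv_block(64, 128, kernel=3, stride=2, pad=1),  # downsample, 128 filters
)
```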
As shown in Figure 2, the first self-attention module passes the advanced feature map of the first part through the hidden layer of the convolutional network to obtain the initial feature (convolution feature maps) x, and then obtains two feature spaces through 1x1 convolution kernels, f = W_f x and g = W_g x (W_f and W_g are weight matrices). The features f and g are then used to calculate the attention, with the following formula:

\beta_{j,i} = \frac{\exp(s_{ij})}{\sum_{i=1}^{N} \exp(s_{ij})}, \quad s_{ij} = f(x_i)^{T} g(x_j) \qquad (1)

where β_{j,i} represents the attention paid to the i-th location when generating the j-th position, and the result is normalized by softmax to obtain an attention map. The obtained attention map is then used, together with the feature space obtained by another 1x1 convolution kernel, to calculate the output of the attention layer as follows:

o_j = \sum_{i=1}^{N} \beta_{j,i} \, h(x_i), \quad h(x_i) = W_h x_i \qquad (2)

The above three weight matrices W_f, W_g and W_h are learned through training. Finally, the output of the attention layer is combined with x to get the final output y_i = γ o_i + x_i, where γ is initialized to 0, so that the model can first learn simple local features and gradually learn global ones. The self-attention feature maps output by the self-attention module combine the deep structural features of the image and are input to the subsequent network of the generator to complete the generation task.
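A sketch of this self-attention module implementing equations (1) and (2) in PyTorch: f, g and h are 1x1 convolutions, the softmax over s_ij produces the attention map, and the learnable γ is initialized to 0; the channel reduction factor of 8 in f and g is an assumption taken from the common SAGAN implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # f, g, h: 1x1 convolutions producing the three feature spaces.
        self.f = nn.Conv2d(channels, channels // 8, 1)
        self.g = nn.Conv2d(channels, channels // 8, 1)
        self.h = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts at 0

    def forward(self, x):
        b, c, hgt, wid = x.shape
        n = hgt * wid
        f = self.f(x).view(b, -1, n)                 # B x C' x N
        g = self.g(x).view(b, -1, n)                 # B x C' x N
        h = self.h(x).view(b, c, n)                  # B x C  x N
        s = torch.bmm(f.transpose(1, 2), g)          # s_ij = f(x_i)^T g(x_j)
        beta = F.softmax(s, dim=1)                   # attention map, eq. (1)
        o = torch.bmm(h, beta).view(b, c, hgt, wid)  # o_j, eq. (2)
        return self.gamma * o + x                    # y_i = gamma*o_i + x_i
```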
The image conversion part is composed of 7 residual modules and completes the conversion between different categories of images using the extracted features. The residual module does not change the size of the feature map; the input and output channels are both 128, and the activation function is LeakyReLU.
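One of the 7 residual modules might look as follows; the stated constraints are the 128 input and output channels, the size-preserving behavior, and the LeakyReLU activation, while the 3x3 kernels and instance normalization are assumptions:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Size-preserving residual module, 128 -> 128 channels."""

    def __init__(self, channels=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

# The image conversion part chains 7 such modules.
transformer = nn.Sequential(*[ResidualBlock(128) for _ in range(7)])
```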
The second self-attention module is the same as the first self-attention module, and highlights the global features in the transformed feature map to improve the quality of image restoration.
Decoding part: there are 2 deconvolution layers and 1 convolutional layer, and the input channel is 52. The number of filters in the first deconvolution layer is 26, the step size is 2, and the padding strategy is 1; the number of filters in the second deconvolution layer is 13, the step size is 2, and the padding strategy is 1; the number of filters in the convolutional layer is 1, the padding strategy is 0, and the size of its kernel is 7*7. The activation function of the two deconvolution layers is LeakyReLU, and the activation function of the last convolutional layer is Tanh.
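A sketch of this decoding part with the filter counts, step sizes, and padding as stated; the transposed-convolution kernel sizes and output padding are assumptions, and the 52-channel input is taken from the description as written, even though the conversion part above outputs 128 channels:

```python
import torch.nn as nn

decoder = nn.Sequential(
    # First deconvolution: 26 filters, stride 2, padding 1.
    nn.ConvTranspose2d(52, 26, kernel_size=3, stride=2,
                       padding=1, output_padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    # Second deconvolution: 13 filters, stride 2, padding 1.
    nn.ConvTranspose2d(26, 13, kernel_size=3, stride=2,
                       padding=1, output_padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    # Final 7*7 convolution to 1 channel, padding 0, Tanh output.
    nn.Conv2d(13, 1, kernel_size=7, padding=0),
    nn.Tanh(),
)
```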
As shown in Figure 4, the constructed discriminator neural network is as follows: using the patch strategy of the discriminator in Patch-GAN, the input image is cropped into several 70*70 sub-images, and the sub-images are input into the discriminator's convolutional neural network. The neural network model of the discriminator has 5 convolutional layers, and the input channel is 1: the number of filters in the first convolutional layer is 13, and the step size is 2; the number of filters in the second convolutional layer is 26, and the step size is 2; the number of filters in the third convolutional layer is 52, and the step size is 2; the number of filters in the fourth convolutional layer is 104, and the step size is 1; the number of filters in the fifth convolutional layer is 1, and the step size is 1; the padding strategy is 1 throughout. The activation functions used are all LeakyReLU, and the discriminator finally outputs a prediction map with 1 channel.
As shown in Figure 5, according to the CycleGAN theoretical model, a neural network model is constructed, as follows:
There are two generators, G_X and G_Y, in the CycleGAN model, forming a ring network, with a discriminator D_X and D_Y on each side. G_X and D_Y, and G_Y and D_X, respectively constitute a GAN. The two GANs are symmetrical, and together they form the ring network that constitutes CycleGAN.
In the model, there are x and y images. An x image generates a fake Y-class image through the generator G_X, that is, G_X(x); the discriminator D_Y then determines whether the image G_X(x) generated by the generator is true or false. In the same way, a y image generates a fake X-class image through the generator G_Y, namely G_Y(y), and the discriminator D_X determines whether the image G_Y(y) generated by the generator is true or false.
Therefore, the adversarial loss for G_X and its discriminator D_Y is:
L_{GAN}(G_X, D_Y, X, Y) = E_{y \sim p_{data}(y)}[\log D_Y(y)] + E_{x \sim p_{data}(x)}[\log(1 - D_Y(G_X(x)))] \qquad (3)
The adversarial loss for G_Y and its discriminator D_X is:
L_{GAN}(G_Y, D_X, Y, X) = E_{x \sim p_{data}(x)}[\log D_X(x)] + E_{y \sim p_{data}(y)}[\log(1 - D_X(G_Y(y)))] \qquad (4)
Here E_{x \sim p_{data}(x)}[f(x)] denotes the expectation of f(x) when the random variable x follows the probability distribution p_{data}(x).
The adversarial losses of G_X and G_Y are the same as those of the original GAN. CycleGAN additionally introduces cycle consistency, and Figure 5 shows its schematic diagram. Cycle consistency means that, in the ideal state, after an X-class image is translated into a Y-class image through the generator G_X, that Y-class image should be translated back into the original X-class image through the generator G_Y, that is, the following formula is satisfied:

G_Y(G_X(x)) \approx x \qquad (5)
Satisfying cycle consistency prevents all X-class images from being mapped to the same Y-class image, thus ensuring the robustness of the CycleGAN model. Therefore, the cycle consistency loss of the CycleGAN model is:
L_{cyc}(G_X, G_Y) = E_{x \sim p_{data}(x)}[\lVert G_Y(G_X(x)) - x \rVert_1] + E_{y \sim p_{data}(y)}[\lVert G_X(G_Y(y)) - y \rVert_1] \qquad (6)
The complete CycleGAN objective is as follows, where the λ coefficient controls the relative importance of the two objectives:
L(G_X, G_Y, D_X, D_Y) = L_{GAN}(G_X, D_Y, X, Y) + L_{GAN}(G_Y, D_X, Y, X) + \lambda L_{cyc}(G_X, G_Y) \qquad (7)
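The losses (3), (4), (6) and the combined objective (7) can be written out as follows; binary cross-entropy implements the log-likelihood terms as stated (note that some CycleGAN implementations substitute a least-squares adversarial loss), and the generator and discriminator names refer to the network sketches above:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # log-terms of eqs. (3) and (4)
l1 = nn.L1Loss()              # the 1-norms of eq. (6)

def gan_loss_d(d, real, fake):
    """Discriminator side of eq. (3)/(4): push D(real) -> 1, D(fake) -> 0."""
    pred_real = d(real)
    pred_fake = d(fake.detach())
    return (bce(pred_real, torch.ones_like(pred_real)) +
            bce(pred_fake, torch.zeros_like(pred_fake)))

def gan_loss_g(d, fake):
    """Generator side: fool the discriminator into labelling fake as real."""
    pred = d(fake)
    return bce(pred, torch.ones_like(pred))

def cycle_loss(g_x, g_y, x, y):
    """Eq. (6): ||G_Y(G_X(x)) - x||_1 + ||G_X(G_Y(y)) - y||_1."""
    return l1(g_y(g_x(x)), x) + l1(g_x(g_y(y)), y)

def full_objective(g_x, g_y, d_x, d_y, x, y, lam=5.0):
    """Eq. (7) as seen by the generators (lam = 5 per the embodiment)."""
    return (gan_loss_g(d_y, g_x(x)) + gan_loss_g(d_x, g_y(y)) +
            lam * cycle_loss(g_x, g_y, x, y))
```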
So the strategy of CycleGAN network learning is:
G_X^*, G_Y^* = \arg\min_{G_X, G_Y} \max_{D_X, D_Y} L(G_X, G_Y, D_X, D_Y) \qquad (8)
After the network model of this method is built, the learning parameters are set as follows: the batch size is 4, the initial learning rate is 0.0002, the learning rate for the first 5000 iterations is kept at 0.0002, and the learning rate decays linearly thereafter. The optimizer is the Adam algorithm, and λ is 5. In the training process, an appropriate number of images is selected to build each sample batch for training: building a small batch of samples rather than using a single image as a sample ensures that each batch contains rich sample features while keeping a suitable distance between different samples in feature space, which avoids model collapse. The two time-scale update rule (TTUR) is used to balance the learning rates of the discriminative and generative networks so that both the generator and the discriminator converge stably, solving the slow learning problem of the regularized discriminator.
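A sketch of this training configuration, assuming the networks above are instantiated as g_x, g_y, d_x and d_y: Adam with the stated initial rate of 0.0002, held for the first 5000 iterations and decayed linearly afterwards, with TTUR giving the discriminators their own rate. The discriminator rate of 0.0004, the Adam betas, and the total iteration count (mirroring the roughly 15,000 training epochs reported below) are assumptions; the text fixes only the generator schedule:

```python
import itertools
import torch

# g_x, g_y, d_x, d_y are the networks sketched above.
opt_g = torch.optim.Adam(itertools.chain(g_x.parameters(), g_y.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))
# TTUR: the discriminators get their own (here faster) learning rate.
opt_d = torch.optim.Adam(itertools.chain(d_x.parameters(), d_y.parameters()),
                         lr=4e-4, betas=(0.5, 0.999))

def lr_lambda(it, hold=5000, total=15000):
    """Hold the rate constant for the first 5000 iterations,
    then decay it linearly to zero."""
    if it < hold:
        return 1.0
    return max(0.0, 1.0 - (it - hold) / float(total - hold))

sched_g = torch.optim.lr_scheduler.LambdaLR(opt_g, lr_lambda)
sched_d = torch.optim.lr_scheduler.LambdaLR(opt_d, lr_lambda)
```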
The data flow in the model during training is shown in Figure 10.
(1) The abnormal image (X class) real A is input into the generator G_X; after passing through the generator network, the generated image fake B is obtained.
(2) fake B is input into the discriminator D_Y, which determines the category of fake B: if it is judged to belong to the Y-class images, it outputs 1, otherwise 0.
(3) fake B is input into the generator G_Y; after passing through the generator network, the generated image cyc A is obtained.
(4) The normal image (Y class) real B is input into the generator G_Y; after passing through the generator network, the generated image fake A is obtained.
(5) fake A is input into the discriminator D_X, which determines the category of fake A: if it is judged to belong to the X-class images, it outputs 1, otherwise 0.
(6) fake A is input into the generator G_X; after passing through the generator network, the generated image cyc B is obtained.
(7) The function values of the model's generative adversarial loss and cycle consistency loss are calculated, and the network parameters are updated according to the obtained losses, finally minimizing the difference between fake B and real B, fake A and real A, cyc A and real A, and cyc B and real B.
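Pulling the pieces together, one training iteration following data-flow steps (1) through (7) might look like this; loader_x and loader_y are hypothetical data loaders yielding batches of abnormal (X-class) and normal (Y-class) sub-images, and the loss helpers, optimizers, and schedulers are those sketched above:

```python
for real_a, real_b in zip(loader_x, loader_y):   # X-class and Y-class batches
    # Steps (1)-(6): forward passes of the cycle.
    fake_b = g_x(real_a)        # abnormal -> generated normal
    cyc_a = g_y(fake_b)         # back to abnormal
    fake_a = g_y(real_b)        # normal -> generated abnormal
    cyc_b = g_x(fake_a)         # back to normal

    # Step (7): generator update (adversarial + cycle-consistency, lambda = 5).
    loss_g = (gan_loss_g(d_y, fake_b) + gan_loss_g(d_x, fake_a) +
              5.0 * (l1(cyc_a, real_a) + l1(cyc_b, real_b)))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Steps (2) and (5): discriminator updates on real vs. generated batches.
    loss_d = gan_loss_d(d_y, real_b, fake_b) + gan_loss_d(d_x, real_a, fake_a)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    sched_g.step(); sched_d.step()
```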
In order to verify the processing effect of this method on the scallop effect of ScanSAR images, after nearly 15,000 epochs of training the network converges and obtains good weights. The abnormal images of the test set, shown in Figure 11, are input into the network, and the resulting output images, shown in Figure 12, clearly demonstrate that the network can effectively suppress the fringe phenomenon caused by the scallop effect of the abnormal image: the overall gray level of the image becomes basically consistent, the brightness is moderate, and the image quality is significantly improved, which is of great practical significance.
In summary, the scallop effect suppression method for ScanSAR images based on the self-attention mechanism and CycleGAN is used to process the scallop effect of ScanSAR images. On the basis of CycleGAN, combined with the self-attention mechanism, a novel cycle-consistent adversarial generative network model with long-range dependencies is formed. The network model can learn the characteristics of images and process images with the scalloping effect by using image data alone. Compared with traditional scallop effect processing methods, the present invention can more effectively eliminate the image scallop effect streak phenomenon, so that the image quality is obviously improved. Moreover, once the model is trained, it can be reused many times, which simplifies the image processing workflow.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and it cannot be considered that the specific implementation of the present invention is limited to these descriptions. For those of ordinary skill in the technical field of the present invention, without departing from the concept of the present invention, some simple deductions or substitutions can be made, which should be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A method for suppressing scalloping for a ScanSAR image based on a self-attention mechanism and CycleGAN, comprising the following steps: S1: cropping the ScanSAR image to construct a data set; S2: constructing a cycle-consistent generative adversarial network model with long-distance dependence by combining the self-attention mechanism on the basis of a CycleGAN model; S3: inputting a prepared training data set into the neural network model built in S2 for training; and S4: inputting the ScanSAR image with scalloping into the network model trained in S3, to eliminate the stripe phenomenon of scalloping.

2. The method for suppressing scalloping for a ScanSAR image based on a self-attention mechanism and CycleGAN according to claim 1, wherein in the step of cropping the ScanSAR image to construct the data set in S1, constructing the data set comprises: cropping the image into 512*512 sub-images and classifying the sub-images according to whether they exhibit scalloping, wherein the sub-images with scalloping belong to an x-type image data set and normal sub-images belong to a y-type image data set; and dividing the constructed data set into a training set and a test set at a ratio of 9:1.

3. The method for suppressing scalloping for a ScanSAR image based on a self-attention mechanism and CycleGAN according to claim 1, wherein constructing the cycle-consistent generative adversarial network model with long-distance dependence by combining the self-attention mechanism on the basis of a CycleGAN model in S2 comprises the following steps: S301: building a generator neural network; and S302: building a discriminator neural network.

4. The method for suppressing scalloping for a ScanSAR image based on a self-attention mechanism and CycleGAN according to claim 3, wherein the generator neural network comprises an encoding part, a first self-attention module, an image conversion part, a second self-attention module, and a decoding part.

5. The method for suppressing scalloping for a ScanSAR image based on a self-attention mechanism and CycleGAN according to claim 3, wherein the neural network model included in the discriminator has five convolutional layers with one input channel; the number of filters in the first convolutional layer is 13 with a step size of 2; the number of filters in the second convolutional layer is 26 with a step size of 2; the number of filters in the third convolutional layer is 52 with a step size of 2; the number of filters in the fourth convolutional layer is 104 with a step size of 1; the number of filters in the fifth convolutional layer is 1 with a step size of 1; and all padding strategies are 1.

6. The method for suppressing scalloping for a ScanSAR image based on a self-attention mechanism and CycleGAN according to claim 3, wherein the learning strategy of the CycleGAN network of the neural network model is as follows:
G_X^*, G_Y^* = \arg\min_{G_X, G_Y} \max_{D_X, D_Y} L(G_X, G_Y, D_X, D_Y) \qquad (8)
NL2032891A 2021-08-29 2022-08-29 ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN NL2032891B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110999708.2A CN113822895A (en) 2021-08-29 2021-08-29 ScanSAR image scallop effect suppression method based on self-attention mechanism and cycleGAN

Publications (2)

Publication Number Publication Date
NL2032891A NL2032891A (en) 2022-09-26
NL2032891B1 true NL2032891B1 (en) 2023-10-09

Family

ID=78923435

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2032891A NL2032891B1 (en) 2021-08-29 2022-08-29 ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN

Country Status (2)

Country Link
CN (1) CN113822895A (en)
NL (1) NL2032891B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546848B (en) * 2022-10-26 2024-02-02 南京航空航天大学 Challenge generation network training method, cross-equipment palmprint recognition method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544442B (en) * 2018-11-12 2023-05-23 南京邮电大学 Image local style migration method of double-countermeasure-based generation type countermeasure network
CN109978165A (en) * 2019-04-04 2019-07-05 重庆大学 A kind of generation confrontation network method merged from attention mechanism
CN110533665B (en) * 2019-09-03 2022-04-05 北京航空航天大学 SAR image processing method for inhibiting scallop effect and sub-band splicing effect
US11625812B2 (en) * 2019-11-01 2023-04-11 Microsoft Technology Licensing, Llc Recovering occluded image data using machine learning
CN111429340A (en) * 2020-03-25 2020-07-17 山东大学 Cyclic image translation method based on self-attention mechanism
CN112232156B (en) * 2020-09-30 2022-08-16 河海大学 Remote sensing scene classification method based on multi-head attention generation countermeasure network
CN112561838B (en) * 2020-12-02 2024-01-30 西安电子科技大学 Image enhancement method based on residual self-attention and generation of countermeasure network

Also Published As

Publication number Publication date
CN113822895A (en) 2021-12-21
NL2032891A (en) 2022-09-26
