CN113822895A - ScanSAR image scallop effect suppression method based on self-attention mechanism and cycleGAN - Google Patents
- Publication number
- CN113822895A CN113822895A CN202110999708.2A CN202110999708A CN113822895A CN 113822895 A CN113822895 A CN 113822895A CN 202110999708 A CN202110999708 A CN 202110999708A CN 113822895 A CN113822895 A CN 113822895A
- Authority
- CN
- China
- Prior art keywords
- image
- self
- cyclegan
- scansar
- scallop effect
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/77
- G06T7/00—Image analysis; G06T7/10—Segmentation; Edge detection
- G01S13/90—Radar or analogous systems for mapping or imaging using synthetic aperture techniques [SAR]; G01S13/904—SAR modes; G01S13/9056—Scan SAR mode
- G01S7/41—Analysis of echo signal for target characterisation; G01S7/417—involving the use of neural networks
- G06T5/00—Image enhancement or restoration; G06T5/50—by the use of more than one image, e.g. averaging, subtraction
- G06T5/60
- G06T2207/10—Image acquisition modality; G06T2207/10032—Satellite or aerial image; Remote sensing; G06T2207/10044—Radar image
- G06T2207/20—Special algorithmic details; G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30—Subject of image; G06T2207/30181—Earth observation
Abstract
The invention provides a ScanSAR image scallop effect suppression method based on a self-attention mechanism and CycleGAN, which comprises the following steps: S1: constructing a ScanSAR image data set; S2: constructing a generative adversarial network model; S3: inputting the data set into the constructed neural network model for training; S4: inputting a ScanSAR image with the scallop effect into the network model trained in the third step. The method processes the scallop effect of ScanSAR images: on the basis of CycleGAN, a novel cycle-consistent generative adversarial network model with long-range dependence is formed by incorporating a self-attention mechanism. The method eliminates the scallop-effect fringes more effectively, so that image quality is markedly improved.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a ScanSAR image scallop effect suppression method based on a self-attention mechanism and cycleGAN.
Background
Synthetic Aperture Radar (SAR) is an active spaceborne microwave remote sensing technology, and the scanning SAR (ScanSAR) mode is one of its important working modes: by changing the antenna angle during operation and scanning several sub-swaths in sequence, it obtains a larger swath width. Due to the scanning mechanism of ScanSAR, its system response is time-varying, and the intensity of the received echo signal changes periodically in both the azimuth and range directions, so that the ScanSAR image exhibits serious non-uniformity. The scallop effect (scalloping) is one of the main causes of this non-uniformity and is a phenomenon inherent to the ScanSAR mode: because the accumulated energy is high in the middle and low at the edges, the imaged scene shows alternating bright and dark stripes. The scallop effect greatly reduces image quality and increases the difficulty of image interpretation. Since the problem originates in the system characteristics and is difficult to overcome in hardware, a software solution must be sought.
Existing methods for removing the scallop effect fall into two categories. The first is applied to the raw SAR signal, i.e., it is typically incorporated into the level-0 SAR processing chain. For example, Bamler removes the scallop effect by constructing an azimuth-position-dependent weighting function that accurately compensates the periodic fluctuation of the ScanSAR echo intensity in the azimuth direction, correcting the scallop effect while keeping the local signal-to-noise ratio constant along azimuth. Shimada performed radiometric and geometric calibration of the JERS-1 imaging satellite, accurately calculated the satellite's antenna pattern and energy-variation law, and carried out more accurate energy-fluctuation correction on that basis.
The second category is based entirely on image post-processing. For example, Romeiser proposed an adaptive Fourier-domain filter, a computationally and logically complex procedure applied to wind-field estimation from C-band ScanSAR images, which finds the Doppler centroid through multiple iterations and gradually corrects the energy fluctuation. Iqbal proposed a scallop-effect removal method based on Kalman filtering, obtaining good de-striping results in simulations and real-data experiments. Sorrentino extended this idea to the frequency domain, separating the ScanSAR stripe information through wavelet transformation.
The biggest problem with the first category is that it requires prior information about the radar sensor as support and cannot correct a problem image on its own; when the raw SAR signal or the sensor's prior information cannot be obtained, such methods cannot be used. The second category is based on image post-processing, and its series of spatial- or frequency-domain operations requires a large amount of derivation and computation, so its stability in removing the scallop effect across different platforms and imaging modes is not high.
Disclosure of Invention
The invention provides a ScanSAR image scallop effect suppression method based on a self-attention mechanism and cycleGAN, which comprises the following steps:
s1: cutting the ScanSAR image to construct a data set;
s2: constructing a cycle-consistent generative adversarial network model with long-range dependence by combining a CycleGAN model with a self-attention mechanism;
s3: inputting the prepared training data set into the neural network model constructed in the second step, and training;
s4: inputting the ScanSAR image with the scallop effect into the network model trained in the third step, and eliminating the fringe phenomenon of the scallop effect.
Further, the step S1: the ScanSAR image is cut to construct a data set; the image is cut into subgraphs of size 512 × 512, which are classified according to whether they show the scallop effect: subgraphs with the scallop-effect problem form the x-class image data set, and normal subgraphs form the y-class image data set; the constructed data set is divided into a training set and a test set at a ratio of 9:1.
Further, the step S2: a cycle-consistent generative adversarial network model with long-range dependence is constructed by combining a CycleGAN model with a self-attention mechanism, comprising the following steps:
s301: constructing a generator neural network;
s302: and constructing a discriminator neural network.
Further, the generator neural network includes: an encoding part, a first self-attention module, an image conversion part, a second self-attention module and a decoding part.
Furthermore, the discriminator neural network model has 5 convolutional layers with an input channel of 1: the first convolutional layer has 13 filters and a stride of 2; the second has 26 filters and a stride of 2; the third has 52 filters and a stride of 2; the fourth has 104 filters and a stride of 1; the fifth has 1 filter and a stride of 1; the padding is 1 throughout.
Further, the learning strategy of the CycleGAN network of the neural network model is as follows:
The invention has the following advantages: it provides a ScanSAR image scallop effect suppression method based on a self-attention mechanism and CycleGAN for processing the scallop effect of ScanSAR images. On the basis of CycleGAN, a novel cycle-consistent generative adversarial network model with long-range dependence is formed by incorporating a self-attention mechanism. The network model learns the features of the images and processes images with the scallop effect using image data alone. Compared with traditional scallop-effect processing methods, it eliminates the scallop-effect fringes more effectively, so that image quality is markedly improved. Moreover, once trained, the model can be reused many times, simplifying the image-processing workflow.
The present invention will be described in detail below with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a flow chart of steps of a ScanSAR image scallop effect processing method of the present invention.
Fig. 2 is a diagram of a self-attention module.
FIG. 3 is a block diagram of a generator incorporating a self-attention mechanism.
Fig. 4 is a diagram showing the structure of the discriminator.
Fig. 5 is a structural diagram of the improved CycleGAN network of the present invention.
FIG. 6 is a schematic diagram of abnormal image cropping.
Fig. 7 is a schematic diagram of normal image cropping.
FIG. 8 is a schematic view of an anomaly image dataset.
Fig. 9 is a schematic diagram of a normal image data set.
FIG. 10 is a dataflow diagram during cycleGAN training.
Fig. 11 is a test original.
Fig. 12 is a graph of the test results.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the intended purpose, the following detailed description of the embodiments, structural features and effects of the present invention will be made with reference to the accompanying drawings and examples.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "aligned", "overlapping", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Example 1
The embodiment provides a ScanSAR image scallop effect suppression method based on a self-attention mechanism and CycleGAN, as shown in figs. 1 to 12, which includes the following steps:
s1: cutting the ScanSAR image to construct a data set;
s2: constructing a cycle-consistent generative adversarial network model with long-range dependence by combining a CycleGAN model with a self-attention mechanism;
s3: inputting the prepared training data set into the neural network model constructed in the second step, and training;
s4: inputting the ScanSAR image with the scallop effect into the network model trained in the third step, and eliminating the fringe phenomenon of the scallop effect.
Further, the step S1: the ScanSAR images are cut to construct the data set. 18 scenes of Gaofen-3 ScanSAR images are used, of which 8 scenes show the scallop effect and 10 scenes are normal ScanSAR images. Each image is cut into several subgraphs of size 512 × 512, and the subgraphs are allowed to partially overlap, as shown in figs. 6 and 7. 400 abnormal subgraphs with the scallop effect are screened out as X-class images, as shown in fig. 8; 480 normal subgraphs are screened out as Y-class images, as shown in fig. 9. The constructed data set is then divided into a training set and a test set at a ratio of 9:1.
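The overlapping 512 × 512 cropping and the 9:1 split can be sketched as follows (a hedged illustration: the tile stride of 448, the shuffle seed, and the function names are assumptions, not the embodiment's actual code):

```python
import random

def tile_coords(height, width, tile=512, stride=448):
    """Top-left corners of tile x tile crops covering an image.

    stride < tile gives the partial overlap the embodiment allows
    (448 is a hypothetical stride); the last row/column is shifted
    inward so every crop fits. Assumes height, width >= tile."""
    ys = list(range(0, height - tile + 1, stride))
    xs = list(range(0, width - tile + 1, stride))
    if ys[-1] != height - tile:
        ys.append(height - tile)
    if xs[-1] != width - tile:
        xs.append(width - tile)
    return [(y, x) for y in ys for x in xs]

def split_9_1(samples, seed=0):
    """Shuffle and divide a sample list into 9:1 train/test sets."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * 0.9)
    return samples[:cut], samples[cut:]
```

With the 400 abnormal subgraphs above, `split_9_1` would leave 360 for training and 40 for testing.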
Further, the step S2: a cycle-consistent generative adversarial network model with long-range dependence is constructed by combining a CycleGAN model with a self-attention mechanism, comprising the following steps:
s301: constructing a generator neural network;
s302: and constructing a discriminator neural network.
Further, as shown in fig. 3, the generator neural network includes: an encoding part, a first self-attention module, an image conversion part, a second self-attention module and a decoding part.
The encoding part consists of three convolution modules; from front to back, each module comprises a convolutional layer, instance normalization and a ReLU activation function. The number of convolution kernels in the three convolutional layers increases successively (to 32 and then 128), ensuring that the network extracts sufficient features.
As shown in fig. 2, the first self-attention module takes the feature map x (the convolution features output by the hidden layers of the preceding part) and projects it through two 1 × 1 convolution kernels into two feature spaces f(x) = W_f·x and g(x) = W_g·x (W_f and W_g are weight matrices), and then uses f and g to calculate attention:

β_{j,i} = exp(s_{ij}) / Σ_i exp(s_{ij}),  where s_{ij} = f(x_i)^T g(x_j)   (1)

where β_{j,i} denotes the attention paid to the i-th location when the j-th position is generated; the softmax normalization of the result yields an attention feature map (attention map). The attention map is then combined with a third feature space h(x) = W_h·x, obtained from another 1 × 1 convolution kernel, to compute the output of the attention layer:

o_j = Σ_i β_{j,i} h(x_i)   (2)

The three weight matrices W_f, W_g and W_h mentioned above are learned through training. Finally, the output of the attention layer is combined with x to obtain the final output y_i = γ·o_i + x_i, where γ is initially set to 0 so that the model can gradually learn to move from simple local representations to global ones. The attention feature map output by the self-attention module is combined with the deep structural features of the images and input into the subsequent network of the generator to complete the generation task.
The image conversion part consists of 7 residual modules and converts between image classes using the extracted features. The residual modules do not change the size of the feature map; their input and output channels are both 128, and their activation functions are all LeakyReLU.
The second self-attention module is the same as the first self-attention module, and is used for highlighting the global features in the feature map after conversion, so that the image restoration quality is improved.
Decoding part: there are 3 deconvolution layers and 1 convolutional layer, with an input channel of 52. The first deconvolution layer has 26 filters, a stride of 2 and padding of 1; the second has 13 filters, a stride of 2 and padding of 1; the final convolutional layer has 1 filter, padding of 0 and a deconvolution kernel size of 7 × 7. The first 2 deconvolution layers use the LeakyReLU activation function, and the last convolutional layer uses Tanh.
As shown in fig. 4, the discriminator neural network is constructed as follows: using the patch strategy of the PatchGAN discriminator, the input image is cut into several 70 × 70 sub-images, which are input into the discriminator's convolutional neural network. The discriminator model has 5 convolutional layers with an input channel of 1: the first convolutional layer has 13 filters and a stride of 2; the second has 26 filters and a stride of 2; the third has 52 filters and a stride of 2; the fourth has 104 filters and a stride of 1; the fifth has 1 filter and a stride of 1; the padding is 1 throughout. All activation functions are LeakyReLU. The discriminator finally outputs a single-channel prediction map.
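Tracing a 70 × 70 patch through the five convolutional layers shows the size of the resulting prediction map. The text gives only filter counts, strides and padding, so the 4 × 4 kernel used here is a hypothetical choice typical of PatchGAN discriminators:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def discriminator_map_size(size=70, kernel=4,
                           strides=(2, 2, 2, 1, 1), pad=1):
    """Trace a patch through the five convolutional layers described
    above (strides 2, 2, 2, 1, 1 with padding 1); the 4x4 kernel is
    an assumption, not stated in the text."""
    for stride in strides:
        size = conv_out(size, kernel, stride, pad)
    return size
```

Under the 4 × 4 kernel assumption, a 70 × 70 patch shrinks as 70 → 35 → 17 → 8 → 7 → 6, giving a 6 × 6 prediction map.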
as shown in fig. 5, a neural network model is constructed according to the CycleGAN theoretical model, specifically as follows:
there are two generators G in the CycleGAN modelX、GYForm a ring network, and each side has a discriminator DX、DY。GXAnd DY、GYAnd DXEach constituting a GAN. Meanwhile, the two GANs are symmetrical, and a ring network is formed by the two symmetrical GANs to form a cycleGAN.
In the model, there are x and y classes of images, x passing through the generator GXGenerating a false y-type image, namely: gX(X) and, at the same time, a discriminator DYWill determine the image G generated by the generatorX(X) is true or false; similarly, y is through generator GYGenerating a false x-type image, namely: gY(Y) and, at the same time, a discriminator DXWill determine the image G generated by the generatorY(Y) is true or false.
Therefore, the adversarial loss for G_X and its discriminator D_Y is:

L_GAN(G_X, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 − D_Y(G_X(x)))]   (3)

and the adversarial loss for G_Y and its discriminator D_X is:

L_GAN(G_Y, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 − D_X(G_Y(y)))]   (4)

where E_{x~p_data(x)}[f(x)] denotes the expectation of f(x) when the random variable x follows the distribution p_data(x).
These adversarial losses are the same as in the original GAN; CycleGAN additionally introduces cycle consistency (cycle consistency), illustrated in fig. 5. Cycle consistency means that, ideally, after an X-class image is translated into a Y-class image by the generator G_X, the generator G_Y should translate it back into the original X-class image, i.e.:

G_Y(G_X(x)) ≈ x,  G_X(G_Y(y)) ≈ y   (5)

Satisfying cycle consistency prevents all X-class images from being mapped to the same Y-class image, ensuring the robustness of the CycleGAN model. The cycle-consistency loss (cycle consistency loss) of the CycleGAN model is therefore:

L_cyc(G_X, G_Y) = E_{x~p_data(x)}[‖G_Y(G_X(x)) − x‖_1] + E_{y~p_data(y)}[‖G_X(G_Y(y)) − y‖_1]   (6)
The complete CycleGAN objective is as follows, where the coefficient λ controls the relative importance of the two kinds of objectives:

L(G_X, G_Y, D_X, D_Y) = L_GAN(G_X, D_Y, X, Y) + L_GAN(G_Y, D_X, Y, X) + λ·L_cyc(G_X, G_Y)   (7)
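The objective of equation (7) can be evaluated numerically for scalar toy values (a sketch only: real inputs are images, the expectations are batch averages, and the discriminator scores here are made-up probabilities):

```python
import math

def gan_loss(d_real, d_fake):
    """Adversarial term: E[log D(real)] + E[log(1 - D(fake))]."""
    return (sum(math.log(p) for p in d_real) / len(d_real)
            + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

def cycle_loss(pairs_x, pairs_y):
    """L1 cycle term: mean |G_Y(G_X(x)) - x| + mean |G_X(G_Y(y)) - y|,
    with each pair holding (reconstruction, original) as scalars."""
    lx = sum(abs(a - b) for a, b in pairs_x) / len(pairs_x)
    ly = sum(abs(a - b) for a, b in pairs_y) / len(pairs_y)
    return lx + ly

def full_objective(dy_real, dy_fake, dx_real, dx_fake,
                   pairs_x, pairs_y, lam=5.0):
    """Equation (7): the two adversarial terms plus lambda times the
    cycle-consistency term (lam = 5 matches the training setup)."""
    return (gan_loss(dy_real, dy_fake)
            + gan_loss(dx_real, dx_fake)
            + lam * cycle_loss(pairs_x, pairs_y))
```

A perfect discriminator on perfect reconstructions gives zero total loss; any cycle mismatch is amplified by λ.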
therefore, the strategy for CycleGAN network learning is as follows:
After the network model of the method is built, the learning parameters are set as follows: the batch size is 4 and the initial learning rate is 0.0002; the learning rate is kept at 0.0002 for the first 5000 iterations and then decays linearly; the optimizer is the Adam algorithm, and λ is set to 5. During training, an appropriate number of images is selected to construct each batch rather than treating a single image as one sample, which ensures that each batch contains rich sample characteristics while keeping an appropriate distance between different samples, avoiding model collapse. The two-time-scale update rule (TTUR) is used to balance the learning rates of the discriminator network and the generator network, so that both the generator and the discriminator converge stably, solving the slow-learning problem of the regularized discriminator.
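The learning-rate schedule just described (constant for the first 5000 iterations, then linear decay) might be written as follows; the total iteration count is an assumption borrowed from the ~15000 epochs quoted in the experiments, since the text only says the rate decays linearly:

```python
def learning_rate(step, base=2e-4, constant_for=5000, total=15000):
    """Learning-rate schedule: constant at `base` for the first
    `constant_for` iterations, then linear decay to zero at `total`
    (`total` is a hypothetical value, not stated for the schedule)."""
    if step < constant_for:
        return base
    frac = (step - constant_for) / (total - constant_for)
    return base * max(0.0, 1.0 - frac)
```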
The data flow in the model at the time of training is shown in fig. 10.
(1) The abnormal (X-class) image real_A is input into the generator Gy, and the generated image fake_B is obtained after passing through the generator network.
(2) fake_B is input into the discriminator Dy, which judges its class, outputting 1 if it judges that fake_B belongs to the Y class, and otherwise 0.
(3) fake_B is input into the generator Gx, and the generated image cyc_A is obtained after passing through the generator network.
(4) The normal (Y-class) image real_B is input into the generator Gx, and the generated image fake_A is obtained after passing through the generator network.
(5) fake_A is input into the discriminator Dx, which judges its class, outputting 1 if it judges that fake_A belongs to the X class, and otherwise 0.
(6) fake_A is input into the generator Gy, and the generated image cyc_B is obtained after passing through the generator network.
(7) The generative adversarial loss and cycle-consistency loss of the model are calculated, and the network parameters are updated according to the obtained losses, finally minimizing the differences between fake_B and real_B, fake_A and real_A, cyc_A and real_A, and cyc_B and real_B.
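The data flow of steps (1)-(7) can be followed with the networks abstracted as plain callables (a structural sketch of the wiring only, not the actual CycleGAN networks):

```python
def training_step(real_A, real_B, G_y, G_x, D_x, D_y):
    """One forward pass of the fig. 10 data flow, steps (1)-(7).

    G_y maps X-class inputs towards Y-class images and G_x maps
    Y-class inputs towards X-class images; the discriminators score
    their inputs. All six networks are passed in as callables."""
    fake_B = G_y(real_A)    # (1) abnormal X-class image -> generated Y-class
    score_B = D_y(fake_B)   # (2) discriminator judges fake_B's class
    cyc_A = G_x(fake_B)     # (3) translate back; should recover real_A
    fake_A = G_x(real_B)    # (4) normal Y-class image -> generated X-class
    score_A = D_x(fake_A)   # (5) discriminator judges fake_A's class
    cyc_B = G_y(fake_A)     # (6) translate back; should recover real_B
    # (7) losses compare fake_B/real_B, fake_A/real_A, cyc_A/real_A, cyc_B/real_B
    return fake_B, cyc_A, fake_A, cyc_B, score_B, score_A
```

With toy generators that are exact inverses of each other, the cycle reconstructions recover the inputs exactly, which is the ideal that the cycle-consistency loss pushes towards.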
In order to verify the processing effect of the method on the ScanSAR image scallop effect, after the training of nearly 15000 epochs, the network obtains better weight through convergence, the abnormal image of the test set is input into the network, as shown in figure 11, the network outputs a result image, as shown in figure 12, obviously, the network can better inhibit the stripe phenomenon generated by the abnormal image scallop effect, the integral gray scale of the image is basically consistent and moderate in brightness, the image quality is obviously improved, and the method has greater practical significance.
In summary, the ScanSAR image scallop effect suppression method based on the self-attention mechanism and the cycleGAN processes the scallop effect of the ScanSAR image. On the basis of the cycleGAN, a novel cycle consistent countermeasure generation network model with remote dependence is formed by combining a self-attention mechanism. The network model can learn the features of the image and process the image with scallop effect only in case of applying the image data. Compared with the traditional scallop effect processing method, the method has the capability of more effectively eliminating the scallop effect fringe phenomenon of the image, so that the image quality is obviously improved. And once the model is trained, the model can be repeatedly used for many times, and the image processing process is simplified.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.
Claims (6)
1. A ScanSAR image scallop effect suppression method based on a self-attention mechanism and cycleGAN is characterized by comprising the following steps:
s1: cutting the ScanSAR image to construct a data set;
s2: constructing a cycle-consistent generative adversarial network model with long-range dependence by combining a CycleGAN model with a self-attention mechanism;
s3: inputting the prepared training data set into the neural network model constructed in the second step, and training;
s4: inputting the ScanSAR image with the scallop effect into the network model trained in the third step, and eliminating the fringe phenomenon of the scallop effect.
2. The ScanSAR image scallop effect suppression method based on the self-attention mechanism and CycleGAN as claimed in claim 1, characterized in that: the S1: the ScanSAR image is cut to construct a data set; the image is cut into subgraphs of size 512 × 512, which are classified according to whether they show the scallop effect: subgraphs with the scallop-effect problem form the x-class image data set, and normal subgraphs form the y-class image data set; the constructed data set is divided into a training set and a test set at a ratio of 9:1.
3. The ScanSAR image scallop effect suppression method based on the self-attention mechanism and CycleGAN as claimed in claim 1, characterized in that: the S2: a cycle-consistent generative adversarial network model with long-range dependence is constructed by combining a CycleGAN model with a self-attention mechanism, comprising the following steps:
s301: constructing a generator neural network;
s302: and constructing a discriminator neural network.
4. The ScanSAR image scallop effect suppression method based on the self-attention mechanism and CycleGAN as claimed in claim 3, characterized in that: the generator neural network includes an encoding part, a first self-attention module, an image conversion part, a second self-attention module and a decoding part.
5. The ScanSAR image scallop effect suppression method based on the self-attention mechanism and the cycleGAN as claimed in claim 3, characterized in that: the discriminator neural network comprises 5 convolutional layers with a single input channel; the first convolutional layer has 13 filters with a stride of 2; the second has 26 filters with a stride of 2; the third has 52 filters with a stride of 2; the fourth has 104 filters with a stride of 1; the fifth has 1 filter with a stride of 1; the padding is 1 for all layers.
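Claim 5 fixes the filter counts, strides, and padding but not the kernel size. Assuming the 4 × 4 kernels typical of PatchGAN-style discriminators, the output size of each layer for a 512 × 512 sub-image follows the standard convolution arithmetic:

```python
def conv_out(size, kernel=4, stride=1, pad=1):
    """Standard convolution output size: floor((size + 2*pad - kernel)/stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# (filters, stride) per claim 5; kernel=4 is an assumption, not stated in the claim.
layers = [(13, 2), (26, 2), (52, 2), (104, 1), (1, 1)]

size, channels = 512, 1  # 512x512 single-channel sub-image from claim 2
for filters, stride in layers:
    size = conv_out(size, kernel=4, stride=stride, pad=1)
    channels = filters
    print(f"{channels:3d} filters, stride {stride} -> {size}x{size}")
# Spatial sizes shrink 512 -> 256 -> 128 -> 64 -> 63 -> 62 under these assumptions.
```

The final single-channel 62 × 62 map scores overlapping patches of the input rather than emitting one global real/fake score, which is the usual PatchGAN design choice.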
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110999708.2A CN113822895A (en) | 2021-08-29 | 2021-08-29 | ScanSAR image scallop effect suppression method based on self-attention mechanism and cycleGAN |
NL2032891A NL2032891B1 (en) | 2021-08-29 | 2022-08-29 | ScanSAR image scallop effect inhibition method based on self-attention mechanism and CycleGAN |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113822895A true CN113822895A (en) | 2021-12-21 |
Family
ID=78923435
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110999708.2A Pending CN113822895A (en) | 2021-08-29 | 2021-08-29 | ScanSAR image scallop effect suppression method based on self-attention mechanism and cycleGAN |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113822895A (en) |
NL (1) | NL2032891B1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109544442A (en) * | 2018-11-12 | 2019-03-29 | Nanjing University of Posts and Telecommunications | Image local style transfer method using a generative adversarial network based on dual adversarial learning |
CN109978165A (en) * | 2019-04-04 | 2019-07-05 | Chongqing University | Generative adversarial network method fused with a self-attention mechanism |
CN111429340A (en) * | 2020-03-25 | 2020-07-17 | Shandong University | Cyclic image translation method based on self-attention mechanism |
CN112232156A (en) * | 2020-09-30 | 2021-01-15 | Hohai University | Remote sensing scene classification method based on multi-head attention generative adversarial network |
CN112561838A (en) * | 2020-12-02 | 2021-03-26 | Xidian University | Image enhancement method based on residual self-attention and generative adversarial network |
US20210133936A1 (en) * | 2019-11-01 | 2021-05-06 | Microsoft Technology Licensing, Llc | Recovering occluded image data using machine learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110533665B * | 2019-09-03 | 2022-04-05 | Beihang University | SAR image processing method for suppressing the scallop effect and the sub-band stitching effect |
- 2021-08-29: CN application CN202110999708.2A filed; published as CN113822895A (status: Pending)
- 2022-08-29: NL application NL2032891A filed; granted as NL2032891B1 (active)
Non-Patent Citations (2)
Title |
---|
YU LU ET AL.: "Image Translation with Attention Mechanism based on Generative Adversarial Networks", BING, 10 August 2020 (2020-08-10), pages 364 - 369 *
PAN ZONGXU; AN QUANZHI; ZHANG BINGCHEN: "Research progress on deep-learning-based radar image target recognition", SCIENTIA SINICA INFORMATIONIS, no. 12, 20 December 2019 (2019-12-20) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115546848A (en) * | 2022-10-26 | 2022-12-30 | Nanjing University of Aeronautics and Astronautics | Generative adversarial network training method, cross-device palmprint recognition method and system |
CN115546848B (en) * | 2022-10-26 | 2024-02-02 | Nanjing University of Aeronautics and Astronautics | Generative adversarial network training method, cross-device palmprint recognition method and system |
Also Published As
Publication number | Publication date |
---|---|
NL2032891A (en) | 2022-09-26 |
NL2032891B1 (en) | 2023-10-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734171A (en) | SAR remote sensing image ocean floating raft recognition method based on deep collaborative sparse coding network | |
CN113156439B (en) | SAR wind field and sea wave joint inversion method and system based on data driving | |
CN112488924A (en) | Image super-resolution model training method, reconstruction method and device | |
CN106226212A (en) | Hyperspectral haze monitoring method based on deep residual network | |
CN109884625B (en) | Radar correlation imaging method based on convolutional neural network | |
CN104318246A (en) | Polarimetric SAR (Synthetic Aperture Radar) image classification based on deep self-adaptive ridgelet network | |
CN111144234A (en) | Video SAR target detection method based on deep learning | |
CN107595312A (en) | Model generating method, image processing method and medical imaging devices | |
CN107301641A (en) | Remote sensing image change detection method and device | |
CN113822895A (en) | ScanSAR image scallop effect suppression method based on self-attention mechanism and cycleGAN | |
CN112989940B (en) | Raft culture area extraction method based on Gaofen-3 satellite SAR images | |
CN107392861B (en) | SAR image speckle reduction method based on sparse representation with a Gaussian scale mixture model | |
CN109886910A (en) | External digital elevation model (DEM) correction method and device | |
CN112883908A (en) | Space-frequency characteristic consistency-based SAR image-to-optical image mapping method | |
CN114037891A (en) | High-resolution remote sensing image building extraction method and device based on U-shaped attention control network | |
CN113111706A (en) | SAR target feature disentanglement and recognition method under continuously missing azimuth angles | |
CN110599423A (en) | SAR image brightness compensation method based on deep learning cycleGAN model processing | |
CN115860113A (en) | Training method and related device for self-adversarial neural network model | |
CN107832805B (en) | Method for eliminating the influence of spatial position error on remote sensing soft classification accuracy assessment based on a probabilistic location model | |
CN113344846B (en) | Remote sensing image fusion method and system based on generation countermeasure network and compressed sensing | |
Luo et al. | Despeckling multi-temporal polarimetric SAR data based on tensor decomposition | |
Xu et al. | DesNet: Deep residual networks for descalloping of ScanSAR images | |
CN114882473A (en) | Road extraction method and system based on fully convolutional neural network | |
CN113743373A (en) | High-resolution remote sensing image cropland change detection device and method based on deep learning | |
Wang et al. | Unsupervised SAR Despeckling by Combining Online Speckle Generation and Unpaired Training |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||