CN117114984A - Remote sensing image super-resolution reconstruction method based on a generative adversarial network - Google Patents
- Publication number: CN117114984A
- Application number: CN202310787153.4A
- Authority: CN (China)
- Legal status: Pending (an assumption by Google, not a legal conclusion)
Classifications
- G06T 3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06N 3/045 - Combinations of networks
- G06N 3/0464 - Convolutional networks [CNN, ConvNet]
- G06N 3/0475 - Generative networks
- G06N 3/08 - Learning methods
Abstract
The invention provides a remote sensing image super-resolution reconstruction method based on a generative adversarial network (GAN), comprising the following steps: first, construct a high-order degradation model that simulates the actual degradation process of remote sensing images, yielding paired high-resolution and low-resolution remote sensing images; second, input the low-resolution remote sensing image into a remote sensing image generator built from ultra-dense residual modules to generate a reconstructed super-resolution image; then, classify and discriminate between the high-resolution remote sensing image and the reconstructed super-resolution image with a U-shaped remote sensing image discriminator that fuses recursive residuals with an attention mechanism; optimize the network parameters by computing the L1 loss, the perceptual loss, and the GAN loss to obtain a remote sensing image super-resolution reconstruction model; finally, use this model to generate super-resolution reconstructed remote sensing images. The invention effectively suppresses blurring and artifacts in the reconstructed image and achieves high-quality super-resolution reconstruction of remote sensing images.
Description
Technical Field
The invention relates to the technical field of remote sensing image reconstruction, and in particular to a remote sensing image super-resolution reconstruction method based on a generative adversarial network.
Background
With the continuous development of remote sensing technology, remote sensing images have been widely applied in fields such as disaster early warning, environmental change monitoring, and military reconnaissance, owing to their wide coverage and low susceptibility to ground conditions. For a remote sensing image, the resolution determines how much information the image carries, so high-resolution images are required to obtain accurate results in object detection, classification, and scene change detection. China's remote sensing technology is developing rapidly, and satellite series such as Fengyun, Gaofen, and Beidou have gradually been integrated into many industries. However, limited by hardware conditions such as imaging sensors and the manufacturing precision of optical elements, as well as by signal transmission bandwidth, high-resolution remote sensing images are difficult to obtain directly, and unknown degradations such as blurring or noise arise during imaging and transmission. Improving the capture hardware, i.e. using better-manufactured, higher-precision optical components and image sensors, can raise image quality, but is difficult and costly to implement and hard to popularize in practice. Super-resolution reconstruction of remote sensing images by algorithm has therefore gradually become the mainstream solution.
Before the rise of deep learning, common remote sensing image super-resolution methods fell into two categories: interpolation-based reconstruction and reconstruction based on image prior information. The former directly upsamples a low-resolution image to high resolution by pixel interpolation, but the reconstructed image suffers from missing high-frequency information, blurring, and jagged edges. The latter requires explicit image prior information, which limits its reconstruction performance. With the development of deep learning, super-resolution methods based on convolutional neural networks excel at recovering high-frequency information; however, because the network depth is limited, feature information cannot be fully exploited, and the reconstructed image still exhibits local blurring, particularly jagged edges and artifacts along the detailed textures where ground objects meet.
Disclosure of Invention
Aiming at the problems of image blurring and artifacts in existing remote sensing image super-resolution reconstruction methods, the invention provides a remote sensing image super-resolution reconstruction method based on a generative adversarial network.
The technical scheme of the invention is realized as follows:
A remote sensing image super-resolution reconstruction method based on a generative adversarial network comprises the following steps:
Step one: Construct a remote sensing image high-order degradation model simulating the actual degradation process, and input the high-resolution remote sensing image I_HR into it to obtain the corresponding low-resolution remote sensing image I_LR.
Step two: Construct a remote sensing image super-resolution reconstruction network comprising a generator built from ultra-dense residual modules and a U-shaped discriminator fusing recursive residuals with an attention mechanism, and use the paired high-resolution images I_HR and low-resolution images I_LR as the training set of the network.
Step three: Input the low-resolution remote sensing image I_LR into the generator built from ultra-dense residual modules to generate the reconstructed super-resolution image I_SR.
Step four: Use the U-shaped remote sensing image discriminator fusing recursive residuals with an attention mechanism to classify the high-resolution remote sensing image I_HR against the reconstructed super-resolution image I_SR.
Step five: Optimize the parameters of the reconstruction network by computing the L1 loss, the perceptual loss, and the GAN loss until the network converges, completing training and yielding the remote sensing image super-resolution reconstruction model.
Step six: Input the low-resolution remote sensing image to be reconstructed into the model to generate the super-resolution reconstructed remote sensing image.
Preferably, the remote sensing image high-order degradation model comprises a first stage, a second stage, and downsampling. The input of the first stage is the high-resolution remote sensing image I_HR; the output of the first stage feeds the second stage, and the output of the second stage feeds the downsampling step, whose scale factor is selected at random. Downsampling outputs multiple low-resolution remote sensing images I_LR with different degrees of degradation, which are paired with I_HR to form multiple HR-LR remote sensing image pairs as the training set. Both stages comprise two processes, adding blur and adding noise: the blur kernel types include a Gaussian blur kernel and a sinc blur kernel, and the noise types include Gaussian noise and Poisson noise. Each blur kernel type and noise type occurs with a set probability, the probabilities of the first stage differing from those of the second; the blur kernel size is selected at random, and multiple iterations are performed.
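The two-stage degradation pipeline described above can be sketched on a 1-D signal in plain Python. This is an illustrative sketch only: the probabilities, sigma range, and scale factors are assumed values, and a single Gaussian kernel stands in for the patent's Gaussian/sinc kernel pair (and Gaussian noise for its Gaussian/Poisson pair).

```python
import math
import random

def gaussian_kernel(size=5, sigma=1.0):
    """Discrete 1-D Gaussian blur kernel, normalized to sum to 1."""
    c = size // 2
    k = [math.exp(-((i - c) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    s = sum(k)
    return [v / s for v in k]

def blur(signal, kernel):
    """Same-size convolution with edge clamping."""
    c = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - c, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def degrade_stage(signal, rng, blur_prob, noise_prob):
    """One degradation stage: blur and noise, each applied with a set probability."""
    if rng.random() < blur_prob:
        sigma = rng.uniform(0.5, 2.0)          # kernel width chosen at random
        signal = blur(signal, gaussian_kernel(5, sigma))
    if rng.random() < noise_prob:
        signal = [v + rng.gauss(0.0, 0.05) for v in signal]
    return signal

def high_order_degradation(hr, seed=0):
    """Two stages with different probabilities, then random-factor downsampling."""
    rng = random.Random(seed)
    x = degrade_stage(hr, rng, blur_prob=0.9, noise_prob=0.9)   # first stage
    x = degrade_stage(x, rng, blur_prob=0.5, noise_prob=0.5)    # second stage
    factor = rng.choice([2, 4])                                 # random scale factor
    return x[::factor], factor

hr = [math.sin(i / 4.0) for i in range(64)]   # stand-in for one HR image row
lr, factor = high_order_degradation(hr)
print(len(hr), len(lr), factor)
```

Running the second stage with lower probabilities than the first mirrors the patent's statement that the two stages use different set probabilities.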
Preferably, the remote sensing image generator built from ultra-dense residual modules comprises a first convolution layer, an ultra-dense residual module group, a second convolution layer, an upsampling module, a third convolution layer, and a fourth convolution layer. The low-resolution remote sensing image I_LR is input to the first convolution layer; the output features of the first convolution layer are input to the ultra-dense residual module group; the output features of the module group are added to the output features of the first convolution layer and input to the second convolution layer; the output features of the second convolution layer are input to the upsampling module, then to the third convolution layer, then to the fourth convolution layer, which outputs the reconstructed super-resolution image I_SR. The ultra-dense residual module group comprises 23 sequentially connected ultra-dense residual modules (RRSDB); the output features of the last RRSDB are added to the output features of the first convolution layer and serve as the input features of the second convolution layer.
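The generator's data flow can be checked with a small shape-tracing sketch in pure Python. The channel width of 64 and the ×4 scale factor are assumptions made for illustration, not values stated in the patent; the sketch only confirms that the described wiring (shape-preserving RRSDB trunk, global residual addition, upsampling between the second and third convolutions) is consistent.

```python
def conv3x3(shape, out_ch):
    """A 3x3 conv with padding 1 keeps the spatial size and sets the channel count."""
    _, h, w = shape
    return (out_ch, h, w)

def upsample(shape, scale):
    ch, h, w = shape
    return (ch, h * scale, w * scale)

def generator_shapes(lr_shape, n_blocks=23, feat=64, scale=4):
    """Trace tensor shapes through the generator described in the patent."""
    x = conv3x3(lr_shape, feat)            # first conv: shallow features
    trunk = x
    for _ in range(n_blocks):              # 23 RRSDBs, each shape-preserving
        trunk = conv3x3(trunk, feat)
    assert trunk == x                      # so the global residual add is legal
    x = conv3x3(trunk, feat)               # second conv
    x = upsample(x, scale)                 # upsampling module
    x = conv3x3(x, feat)                   # third conv
    x = conv3x3(x, 3)                      # fourth conv: RGB output
    return x

print(generator_shapes((3, 64, 64)))   # → (3, 256, 256)
```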
Preferably, the RRSDB comprises convolution layer I, LReLU activation function I, convolution layer II, LReLU activation function II, convolution layer III, LReLU activation function III, convolution layer IV, LReLU activation function IV, and convolution layer V. The input feature F_in is input to convolution layer I, whose output is passed through LReLU activation function I; the output of LReLU activation function I is added to F_in to obtain the first feature F_1. F_1 is input to convolution layer II, whose output is passed through LReLU activation function II; the output of LReLU activation function II is added to F_in and F_1 to obtain the second feature F_2. F_2 is input to convolution layer III, whose output is passed through LReLU activation function III; the output of LReLU activation function III is added to F_in, F_1, and F_2 to obtain the third feature F_3. F_3 is input to convolution layer IV, whose output is passed through LReLU activation function IV; the output of LReLU activation function IV is added to F_in, F_1, and F_2 to obtain the fourth feature F_4. F_4 is input to convolution layer V, which yields the output feature F_out. The expressions for F_1, F_2, F_3, F_4, and F_out are respectively:
F_1 = C(F_in) + F_in
F_2 = C(F_1) + 2F_in + F_1
F_3 = C(F_2) + F_in + F_1 + F_2
F_4 = C(F_3) + F_in + F_1 + 2F_2
F_out = C(F_4)
where C(·) denotes the convolution operation.
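The skip-connection arithmetic of one RRSDB can be verified directly from the five expressions above. With the identity function standing in for C(·) (a stand-in, not the real convolution), a scalar input traces how the dense residual additions accumulate:

```python
def rrsdb(F_in, C):
    """Feature accumulation of one RRSDB; C is a stand-in for conv + LReLU.
    Mirrors: F1=C(Fin)+Fin; F2=C(F1)+2Fin+F1; F3=C(F2)+Fin+F1+F2;
             F4=C(F3)+Fin+F1+2F2; Fout=C(F4)."""
    F1 = C(F_in) + F_in
    F2 = C(F1) + 2 * F_in + F1
    F3 = C(F2) + F_in + F1 + F2
    F4 = C(F3) + F_in + F1 + 2 * F2
    return C(F4)

# With the identity as C, a unit input exposes the pure skip-connection gain:
# F1 = 2, F2 = 6, F3 = 15, F4 = 30, Fout = 30.
out = rrsdb(1.0, lambda x: x)
print(out)  # 30.0
```

The rapid growth of the accumulated value illustrates why the module can propagate feature information densely without adding depth.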
Preferably, the U-shaped remote sensing image discriminator fusing recursive residuals with an attention mechanism comprises a first recursive residual convolution module, a first max-pooling module, a second recursive residual convolution module, a second max-pooling module, a third recursive residual convolution module, a third max-pooling module, a fourth recursive residual convolution module, a fourth max-pooling module, a fifth recursive residual convolution module, a first upsampling module, a sixth recursive residual convolution module, a second upsampling module, a seventh recursive residual convolution module, a third upsampling module, an eighth recursive residual convolution module, a fourth upsampling module, a ninth recursive residual convolution module, and a convolution layer.
The high-resolution remote sensing image I_HR and the reconstructed super-resolution image I_SR pass through the first recursive residual convolution module, which extracts the first feature; the first feature passes through an attention gate module to give the first output feature.
The first feature then passes through the first max-pooling module and the second recursive residual convolution module to give the second feature; the second feature passes through an attention gate module to give the second output feature.
The second feature passes through the second max-pooling module and the third recursive residual convolution module to give the third feature; the third feature passes through an attention gate module to give the third output feature.
The third feature passes through the third max-pooling module and the fourth recursive residual convolution module to give the fourth feature; the fourth feature passes through an attention gate module to give the fourth output feature.
The fourth feature passes through the fourth max-pooling module and the fifth recursive residual convolution module to give the fifth feature; the fifth feature passes through the first upsampling module to give the first upsampled feature.
The first upsampled feature is concatenated with the fourth output feature and then passes through the sixth recursive residual convolution module and the second upsampling module to give the second upsampled feature.
The second upsampled feature is concatenated with the third output feature and then passes through the seventh recursive residual convolution module and the third upsampling module to give the third upsampled feature.
The third upsampled feature is concatenated with the second output feature and then passes through the eighth recursive residual convolution module and the fourth upsampling module to give the fourth upsampled feature.
The fourth upsampled feature is concatenated with the first output feature and then passes through the ninth recursive residual convolution module and the convolution layer to give the image discrimination result.
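Because each decoder feature is concatenated with an attention-gated encoder feature, the four max-pooling stages and four upsampling stages must produce matching spatial sizes. A small bookkeeping sketch confirms this; the input size of 64 is an assumed example, not a value from the patent.

```python
def u_discriminator_sizes(size=64):
    """Spatial sizes along the U-shaped discriminator's encoder/decoder path."""
    enc = [size]
    for _ in range(4):                 # four max-pooling stages halve the size
        enc.append(enc[-1] // 2)
    dec = [enc[-1]]
    for _ in range(4):                 # four upsampling stages double it back
        dec.append(dec[-1] * 2)
    # Each upsampled feature is concatenated with the attention-gated encoder
    # feature of the same resolution, so the sizes must line up pairwise:
    for up, skip in zip(dec[1:], reversed(enc[:-1])):
        assert up == skip
    return enc, dec

print(u_discriminator_sizes(64))
```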
Preferably, the recursive residual convolution module comprises convolution layer VI, a residual convolution layer, and an adder; the input features pass through convolution layer VI, then through the residual convolution layer applied with two recursions, and the resulting features are added to the input features to obtain the output features.
Preferably, the residual convolution layer comprises convolution layer VII, ReLU activation function I, convolution layer VIII, ReLU activation function II, and an adder; the input features pass sequentially through convolution layer VII, ReLU activation function I, convolution layer VIII, and ReLU activation function II, and the resulting features are added to the input features to obtain the output features.
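The two modules above can be traced with scalar stand-ins for the convolutions. The lambdas below are hypothetical toy weights, used only to exercise the two recursions and the inner and outer skip connections, not the patent's actual layers.

```python
def residual_conv_layer(x, conv1, conv2):
    """conv VII -> ReLU -> conv VIII -> ReLU, plus an inner skip connection."""
    relu = lambda v: max(v, 0.0)
    return relu(conv2(relu(conv1(x)))) + x

def recursive_residual_module(x, conv_vi, conv1, conv2, recursions=2):
    """Conv VI, then the residual conv layer applied twice (two recursions),
    then an outer addition with the module input."""
    y = conv_vi(x)
    for _ in range(recursions):
        y = residual_conv_layer(y, conv1, conv2)
    return y + x

# Scalar stand-ins just to trace the data flow:
# y = 2; recursion 1 -> 5; recursion 2 -> 11; outer skip -> 12.
out = recursive_residual_module(1.0, lambda v: 2 * v, lambda v: v + 1, lambda v: v)
print(out)  # 12.0
```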
Preferably, the expressions for the L1 loss, the perceptual loss, and the GAN loss are respectively:
L1_loss = ||I_SR - I_HR||_1
Perceptual_loss = ||F(I_SR) - F(I_HR)||_1
GAN_loss = ln[D(I_HR)] - ln[1 - D(I_SR)]
Total_loss = λ_1 * L1_loss + λ_2 * Perceptual_loss + λ_3 * GAN_loss
where L1_loss is the L1 loss, Perceptual_loss is the perceptual loss, GAN_loss is the GAN loss, and Total_loss is the total loss of the network; ||·||_1 denotes the L1 norm; F denotes the feature extraction network, F(I_SR) denotes the features the feature extraction network extracts from the reconstructed super-resolution image I_SR, and F(I_HR) denotes the features it extracts from the high-resolution remote sensing image I_HR; D(I_HR) denotes the probability that the discriminator network judges I_HR to be real, and D(I_SR) denotes the probability that the discriminator network judges I_SR to be real; λ_1, λ_2, λ_3 denote the weighting coefficients of the L1 loss, the perceptual loss, and the GAN loss, respectively.
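The four loss terms can be sketched in plain Python. The toy feature network, the sample pixel values, and the weighting coefficients below are illustrative assumptions; GAN_loss follows the expression exactly as written in the patent text.

```python
import math

def l1_loss(sr, hr):
    """Mean absolute (L1) difference between reconstructed and reference pixels."""
    return sum(abs(a - b) for a, b in zip(sr, hr)) / len(sr)

def perceptual_loss(sr, hr, feat):
    """L1 distance in the feature space of an extraction network F (stand-in)."""
    return l1_loss(feat(sr), feat(hr))

def gan_loss(d_hr, d_sr):
    """Adversarial term from discriminator probabilities, as written above."""
    return math.log(d_hr) - math.log(1.0 - d_sr)

def total_loss(sr, hr, feat, d_hr, d_sr, lambdas=(1.0, 1.0, 0.1)):
    l1, l2, l3 = lambdas          # weighting coefficients (values assumed)
    return (l1 * l1_loss(sr, hr)
            + l2 * perceptual_loss(sr, hr, feat)
            + l3 * gan_loss(d_hr, d_sr))

feat = lambda img: [v * v for v in img]   # toy stand-in for the feature network
sr, hr = [0.5, 0.25], [1.0, 0.0]
print(total_loss(sr, hr, feat, d_hr=0.9, d_sr=0.5))
```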
Compared with the prior art, the invention has the following beneficial effects:
1) The remote sensing image high-order degradation model can simulate the degradation process of remote sensing images in real scenes. It not only markedly improves the quality of the reconstructed high-resolution remote sensing image but also effectively improves the generalization ability of the network model, addressing the problem that most existing algorithms rely on a single bicubic downsampling, which struggles to simulate real-scene degradation and yields unsatisfactory reconstruction quality.
2) The remote sensing image generator built from ultra-dense residual modules (RRSDB) can effectively improve the quality of the reconstructed remote sensing image without deepening the network, addressing the problem that, constrained by network depth, the generator networks of existing GAN-based image super-resolution reconstruction algorithms cannot fully exploit the feature information of the remote sensing image, so the generated image is blurred.
3) The U-shaped remote sensing image discriminator fusing recursive residuals with an attention mechanism can make finer classification judgments between real and generated remote sensing images and provide more accurate, detailed feedback to the generator, so the reconstructed super-resolution image has finer texture details and better subjective perception. This addresses the problem that existing GAN-based discriminator networks can only discriminate images globally, causing blurring and artifacts in the details of generated images.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings required in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the invention; other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of the network training of the present invention;
FIG. 2 is a network structure diagram of a remote sensing image high-order degradation model of the invention;
FIG. 3 is a diagram of a remote sensing image generator network including RRSDB according to the present invention;
FIG. 4 is a network structure diagram of the U-shaped remote sensing image discriminator fusing recursive residuals with an attention mechanism;
FIG. 5 shows the recursive residual structure of the invention, wherein (a) is the structure of the recursive residual convolution module and (b) is the structure of the residual convolution layer;
FIG. 6 is a visualization of the U-shaped remote sensing image discriminator fusing recursive residuals with an attention mechanism at different iteration counts;
FIG. 7 compares no-reference objective evaluation indices and local-area subjective visual effects of images reconstructed by the proposed remote sensing image super-resolution reconstruction network and by several existing super-resolution algorithms on different remote sensing image datasets.
Detailed Description
The following describes the embodiments of the invention clearly and completely with reference to the accompanying drawings. The embodiments described are only some, not all, embodiments of the invention; all other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
As shown in FIG. 1, an embodiment of the invention provides a remote sensing image super-resolution reconstruction method based on a generative adversarial network, with the following specific steps:
Step one: Construct a remote sensing image high-order degradation model simulating the actual degradation process, and input the high-resolution remote sensing image I_HR into it to obtain the corresponding low-resolution remote sensing image I_LR.
As shown in FIG. 2, the remote sensing image high-order degradation model comprises a first stage, a second stage, and downsampling. The input of the first stage is the high-resolution remote sensing image I_HR; the output of the first stage feeds the second stage, and the output of the second stage feeds the downsampling step, whose scale factor is selected at random. Downsampling outputs multiple low-resolution remote sensing images I_LR with different degrees of degradation, which are paired with I_HR to form multiple HR-LR remote sensing image pairs as the training set. Both stages comprise two processes, adding blur and adding noise: the blur kernel types include a Gaussian blur kernel and a sinc blur kernel, and the noise types include Gaussian noise and Poisson noise. Each blur kernel type and noise type occurs with a set probability, the probabilities of the first stage differing from those of the second; the blur kernel size is selected at random, and multiple iterations are performed.
Step two: Construct a remote sensing image super-resolution reconstruction network comprising a generator built from ultra-dense residual modules and a U-shaped discriminator fusing recursive residuals with an attention mechanism, and use the paired high-resolution images I_HR and low-resolution images I_LR as its training set.
Step three: Input the low-resolution remote sensing image I_LR into the generator built from ultra-dense residual modules to generate the reconstructed super-resolution image I_SR.
As shown in FIG. 3, the remote sensing image generator built from ultra-dense residual modules comprises a first convolution layer, an ultra-dense residual module group, a second convolution layer, an upsampling module, a third convolution layer, and a fourth convolution layer; the first, second, third, and fourth convolution layers are all 3×3 convolution layers. The low-resolution remote sensing image I_LR is input to the first convolution layer; the output features of the first convolution layer are input to the ultra-dense residual module group; the output features of the module group are added to the output features of the first convolution layer and input to the second convolution layer; the output features of the second convolution layer are input to the upsampling module, then to the third convolution layer, then to the fourth convolution layer, which outputs the reconstructed super-resolution image I_SR. The ultra-dense residual module group comprises 23 sequentially connected ultra-dense residual modules (RRSDB); the output features of the last RRSDB are added to the output features of the first convolution layer and serve as the input features of the second convolution layer.
The RRSDB comprises convolution layer I, LReLU activation function I, convolution layer II, LReLU activation function II, convolution layer III, LReLU activation function III, convolution layer IV, LReLU activation function IV, and convolution layer V; convolution layers I through V are all 3×3 convolution layers. The input feature F_in is input to convolution layer I, whose output is passed through LReLU activation function I; the output of LReLU activation function I is added to F_in to obtain the first feature F_1. F_1 is input to convolution layer II, whose output is passed through LReLU activation function II; the output of LReLU activation function II is added to F_in and F_1 to obtain the second feature F_2. F_2 is input to convolution layer III, whose output is passed through LReLU activation function III; the output of LReLU activation function III is added to F_in, F_1, and F_2 to obtain the third feature F_3. F_3 is input to convolution layer IV, whose output is passed through LReLU activation function IV; the output of LReLU activation function IV is added to F_in, F_1, and F_2 to obtain the fourth feature F_4. F_4 is input to convolution layer V, which yields the output feature F_out. The expressions for F_1, F_2, F_3, F_4, and F_out are respectively:
F_1 = C(F_in) + F_in;
F_2 = C(F_1) + 2F_in + F_1;
F_3 = C(F_2) + F_in + F_1 + F_2;
F_4 = C(F_3) + F_in + F_1 + 2F_2;
F_out = C(F_4);
wherein C (·) represents the convolution operation.
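The five expressions above can be checked numerically. In this sketch C(·) is replaced by an identity stand-in (in the RRSDB, C(·) is a learned 3×3 convolution followed by LReLU), purely to trace the additive skip connections exactly as printed, including the 2F_in and 2F_2 coefficients:

```python
import numpy as np

def C(x):
    # identity stand-in for the learned convolution C(.)
    return x

f_in = np.full((2, 2), 1.0)          # constant input feature map
f1 = C(f_in) + f_in                  # F_1 = C(F_in) + F_in
f2 = C(f1) + 2 * f_in + f1           # F_2 = C(F_1) + 2F_in + F_1
f3 = C(f2) + f_in + f1 + f2          # F_3 = C(F_2) + F_in + F_1 + F_2
f4 = C(f3) + f_in + f1 + 2 * f2      # F_4 = C(F_3) + F_in + F_1 + 2F_2
f_out = C(f4)                        # F_out = C(F_4)

print(f1[0, 0], f2[0, 0], f3[0, 0], f4[0, 0], f_out[0, 0])  # 2.0 6.0 15.0 30.0 30.0
```

With a unit input and identity C(·), each feature grows through the dense connections, illustrating how every later feature accumulates the contributions of all earlier ones.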
Step four: the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism is used to perform classification judgment on the high-resolution remote sensing image I_HR and the reconstructed super-resolution image I_SR.
As shown in fig. 4, the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism comprises a first recursive residual convolution module, a first max-pooling module, a second recursive residual convolution module, a second max-pooling module, a third recursive residual convolution module, a third max-pooling module, a fourth recursive residual convolution module, a fourth max-pooling module, a fifth recursive residual convolution module, a first upsampling module, a sixth recursive residual convolution module, a second upsampling module, a seventh recursive residual convolution module, a third upsampling module, an eighth recursive residual convolution module, a fourth upsampling module, a ninth recursive residual convolution module and a convolution layer. The high-resolution remote sensing image I_HR and the reconstructed super-resolution image I_SR pass through the first recursive residual convolution module to extract a first feature, which passes through an attention gate module to obtain a first output feature. The first feature then passes through the first max-pooling module and the second recursive residual convolution module to obtain a second feature, which passes through an attention gate module to obtain a second output feature. The second feature passes through the second max-pooling module and the third recursive residual convolution module to obtain a third feature, which passes through an attention gate module to obtain a third output feature. The third feature passes through the third max-pooling module and the fourth recursive residual convolution module to obtain a fourth feature, which passes through an attention gate module to obtain a fourth output feature. The fourth feature passes through the fourth max-pooling module and the fifth recursive residual convolution module to obtain a fifth feature, which passes through the first upsampling module to obtain a first upsampling feature. The first upsampling feature is concatenated with the fourth output feature and then passes through the sixth recursive residual convolution module and the second upsampling module to obtain a second upsampling feature; the second upsampling feature is concatenated with the third output feature and then passes through the seventh recursive residual convolution module and the third upsampling module to obtain a third upsampling feature; the third upsampling feature is concatenated with the second output feature and then passes through the eighth recursive residual convolution module and the fourth upsampling module to obtain a fourth upsampling feature; the fourth upsampling feature is concatenated with the first output feature and then passes through the ninth recursive residual convolution module and the convolution layer to obtain the image discrimination result.
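The encoder-decoder flow of this U-shaped discriminator can be traced at shape level with the following sketch. The channel widths (8 up to 128), the 2× pooling and upsampling factors, and the identity attention gates are illustrative assumptions; the recursive residual convolution modules are reduced to channel-projection stand-ins:

```python
import numpy as np

def rrc(x, out_ch):
    # stand-in for a recursive residual convolution module: channel projection only
    return np.broadcast_to(x.mean(axis=0, keepdims=True), (out_ch,) + x.shape[1:]).copy()

def gate(x):
    # stand-in attention gate (identity weighting)
    return x

def maxpool2(x):
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def up2(x):
    return x.repeat(2, axis=1).repeat(2, axis=2)

x = np.ones((3, 32, 32))                 # I_HR or I_SR
e1 = rrc(x, 8)                           # first feature
e2 = rrc(maxpool2(e1), 16)               # second feature
e3 = rrc(maxpool2(e2), 32)               # third feature
e4 = rrc(maxpool2(e3), 64)               # fourth feature
e5 = rrc(maxpool2(e4), 128)              # fifth feature (bottleneck)
u1 = up2(e5)                                          # first upsampling feature
u2 = up2(rrc(np.concatenate([u1, gate(e4)], 0), 64))  # second upsampling feature
u3 = up2(rrc(np.concatenate([u2, gate(e3)], 0), 32))  # third upsampling feature
u4 = up2(rrc(np.concatenate([u3, gate(e2)], 0), 16))  # fourth upsampling feature
d = rrc(np.concatenate([u4, gate(e1)], 0), 8)         # ninth recursive residual module
score_map = rrc(d, 1)                                 # final conv -> discrimination map
print(score_map.shape)  # (1, 32, 32)
```

The decoder restores the input resolution, so the discrimination result is a per-pixel score map rather than a single scalar, which matches the U-shaped design.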
As shown in fig. 5, the recursive residual convolution module comprises a convolution layer VI, a residual convolution layer and an adder; the input features first pass through convolution layer VI, then through two recursive applications of the residual convolution layer, and the resulting features are added to the input features to obtain the output features. The residual convolution layer comprises a convolution layer VII, a ReLU activation function I, a convolution layer VIII, a ReLU activation function II and an adder; the input features sequentially pass through convolution layer VII, ReLU activation function I, convolution layer VIII and ReLU activation function II, and the resulting features are added to the input features to obtain the output features.
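A minimal sketch of this block structure follows. The convolutions are stand-in 1×1 channel mixes with hypothetical random weights (the real layers are learned convolutions), kept only so the two-recursion residual pattern and the long skip connection are explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical weights for conv layers VI, VII and VIII (1x1 channel mixes)
W6, W7, W8 = (rng.standard_normal((4, 4)) * 0.1 for _ in range(3))

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(w, x):
    # stand-in for a learned convolution: per-pixel channel mixing
    return np.tensordot(w, x, axes=([1], [0]))

def residual_conv_layer(x):
    # conv VII -> ReLU I -> conv VIII -> ReLU II, plus the layer's own skip
    return x + relu(conv1x1(W8, relu(conv1x1(W7, x))))

def recursive_residual_module(x):
    y = conv1x1(W6, x)          # convolution layer VI
    for _ in range(2):          # residual conv layer applied with two recursions
        y = residual_conv_layer(y)
    return y + x                # adder: long skip from the module input

x = np.ones((4, 8, 8))
out = recursive_residual_module(x)
print(out.shape)  # (4, 8, 8)
```

Because the same residual convolution layer is applied twice, the module reuses its parameters recursively instead of stacking distinct layers.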
Fig. 6 shows exemplary visualizations of the attention of the U-shaped discriminator network at different iteration counts when the remote sensing image super-resolution reconstruction method of the present invention is used. Starting from the 5000th iteration, a visualization is shown every 70000 iterations, up to 355000 iterations. It can be observed that in the initial stage of training (5000 iterations) the attention of the U-shaped discriminator network is almost uniformly distributed over all areas of the image; as the iteration count increases (75000, 145000, 215000 and 285000 iterations), the attention on object edge areas and ground-object boundary areas gradually strengthens while the attention on other areas gradually weakens; finally (355000 iterations), the attention is concentrated on object edge areas and ground-object boundary areas, and the attention on other areas is reduced to a minimum.
Step five: the reconstructed super-resolution image I_SR and the high-resolution remote sensing image I_HR are used to calculate the L1 pixel-level loss and the perceptual loss, and the GAN loss is calculated from the binary cross entropy of the discriminator's classification probabilities; the parameters of the remote sensing image super-resolution reconstruction network are optimized according to the L1 loss, the perceptual loss and the GAN loss until the network converges, completing training and yielding the remote sensing image super-resolution reconstruction model.
The L1 level loss L1_loss, perceptual loss Perceptual_loss and GAN loss GAN_loss are calculated as follows:
L1_loss = ||I_SR - I_HR||_1;
Perceptual_loss = ||F(I_SR) - F(I_HR)||_1;
GAN_loss = ln[D(I_HR)] - ln[1 - D(I_SR)];
Total_loss = λ_1*L1_loss + λ_2*Perceptual_loss + λ_3*GAN_loss;
wherein ||·||_1 denotes the L1 norm; F denotes the feature extraction network, F(I_SR) denotes the features extracted by the feature extraction network from the reconstructed super-resolution image I_SR, and F(I_HR) denotes the features extracted from the high-resolution remote sensing image I_HR; D(I_HR) denotes the probability that the discriminator network judges I_HR to be real, and D(I_SR) denotes the probability that the discriminator network judges I_SR to be real; λ_1, λ_2 and λ_3 denote the weight coefficients of the L1 loss, the perceptual loss and the GAN loss, set to 1, 1 and 0.1, respectively.
L1_loss measures the pixel-level difference between the reconstructed super-resolution image I_SR and the original high-resolution image I_HR, driving the network parameters toward making I_SR closer to I_HR at the pixel level; Perceptual_loss measures the feature-level difference between I_SR and I_HR, driving the parameters toward making I_SR perceptually closer to I_HR; GAN_loss drives the adversarial training between the generator and the discriminator, whose continued competing optimization makes the distribution of the reconstructed super-resolution image I_SR approach that of the original high-resolution image I_HR. By minimizing Total_loss, the network converges when Total_loss becomes stable, completing training.
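The loss computation above can be sketched numerically as follows, mirroring the printed formulas. The feature extractor F(·), the discriminator probabilities and the sample tensors are stand-ins, and the weights λ_1 = 1, λ_2 = 1, λ_3 = 0.1 are taken as assumptions from the text:

```python
import numpy as np

def l1(a, b):
    # mean absolute difference as a stand-in for the L1 norm term
    return np.abs(a - b).mean()

def F(img):
    # stand-in feature extractor; the real F is a pretrained feature network
    return img.mean(axis=(1, 2))

i_hr = np.full((3, 8, 8), 0.8)   # stand-in high-resolution image
i_sr = np.full((3, 8, 8), 0.6)   # stand-in reconstructed image
d_hr, d_sr = 0.9, 0.2            # stand-in discriminator outputs D(I_HR), D(I_SR)

l1_loss = l1(i_sr, i_hr)                              # pixel-level difference
perceptual_loss = l1(F(i_sr), F(i_hr))                # feature-level difference
gan_loss = np.log(d_hr) - np.log(1.0 - d_sr)          # GAN term as printed above
total_loss = 1.0 * l1_loss + 1.0 * perceptual_loss + 0.1 * gan_loss

print(round(float(l1_loss), 3), round(float(perceptual_loss), 3))  # 0.2 0.2
```

For these constant stand-in images the pixel and feature differences coincide; in the real network the perceptual term differs because F(·) compares learned features rather than raw intensities.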
Step six: the low-resolution remote sensing image to be reconstructed is input into the remote sensing image super-resolution reconstruction model to generate a super-resolution reconstructed remote sensing image with improved objective evaluation indexes and subjective perceptual quality.
Four widely used remote sensing image datasets (Kaggle, WHDLD, AID and SYSU-CD) were used for testing. The Kaggle dataset consists of 324 different scenes and contains 1720 images with a resolution of 3099×2329 pixels taken from aircraft at high altitude, with a spatial resolution of 0.25 m per pixel; the WHDLD dataset contains 4960 images with a resolution of 256×256 pixels covering 6 scene categories (bare land, building, sidewalk, road, vegetation and water), with a spatial resolution of 2 m per pixel; the AID dataset contains 10000 images with a resolution of 600×600 pixels covering 30 scene categories such as airport, beach, church, commercial area, forest, irrigation land, iceberg and windmill, with about 300 images per scene category and a spatial resolution of 0.5-8 m per pixel; the SYSU-CD dataset contains 20000 images with a resolution of 256×256 pixels of harbour and urban construction, with a spatial resolution of 0.5 m per pixel.
During testing, 1370 images from the Kaggle dataset outside the training set, all images of the WHDLD dataset, all images of the AID dataset and all images of the SYSU-CD dataset were selected as test sets; the test sets were input into the trained remote sensing image super-resolution reconstruction network model, the super-resolution reconstruction factor was set to 4, super-resolution reconstruction was completed, and objective and subjective image quality evaluation was carried out on the reconstructed images. To verify the advancement of the invention, the traditional interpolation-based image super-resolution reconstruction algorithm Bicubic, the convolutional-neural-network-based algorithms SRCNN and FSRCNN, and the generation-countermeasure-network-based algorithms SRGAN and ESRGAN were selected as comparison algorithms. Four no-reference image quality indexes, NIQE (Naturalness Image Quality Evaluator), PIQE (Perceptual Image Quality Evaluator), HIQA (High-level Image Quality Assessment) and CEIQ (Color Image Enhancement Quality Assessment), were selected to evaluate the reconstructed super-resolution remote sensing images; smaller NIQE and PIQE values indicate better image quality, and larger HIQA and CEIQ values indicate better image quality. The experimental results are shown in Table 1, which reports the performance of Bicubic, SRCNN, FSRCNN, SRGAN, ESRGAN and the method of the present invention on the no-reference image quality indexes NIQE, PIQE, HIQA and CEIQ after super-resolution reconstruction on the Kaggle, WHDLD, AID and SYSU-CD datasets.
Bolded values indicate the best performance on a given index for a given dataset; as shown in Table 1, the method of the invention performs best on every dataset and every evaluation index.
Table 1 objective evaluation index comparison of different super-resolution reconstruction algorithms on respective datasets
Fig. 7 shows a comparison of subjective visual effects in local areas of the reconstruction results of different super-resolution algorithms on the Kaggle, WHDLD, AID and SYSU-CD datasets. For the Kaggle example, the method of the invention better reconstructs vehicle edge details in a densely parked area of a parking lot; for the WHDLD example, the method better restores road detail when reconstructing a road in a densely vegetated area, with clearer road edges; for the AID example, the method makes the reconstructed bridge road markings clearer; for the SYSU-CD example, the method makes the road edges in the reconstructed image clearer. The subjective visual effect of the reconstructed images is therefore optimal. In fig. 7, the data on the right side of each image give the values of the no-reference image quality indexes NIQE, PIQE, HIQA and CEIQ; it can be seen that the no-reference quality indexes of the images reconstructed by the method of the present invention are also optimal. In conclusion, the method reconstructs images better, so that the texture details of the reconstructed super-resolution images are finer and accord with subjective human perception.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (8)
1. A remote sensing image super-resolution reconstruction method based on a generated countermeasure network is characterized by comprising the following steps:
step one: constructing a remote sensing image high-order degradation model simulating the actual degradation process, and inputting the high-resolution remote sensing image I_HR into the remote sensing image high-order degradation model to obtain the low-resolution remote sensing image I_LR corresponding to the high-resolution remote sensing image I_HR;
Step two: constructing a remote sensing image super-resolution reconstruction network, wherein the remote sensing image super-resolution reconstruction network comprises a remote sensing image generator containing an ultra-dense residual module and a U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism; and using the paired high-resolution remote sensing images I_HR and low-resolution remote sensing images I_LR as the training set of the remote sensing image super-resolution reconstruction network;
step three: inputting the low-resolution remote sensing image I_LR into the remote sensing image generator containing the ultra-dense residual module to generate the reconstructed super-resolution image I_SR;
Step four: performing classification judgment on the high-resolution remote sensing image I_HR and the reconstructed super-resolution image I_SR by using the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism;
step five: optimizing parameters of the remote sensing image super-resolution reconstruction network by calculating L1 level loss, perception loss and GAN loss, converging the network, and completing training to obtain a remote sensing image super-resolution reconstruction model;
step six: and inputting the low-resolution remote sensing image to be reconstructed into a remote sensing image super-resolution reconstruction model to generate a super-resolution reconstructed remote sensing image.
2. The remote sensing image super-resolution reconstruction method based on a generation countermeasure network according to claim 1, wherein the remote sensing image high-order degradation model comprises a first stage, a second stage and downsampling; the input of the first stage is the high-resolution remote sensing image I_HR; the output of the first stage is connected to the input of the second stage, the output of the second stage is connected to the downsampling, and the downsampling factor is randomly selected; the downsampling outputs multiple low-resolution remote sensing images I_LR with different degradation degrees, which form multiple pairs of HR-LR remote sensing images with the high-resolution remote sensing image I_HR as the training set; the first stage and the second stage each comprise two processes of adding blur and adding noise, wherein the blur kernel types comprise a Gaussian blur kernel and a sinc blur kernel, and the noise types comprise Gaussian noise and Poisson noise; the blur kernel type and the noise type occur according to set probabilities, the set probabilities of the first stage differ from those of the second stage, the size of the blur kernel is randomly selected, and multiple iterations are performed.
3. The remote sensing image super-resolution reconstruction method based on a generation countermeasure network according to claim 1, wherein the remote sensing image generator containing the ultra-dense residual module comprises a first convolution layer, an ultra-dense residual module group, a second convolution layer, an up-sampling module, a third convolution layer and a fourth convolution layer; the low-resolution remote sensing image I_LR is input into the first convolution layer; the output features of the first convolution layer are input into the ultra-dense residual module group; the output features of the ultra-dense residual module group are added to the output features of the first convolution layer and then input into the second convolution layer; the output features of the second convolution layer are input into the up-sampling module; the output features of the up-sampling module are input into the third convolution layer; the output features of the third convolution layer are input into the fourth convolution layer; and the fourth convolution layer outputs the reconstructed super-resolution image I_SR; the ultra-dense residual module group comprises 23 sequentially connected ultra-dense residual modules (RRSDB), and the output feature of the last RRSDB is added to the output feature of the first convolution layer as the input feature of the second convolution layer.
4. The remote sensing image super-resolution reconstruction method based on a generation countermeasure network according to claim 3, wherein the RRSDB comprises a convolution layer I, an LReLU activation function I, a convolution layer II, an LReLU activation function II, a convolution layer III, an LReLU activation function III, a convolution layer IV, an LReLU activation function IV and a convolution layer V; the input feature F_in is fed into convolution layer I, whose output passes through LReLU activation function I; the output of LReLU activation function I is added to the input feature F_in to obtain the first feature F_1; the first feature F_1 is fed into convolution layer II, whose output passes through LReLU activation function II; the output of LReLU activation function II is added to the input feature F_in and the first feature F_1 to obtain the second feature F_2; the second feature F_2 is fed into convolution layer III, whose output passes through LReLU activation function III; the output of LReLU activation function III is added to the input feature F_in, the first feature F_1 and the second feature F_2 to obtain the third feature F_3; the third feature F_3 is fed into convolution layer IV, whose output passes through LReLU activation function IV; the output of LReLU activation function IV is added to the input feature F_in, the first feature F_1 and the second feature F_2 to obtain the fourth feature F_4; the fourth feature F_4 is fed into convolution layer V to obtain the output feature F_out; the first feature F_1, second feature F_2, third feature F_3, fourth feature F_4 and output feature F_out are expressed respectively as:
F_1 = C(F_in) + F_in;
F_2 = C(F_1) + 2F_in + F_1;
F_3 = C(F_2) + F_in + F_1 + F_2;
F_4 = C(F_3) + F_in + F_1 + 2F_2;
F_out = C(F_4);
wherein C (·) represents the convolution operation.
5. The remote sensing image super-resolution reconstruction method based on a generation countermeasure network according to claim 1, wherein the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism comprises a first recursive residual convolution module, a first max-pooling module, a second recursive residual convolution module, a second max-pooling module, a third recursive residual convolution module, a third max-pooling module, a fourth recursive residual convolution module, a fourth max-pooling module, a fifth recursive residual convolution module, a first upsampling module, a sixth recursive residual convolution module, a second upsampling module, a seventh recursive residual convolution module, a third upsampling module, an eighth recursive residual convolution module, a fourth upsampling module, a ninth recursive residual convolution module and a convolution layer;
the high-resolution remote sensing image I_HR and the reconstructed super-resolution image I_SR pass through the first recursive residual convolution module to extract a first feature; the first feature passes through the attention gate module to obtain a first output feature;
the first features sequentially pass through a first maximum pooling module and a second recursion residual convolution module to obtain second features; the second characteristic obtains a second output characteristic through the attention gate module;
the second feature is subjected to a second maximum pooling module and a third recursion residual convolution module to obtain a third feature; the third characteristic obtains a third output characteristic through the attention gate module;
the third feature is subjected to a third maximum pooling module and a fourth recursion residual convolution module to obtain a fourth feature; the fourth feature obtains a fourth output feature through the attention gate module;
the fourth characteristic is subjected to a fourth maximum pooling module and a fifth recursion residual convolution module to obtain a fifth characteristic; the fifth feature obtains a first upsampling feature through a first upsampling module;
the first upsampling feature and the fourth output feature are combined and connected, and then sequentially pass through a sixth recursion residual convolution module and a second upsampling module to obtain a second upsampling feature;
the second upsampling feature and the third output feature are combined and connected, and then the third upsampling feature is obtained through a seventh recursion residual convolution module and a third upsampling module in sequence;
the third upsampling feature and the second output feature are combined and connected, and then the fourth upsampling feature is obtained through an eighth recursion residual convolution module and a fourth upsampling module in sequence;
and the fourth upsampling feature is combined and connected with the first output feature and then sequentially passes through a ninth recursion residual convolution module and a convolution layer to obtain an image discrimination result.
6. The remote sensing image super-resolution reconstruction method based on a generation countermeasure network according to claim 5, wherein the recursive residual convolution module comprises a convolution layer VI, a residual convolution layer and an adder; the input features first pass through convolution layer VI and then through two recursive applications of the residual convolution layer, and the resulting features are added to the input features to obtain the output features.
7. The remote sensing image super-resolution reconstruction method based on a generation countermeasure network according to claim 6, wherein the residual convolution layer comprises a convolution layer VII, a ReLU activation function I, a convolution layer VIII, a ReLU activation function II and an adder; the input features sequentially pass through convolution layer VII, ReLU activation function I, convolution layer VIII and ReLU activation function II, and the resulting features are added to the input features to obtain the output features.
8. The remote sensing image super-resolution reconstruction method based on a generation countermeasure network according to claim 1, wherein the expressions of the L1 level loss, the perceptual loss and the GAN loss are respectively:
L1_loss = ||I_SR - I_HR||_1;
Perceptual_loss = ||F(I_SR) - F(I_HR)||_1;
GAN_loss = ln[D(I_HR)] - ln[1 - D(I_SR)];
Total_loss = λ_1*L1_loss + λ_2*Perceptual_loss + λ_3*GAN_loss;
wherein L1_loss is the L1 level loss, Perceptual_loss is the perceptual loss, GAN_loss is the GAN loss, and Total_loss is the total loss of the network; ||·||_1 denotes the L1 norm; F denotes the feature extraction network, F(I_SR) denotes the features extracted by the feature extraction network from the reconstructed super-resolution image I_SR, and F(I_HR) denotes the features extracted from the high-resolution remote sensing image I_HR; D(I_HR) denotes the probability that the discriminator network judges I_HR to be real, and D(I_SR) denotes the probability that the discriminator network judges I_SR to be real; λ_1, λ_2 and λ_3 denote the weight coefficients of the L1 loss, the perceptual loss and the GAN loss, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310787153.4A CN117114984A (en) | 2023-08-28 | 2023-08-28 | Remote sensing image super-resolution reconstruction method based on generation countermeasure network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117114984A true CN117114984A (en) | 2023-11-24 |
Family
ID=88802827
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310787153.4A Pending CN117114984A (en) | 2023-08-28 | 2023-08-28 | Remote sensing image super-resolution reconstruction method based on generation countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117114984A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118229962A (en) * | 2024-05-23 | 2024-06-21 | 安徽大学 | Remote sensing image target detection method, system, electronic equipment and storage medium |
CN118570457A (en) * | 2024-08-05 | 2024-08-30 | 山东航天电子技术研究所 | Image super-resolution method based on remote sensing target recognition task driving |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 