CN113643182A - Remote sensing image super-resolution reconstruction method based on dual learning graph network - Google Patents
- Publication number: CN113643182A (application CN202110961671.4A)
- Authority: CN (China)
- Prior art keywords: resolution, image, super, low, block
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution (G—Physics; G06—Computing; G06T—Image data processing or generation; G06T3/00—Geometric image transformations in the plane of the image; G06T3/40—Scaling of whole images or parts thereof)
- G06N3/045 — Combinations of networks (G06N—Computing arrangements based on specific computational models; G06N3/00—Based on biological models; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology)
- G06N3/08 — Learning methods (G06N3/02—Neural networks)
Abstract
The invention discloses a remote sensing image super-resolution reconstruction method based on a dual learning graph network, comprising the following steps: acquiring a plurality of target feature blocks and searching the down-sampled features for the k feature blocks most similar to each target feature block; obtaining the high-resolution feature block of each target feature block to produce a super-resolution reconstructed image; calculating the dual error between the degraded low-resolution image and the original low-resolution image and adding it to the loss function; and, once the loss function stabilizes and the model converges, inputting the remote sensing image to be super-resolved into the super-resolution reconstruction model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image. Through the construction of the graph, cross-scale feature connection and a non-local aggregation effect are realized while saving computing resources compared with a global method; the solution space of super-resolution reconstruction is reduced, and the super-resolution accuracy and visual effect are improved.
Description
Technical Field
The invention belongs to the technical field of remote sensing image automatic understanding, and particularly relates to a remote sensing image super-resolution reconstruction method based on a dual learning graph network.
Background
With the development of remote sensing imaging technology, high-resolution remote sensing images have been widely applied in remote sensing image interpretation tasks such as target detection, military remote sensing, and crop classification. However, owing to limitations of the imaging equipment and transmission conditions, it is difficult to obtain continuous high-resolution images of a specific area, and obtaining them by improving the hardware involves great cost and uncertainty. Therefore, using computer technology, and deep learning in particular, to restore low-resolution images and thereby improve the resolution of remote sensing images has become a research hotspot.
In recent years, deep-learning-based methods have attracted increasing attention: they learn the mapping between low-resolution and high-resolution images through a deep network and use this mapping to super-resolve low-resolution images. However, existing methods have the following disadvantages:
(1) recovery of the texture information in the image is limited;
(2) super-resolution reconstruction is an ill-posed problem in computer vision: the solution space of the reconstructed image is huge, and, lacking constraint conditions, the reconstructed image exhibits a certain degree of distortion.
Disclosure of Invention
Aiming at the technical problems in the prior art, the invention provides a remote sensing image super-resolution reconstruction method based on a dual learning graph network, which can overcome the defects in the prior art.
In order to achieve the technical purpose, the technical scheme of the invention is realized as follows:
a remote sensing image super-resolution reconstruction method based on a dual learning graph network comprises the following steps:
the method comprises the steps of conducting down-sampling on an original low-resolution remote sensing image to obtain a plurality of target feature blocks, and searching k feature blocks which are most similar to the target feature blocks in down-sampling features;
constructing a graph convolution network by the target feature block and the k similar feature blocks corresponding to the target feature block;
aggregating the characteristics of the k similar characteristic blocks into a target characteristic block through a graph convolution network to obtain a high-resolution characteristic block of the target characteristic block, and splicing the high-resolution characteristic blocks to obtain a super-resolution reconstruction image;
the super-resolution reconstruction image is degraded into a low-resolution image with the same size as the original low-resolution image through a dual learning module, the dual error between the degraded low-resolution image and the original low-resolution image is calculated, and the dual error is added into a loss function;
and when the loss function stabilizes and the model converges, obtaining a trained super-resolution reconstruction model; inputting the remote sensing image to be super-resolved into the super-resolution reconstruction model and running it to obtain a high-resolution remote sensing image corresponding to the low-resolution remote sensing image.
Further, the downsampling the original low-resolution remote sensing image to obtain a plurality of target feature blocks, and searching k feature blocks most similar to the target feature blocks in the downsampling features includes:
the method comprises the steps of carrying out down-sampling on an original low-resolution remote sensing image, obtaining high-level semantic features through a VGG-19 network, obtaining a plurality of target feature blocks in a high-level semantic feature space by using a sliding window mechanism, and searching k feature blocks which are most similar to the target feature blocks in the down-sampling features;
the downsampling of the low-resolution remote sensing image, acquiring high-level semantic features through a VGG-19 network, acquiring a plurality of target feature blocks by using a sliding window mechanism in a high-level semantic feature space, and searching k feature blocks similar to the target feature blocks in downsampling features comprises the following steps:
down-sampling the low-resolution remote sensing image I_lr using bicubic interpolation to obtain the down-sampled image I_lr↓s, where the down-sampling ratio equals the desired super-resolution reconstruction scale;
extracting high-level semantic features from I_lr and I_lr↓s with a VGG-19 network to obtain E_lr and E_lr↓s respectively, where E_lr and E_lr↓s denote the high-level semantic features;
in E_lr, using a d × d sliding window to obtain a target feature block a; in E_lr↓s, computing the Euclidean distance between every d × d candidate feature block and the target feature block a, and taking the k nearest candidates as the similar feature blocks of a; through feature mapping, each similar feature block yields a block of size ds × ds, i.e. a high-resolution feature block, where d × d denotes the size of the sliding window and ds × ds denotes the size of the similar feature block.
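The patch-search step above can be sketched in NumPy as a brute-force nearest-neighbour search. This is an illustrative sketch, not the patent's implementation: the VGG-19 feature extraction is assumed to have been done elsewhere, and `E_lr` / `E_down` stand for the resulting feature maps of I_lr and I_lr↓s.

```python
import numpy as np

def find_similar_patches(E_lr, E_down, d=3, k=5):
    """For each d x d target patch in E_lr (shape C,H,W), find the k
    candidate d x d patches in E_down with the smallest Euclidean
    distance. Returns {(i, j): [(p, q), ...]} mapping each target
    position to the positions of its k most similar patches."""
    C, H, W = E_lr.shape
    _, h, w = E_down.shape
    matches = {}
    for i in range(H - d + 1):
        for j in range(W - d + 1):
            target = E_lr[:, i:i + d, j:j + d].ravel()
            dists = []
            for p in range(h - d + 1):
                for q in range(w - d + 1):
                    cand = E_down[:, p:p + d, q:q + d].ravel()
                    dists.append((np.linalg.norm(target - cand), (p, q)))
            dists.sort(key=lambda t: t[0])
            matches[(i, j)] = [pos for _, pos in dists[:k]]
    return matches
```

In practice the candidate search would be vectorized or restricted to a local window to keep the cost tractable.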
Further, in the graph convolution network constructed from the target feature block and its k corresponding similar feature blocks, the target feature block a and the similar feature blocks are regarded as nodes, and the connections between the target feature block and the similar feature blocks are regarded as edges; the label of a node is the feature obtained by passing the feature block through 32 residual block layers; the label of an edge is the feature difference between the target feature block and the respective similar feature block, computed as D_{a,n_m} = E_a^lr − E_{n_m}^{lr↓s}, where E_a^lr denotes the high-level semantic features of the target feature block a, E_{n_m}^{lr↓s} denotes the high-level semantic features of the m-th similar feature block (m = 1, …, k), D_{a,n_m} denotes the feature difference between the target block and the similar block, m is the index, a denotes the target feature block, and n_m denotes the m-th similar feature block.
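The graph construction described above can be sketched as follows (feature blocks are flattened to 1-D vectors; the function and field names are illustrative). Node 0 is the target block a, nodes 1..k are its similar blocks, and each edge carries the feature difference as its label:

```python
import numpy as np

def build_edge_labels(target_feat, neighbor_feats):
    """Edge labels of the graph: the feature difference between the
    target block a and each of its k similar blocks n_m."""
    return [target_feat - nf for nf in neighbor_feats]

def build_graph(target_feat, neighbor_feats):
    """Nodes: the target block plus its similar blocks. Edges: one per
    similar block, (target_index, neighbor_index, difference_label)."""
    labels = build_edge_labels(target_feat, neighbor_feats)
    return {
        "nodes": [target_feat] + list(neighbor_feats),
        "edges": [(0, m + 1, lab) for m, lab in enumerate(labels)],
    }
```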
Further, the aggregating the features of the k similar feature blocks into the target feature block through the graph convolution network to obtain a high-resolution feature block of the target feature block, and splicing the high-resolution feature blocks to obtain a super-resolution reconstructed image includes:
aggregating the characteristics of the k similar characteristic blocks to a target characteristic block through a graph convolution network to obtain a high-resolution characteristic block of the target characteristic block, and splicing the high-resolution characteristic block through the sliding of a sliding window on a low-resolution image to obtain a super-resolution reconstruction image;
the method for obtaining the super-resolution reconstruction image by aggregating the characteristics of the k similar characteristic blocks to the target characteristic block through the graph convolution network to obtain the high-resolution characteristic block of the target characteristic block and splicing the high-resolution characteristic blocks through the sliding of the sliding window on the low-resolution image comprises the following steps:
aggregating the k similar feature blocks of size ds × ds through the graph convolution network, with the aggregation formula:

F_a^hr = (1 / C_a(F^lr)) · Σ_{n_m ∈ S_a} F_θ(D_{a,n_m}) · F_{n_m}^hr

where F_{n_m}^hr denotes the m-th ds × ds similar feature block, and ds × ds denotes the size of the similar feature block; F_θ is an improved edge-conditioned convolutional network whose input variable is the edge label D_{a,n_m}, the feature difference between the target block and the similar block; n_m denotes the m-th similar feature block; S_a denotes the set of similar feature blocks; C_a(F^lr) is the normalization factor, given by C_a(F^lr) = |S_a|; and a denotes the target feature block;
for each target feature block a, carrying out aggregation of the high-resolution neighbor blocks through the formula, and obtaining the high-resolution feature block of the target feature block through an aggregation process;
and sliding the sliding window on the low-resolution image to obtain a super-resolution reconstruction image of the whole image.
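A minimal sketch of the aggregation step, under stated simplifying assumptions: F_θ is reduced to a single tanh layer with weight matrix `theta` that produces a per-element gate (the patent's edge-conditioned network is deeper and the improved variant is not specified in detail here), feature blocks are flattened vectors, and the normalization factor is C_a = |S_a|:

```python
import numpy as np

def aggregate(neighbor_hr_feats, edge_labels, theta):
    """Edge-conditioned aggregation sketch: for each similar block n_m,
    a stand-in for F_theta (one tanh layer, weight `theta`) maps the
    edge label D_{a,n_m} to a gate, which modulates the high-resolution
    neighbor block; gated blocks are averaged over the set S_a."""
    agg = np.zeros_like(neighbor_hr_feats[0], dtype=float)
    for hr, lab in zip(neighbor_hr_feats, edge_labels):
        gate = np.tanh(theta @ lab)  # stand-in for F_theta(D_{a,n_m})
        agg += gate * hr
    return agg / len(neighbor_hr_feats)  # C_a = |S_a|
```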
Further, the step of degrading the super-resolution reconstructed image into a low-resolution image with the same size as the original low-resolution image by using a dual learning module, calculating a dual error between the degraded low-resolution image and the original low-resolution image, and adding the dual error into a loss function includes:
and inputting the obtained super-resolution reconstruction image into a dual learning module to obtain a low-resolution image, calculating dual loss and constraining the super-resolution reconstruction process, wherein the dual learning module consists of a convolutional layer and an activation layer.
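A minimal sketch of such a degradation step, assuming a single strided convolution followed by a ReLU activation; the patent states only that the dual learning module consists of convolutional and activation layers, so the kernel size, stride, and edge padding here are assumptions:

```python
import numpy as np

def dual_downgrade(sr, kernel, scale=2):
    """Degrade a single-channel SR image back to LR size with one
    strided convolution (edge padding) followed by a ReLU activation."""
    H, W = sr.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(sr, pad, mode='edge')
    out = np.zeros((H // scale, W // scale))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = padded[i * scale:i * scale + k, j * scale:j * scale + k]
            out[i, j] = np.sum(patch * kernel)
    return np.maximum(out, 0.0)  # ReLU
```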
Further, the obtained super-resolution reconstructed image is input into the dual learning module to obtain a low-resolution image, the dual loss is calculated, and the super-resolution reconstruction process is constrained. For paired images, i.e. the data set contains low-resolution images and the corresponding high-resolution image data, the loss function is:

L = L_1(y′_i, y_i) + λ_1 · L_mse(vgg(y′_i), vgg(y_i)) + λ_2 · L_1(x′_i, x_i)

where λ_1, λ_2 are weight parameters; L_1 denotes the mean absolute error; L_mse denotes the mean squared error; x_i denotes the original low-resolution image; y′_i is the high-resolution image reconstructed from the low-resolution image x_i; y_i denotes the paired high-resolution image corresponding to x_i; x′_i is the low-resolution image obtained by passing the reconstructed image through the dual learning module; vgg(y′_i) and vgg(y_i) denote the features of y′_i and y_i extracted by the VGG-19 network. L_1(y′_i, y_i) is the pixel-level error between the super-resolved image and the real image; λ_1 · L_mse(vgg(y′_i), vgg(y_i)) is the perceptual error between the super-resolved image and the real image; λ_2 · L_1(x′_i, x_i) is the pixel error between the low-resolution image of the reconstructed image after the dual learning module and the original low-resolution image.
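The paired loss can be sketched directly from its three terms (NumPy; `feat_sr` / `feat_hr` stand for the VGG-19 features of y′_i and y_i, which are assumed to be computed elsewhere, and the default weight values are illustrative):

```python
import numpy as np

def l1(a, b):
    """Mean absolute error."""
    return float(np.mean(np.abs(a - b)))

def lmse(a, b):
    """Mean squared error."""
    return float(np.mean((a - b) ** 2))

def paired_loss(y_sr, y_hr, x_rec, x_lr, feat_sr, feat_hr,
                lam1=0.1, lam2=0.1):
    """L = L1(y', y) + lam1*Lmse(vgg(y'), vgg(y)) + lam2*L1(x', x)."""
    return (l1(y_sr, y_hr)
            + lam1 * lmse(feat_sr, feat_hr)
            + lam2 * l1(x_rec, x_lr))
```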
Further, the obtained super-resolution reconstructed image is input into the dual learning module to obtain a low-resolution image, the dual loss is calculated, and the super-resolution reconstruction process is constrained. For unpaired images, i.e. part of the data set contains low-resolution images with corresponding high-resolution image data and part contains only low-resolution image data, the loss function is:

L = λ_p · L_1(y′_i, y_i) + λ_p · λ_1 · L_mse(vgg(y′_i), vgg(y_i)) + λ_2 · L_1(x′_i, x_i)

Compared with the loss function for paired data, the first two terms are additionally gated by a parameter λ_p, whose value depends on whether the sample is paired: if the sample is paired data, i.e. it includes both a low-resolution and a high-resolution image, λ_p = 1; if the sample is unpaired data, i.e. only a low-resolution image is available, λ_p = 0. λ_p · L_1(y′_i, y_i) is the pixel-level error between the super-resolved image and the real image; λ_p · λ_1 · L_mse(vgg(y′_i), vgg(y_i)) is the perceptual error between the super-resolved image and the real image; λ_2 · L_1(x′_i, x_i) is the pixel error between the low-resolution image of the reconstructed image after the dual learning module and the original low-resolution image.
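The λ_p gating for unpaired data can be sketched the same way; for a paired sample (λ_p = 1) this reduces to the paired loss above, while for an unpaired sample only the dual term remains (names and default weights are illustrative):

```python
import numpy as np

def l1(a, b):
    return float(np.mean(np.abs(a - b)))

def lmse(a, b):
    return float(np.mean((a - b) ** 2))

def dual_total_loss(y_sr, y_hr, x_rec, x_lr, feat_sr, feat_hr,
                    paired, lam1=0.1, lam2=0.1):
    """L = lam_p*L1(y', y) + lam_p*lam1*Lmse(feat', feat)
           + lam2*L1(x', x),
    with lam_p = 1 for paired samples and lam_p = 0 for unpaired."""
    lam_p = 1.0 if paired else 0.0
    return (lam_p * l1(y_sr, y_hr)
            + lam_p * lam1 * lmse(feat_sr, feat_hr)
            + lam2 * l1(x_rec, x_lr))
```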
Further, the step in which the loss function stabilizes and the model converges to yield the trained super-resolution reconstruction model, the remote sensing image to be super-resolved is input to the super-resolution reconstruction model, and the model is run to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image, comprises the following steps:
training model updating parameters to enable a loss function to tend to be stable, converging the model to obtain a trained super-resolution reconstruction model, taking a remote sensing image needing to be used for super-resolution reconstruction as the input of the super-resolution reconstruction model, and operating the super-resolution reconstruction model to obtain a high-resolution remote sensing image corresponding to a low-resolution remote sensing image;
updating the model parameters during training until the loss function stabilizes and the model converges, yielding the trained super-resolution reconstruction model; taking the remote sensing image to be super-resolved as the input of the super-resolution reconstruction model, and running the model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image. The data set is a vehicle super-resolution data set containing 1170 remote sensing images; the cropped data are augmented by operations such as random cropping, rotation, and color-channel permutation; when the loss function has decreased to its minimum and no longer changes, and the model parameters are stable, model training is complete.
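The augmentation operations named above (random cropping, rotation, color-channel permutation) can be sketched as follows; the crop size and the restriction to 90° rotations are assumptions, as the patent does not specify them:

```python
import numpy as np

def augment(img, rng):
    """Augmentation sketch: random crop to half size, rotation by a
    random multiple of 90 degrees, and a random permutation of the
    three color channels."""
    H, W, _ = img.shape
    ch, cw = H // 2, W // 2
    top = int(rng.integers(0, H - ch + 1))
    left = int(rng.integers(0, W - cw + 1))
    out = img[top:top + ch, left:left + cw]
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    out = out[:, :, rng.permutation(3)]
    return out
```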
Further, the original low-resolution image is a low-resolution feature of the whole image; the target feature block is a low resolution feature of the local image.
The invention has the following beneficial effects. The method fully exploits an intrinsic characteristic of remote sensing images, namely that texture self-similarity within a given area is very pronounced; using this characteristic, super-resolution reconstruction of the low-resolution image is assisted by extracting the features of self-similar blocks. Through the construction of the graph, on the one hand, the low-resolution and high-resolution features of the remote sensing image are connected, realizing cross-scale feature connection; on the other hand, the relations among the k self-similar blocks in the image are established dynamically, realizing a non-local aggregation effect while saving computing resources compared with a global method. The dual learning module constrains the super-resolution reconstruction process and the loss function, which reduces the solution space of super-resolution reconstruction and improves the super-resolution accuracy and visual effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 illustrates a process of building a cross-scale feature map according to an embodiment of the invention;
FIG. 2 illustrates a pairwise data dual learning process according to an embodiment of the invention;
FIG. 3 illustrates an unpaired data dual learning process according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1-3, a remote sensing image super-resolution reconstruction method based on dual learning graph network includes:
the method comprises the steps of conducting down-sampling on an original low-resolution remote sensing image to obtain a plurality of target feature blocks, and searching k feature blocks which are most similar to the target feature blocks in down-sampling features;
constructing a graph convolution network by the target feature block and the k similar feature blocks corresponding to the target feature block;
aggregating the characteristics of the k similar characteristic blocks into a target characteristic block through a graph convolution network to obtain a high-resolution characteristic block of the target characteristic block, and splicing the high-resolution characteristic blocks to obtain a super-resolution reconstruction image;
the super-resolution reconstruction image is degraded into a low-resolution image with the same size as the original low-resolution image through a dual learning module, the dual error between the degraded low-resolution image and the original low-resolution image is calculated, and the dual error is added into a loss function;
and when the loss function stabilizes and the model converges, obtaining a trained super-resolution reconstruction model; inputting the remote sensing image to be super-resolved into the super-resolution reconstruction model and running it to obtain a high-resolution remote sensing image corresponding to the low-resolution remote sensing image.
In some embodiments of the present invention, the downsampling the original low-resolution remote sensing image to obtain a plurality of target feature blocks, and searching k feature blocks that are most similar to the target feature blocks in the downsampling features includes:
the method comprises the steps of carrying out down-sampling on an original low-resolution remote sensing image, obtaining high-level semantic features through a VGG-19 network, obtaining a plurality of target feature blocks in a high-level semantic feature space by using a sliding window mechanism, and searching k feature blocks which are most similar to the target feature blocks in the down-sampling features;
the downsampling of the low-resolution remote sensing image, acquiring high-level semantic features through a VGG-19 network, acquiring a plurality of target feature blocks by using a sliding window mechanism in a high-level semantic feature space, and searching k feature blocks similar to the target feature blocks in downsampling features comprises the following steps:
down-sampling the low-resolution remote sensing image I_lr using bicubic interpolation to obtain the down-sampled image I_lr↓s, where the down-sampling ratio equals the desired super-resolution reconstruction scale;
extracting high-level semantic features from I_lr and I_lr↓s with a VGG-19 network to obtain E_lr and E_lr↓s respectively, where E_lr and E_lr↓s denote the high-level semantic features;
in E_lr, using a d × d sliding window to obtain a target feature block a; in E_lr↓s, computing the Euclidean distance between every d × d candidate feature block and the target feature block a, and taking the k nearest candidates as the similar feature blocks of a; through feature mapping, each similar feature block yields a block of size ds × ds, i.e. a high-resolution feature block, where d × d denotes the size of the sliding window and ds × ds denotes the size of the similar feature block.
In some embodiments of the present invention, in the graph convolution network constructed from the target feature block and its corresponding similar feature blocks, the target feature block a and the similar feature blocks are regarded as nodes, and the connections between the target feature block and the similar feature blocks are regarded as edges; the label of a node is the feature obtained by passing the feature block through 32 residual block layers; the label of an edge is the feature difference between the target feature block and the respective similar feature block, computed as D_{a,n_m} = E_a^lr − E_{n_m}^{lr↓s}, where E_a^lr denotes the high-level semantic features of the target feature block a, E_{n_m}^{lr↓s} denotes the high-level semantic features of the m-th similar feature block (m = 1, …, k), D_{a,n_m} denotes the feature difference between the target block and the similar block, m is the index, a denotes the target feature block, and n_m denotes the m-th similar feature block.
In some embodiments of the present invention, the aggregating the features of the k similar feature blocks into the target feature block through the graph convolution network to obtain a high resolution feature block of the target feature block, and splicing the high resolution feature blocks to obtain the super-resolution reconstructed image includes:
aggregating the characteristics of the k similar characteristic blocks to a target characteristic block through a graph convolution network to obtain a high-resolution characteristic block of the target characteristic block, and splicing the high-resolution characteristic block through the sliding of a sliding window on a low-resolution image to obtain a super-resolution reconstruction image;
the method for obtaining the super-resolution reconstruction image by aggregating the characteristics of the k similar characteristic blocks to the target characteristic block through the graph convolution network to obtain the high-resolution characteristic block of the target characteristic block and splicing the high-resolution characteristic blocks through the sliding of the sliding window on the low-resolution image comprises the following steps:
aggregating the k similar feature blocks of size ds × ds through the graph convolution network, with the aggregation formula:

F_a^hr = (1 / C_a(F^lr)) · Σ_{n_m ∈ S_a} F_θ(D_{a,n_m}) · F_{n_m}^hr

where F_{n_m}^hr denotes the m-th ds × ds similar feature block, and ds × ds denotes the size of the similar feature block; F_θ is an improved edge-conditioned convolutional network whose input variable is the edge label D_{a,n_m}, the feature difference between the target block and the similar block; n_m denotes the m-th similar feature block; S_a denotes the set of similar feature blocks; C_a(F^lr) is the normalization factor, given by C_a(F^lr) = |S_a|; and a denotes the target feature block;
for each target feature block a, carrying out aggregation of the high-resolution neighbor blocks through the formula, and obtaining the high-resolution feature block of the target feature block through an aggregation process;
and sliding the sliding window on the low-resolution image to obtain a super-resolution reconstruction image of the whole image.
In some embodiments of the present invention, the degrading the super-resolution reconstructed image into a low-resolution image with a size equal to that of the original low-resolution image by the dual learning module, calculating a dual error between the degraded low-resolution image and the original low-resolution image, and adding the dual error to the loss function includes:
and inputting the obtained super-resolution reconstruction image into a dual learning module to obtain a low-resolution image, calculating dual loss and constraining the super-resolution reconstruction process, wherein the dual learning module consists of a convolutional layer and an activation layer.
In some embodiments of the present invention, the obtained super-resolution reconstructed image is input into the dual learning module to obtain a low-resolution image, the dual loss is calculated, and the super-resolution reconstruction process is constrained. For paired images, i.e. the data set contains low-resolution images and the corresponding high-resolution image data, the loss function is:

L = L_1(y′_i, y_i) + λ_1 · L_mse(vgg(y′_i), vgg(y_i)) + λ_2 · L_1(x′_i, x_i)

where λ_1, λ_2 are weight parameters; L_1 denotes the mean absolute error; L_mse denotes the mean squared error; x_i denotes the original low-resolution image; y′_i is the high-resolution image reconstructed from the low-resolution image x_i; y_i denotes the paired high-resolution image corresponding to x_i; x′_i is the low-resolution image obtained by passing the reconstructed image through the dual learning module; vgg(y′_i) and vgg(y_i) denote the features of y′_i and y_i extracted by the VGG-19 network. L_1(y′_i, y_i) is the pixel-level error between the super-resolved image and the real image; λ_1 · L_mse(vgg(y′_i), vgg(y_i)) is the perceptual error between the super-resolved image and the real image; λ_2 · L_1(x′_i, x_i) is the pixel error between the low-resolution image of the reconstructed image after the dual learning module and the original low-resolution image.
In some embodiments of the present invention, the obtained super-resolution reconstructed image is input into the dual learning module to obtain a low-resolution image, the dual loss is calculated, and the super-resolution reconstruction process is constrained. For unpaired images, i.e. part of the data set contains low-resolution images with corresponding high-resolution image data and part contains only low-resolution image data, the loss function is:

L = λ_p · L_1(y′_i, y_i) + λ_p · λ_1 · L_mse(vgg(y′_i), vgg(y_i)) + λ_2 · L_1(x′_i, x_i)

Compared with the loss function for paired data, the first two terms are additionally gated by a parameter λ_p, whose value depends on whether the sample is paired: if the sample is paired data, i.e. it includes both a low-resolution and a high-resolution image, λ_p = 1; if the sample is unpaired data, i.e. only a low-resolution image is available, λ_p = 0. λ_p · L_1(y′_i, y_i) is the pixel-level error between the super-resolved image and the real image; λ_p · λ_1 · L_mse(vgg(y′_i), vgg(y_i)) is the perceptual error between the super-resolved image and the real image; λ_2 · L_1(x′_i, x_i) is the pixel error between the low-resolution image of the reconstructed image after the dual learning module and the original low-resolution image.
In some embodiments of the present invention, obtaining the trained super-resolution reconstruction model once the loss function stabilizes and the model converges, inputting the remote sensing image to be super-resolved into the super-resolution reconstruction model, and running the super-resolution reconstruction model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image includes:
training the model to update its parameters until the loss function stabilizes and the model converges, thereby obtaining the trained super-resolution reconstruction model; taking the remote sensing image to be super-resolved as the input of the super-resolution reconstruction model, and running the super-resolution reconstruction model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image;
updating the parameters of the model during training until the loss function stabilizes and the model converges, thereby obtaining the trained super-resolution reconstruction model; taking the remote sensing image to be super-resolved as the input of the super-resolution reconstruction model, and running the super-resolution reconstruction model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image, wherein the data set is a vehicle super-resolution data set containing 1170 remote sensing images; the cropped data are subjected to data augmentation operations such as random cropping, rotation, and color-channel conversion; model training is complete when the loss function has decreased to its minimum and no longer changes and the model parameters are stable.
In some embodiments of the invention, the original low-resolution image carries the low-resolution features of the entire image, while the target feature block carries the low-resolution features of a local region of the image.
A remote sensing image super-resolution reconstruction method based on a dual learning graph network comprises the following steps:
S1: the low-resolution remote sensing image is down-sampled, high-level semantic features are obtained through a VGG-19 network, and, in the feature space, the k most similar feature blocks in the down-sampled feature space are found for each target feature block of the low-resolution image.
S2: the target feature block and its k most similar feature blocks are constructed into a graph.
S3: the features of the k neighbor feature blocks are aggregated into the target feature block through a graph convolution network to obtain the high-resolution features of the target feature block; the final super-resolution reconstructed image is obtained by sliding the window over the low-resolution image. The specific process is shown in fig. 1.
S4: the obtained super-resolution image is degraded by the dual learning module into a low-resolution image of the same size as the original low-resolution image; the feature error between the two low-resolution images, namely the dual error, is calculated and added to the loss function. For paired data the specific process is shown in fig. 2; for unpaired data the specific process is shown in fig. 3.
S5: the model is trained to update its parameters until the loss function stabilizes and the model converges, yielding the trained super-resolution reconstruction model; the remote sensing image to be super-resolved is taken as the input of the model, and the model is run to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image.
Step S1 specifically includes:
S11: the low-resolution remote sensing image Ilr is down-sampled using bicubic interpolation, the down-sampling ratio being the desired super-resolution reconstruction ratio, yielding the down-sampled image Ilr↓s.
S12: high-level semantic features are extracted from Ilr and Ilr↓s with a VGG-19 network, yielding Elr and Elr↓s.
S13: a sliding window of size d×d is applied on Elr to obtain a target feature block a; in Elr↓s, the Euclidean distance between each candidate d×d feature block and the target feature block a is calculated, and the k feature blocks with the most similar features are taken as the neighbor feature blocks of a; through feature mapping (passing Elr↓s through 32 res_block layers), neighbor feature blocks of size ds×ds, i.e. high-resolution feature blocks, are obtained.
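As an illustrative sketch only (not the patent's implementation), the neighbor search of step S13 can be written in NumPy as an exhaustive comparison of d×d sliding-window blocks; the function name, the single fixed target window, and the single-channel toy feature maps are assumptions made for brevity:

```python
import numpy as np

def knn_feature_blocks(E_lr, E_lr_down, d=3, k=2):
    """Find the k candidate d x d blocks in the downsampled feature map
    E_lr_down closest (Euclidean distance) to one d x d target block
    taken from the low-resolution feature map E_lr."""
    # Target block: one sliding-window position (top-left corner here).
    target = E_lr[:d, :d].ravel()
    H, W = E_lr_down.shape
    candidates = []
    for i in range(H - d + 1):
        for j in range(W - d + 1):
            block = E_lr_down[i:i + d, j:j + d].ravel()
            candidates.append(((i, j), np.linalg.norm(target - block)))
    candidates.sort(key=lambda t: t[1])   # most similar first
    return candidates[:k]

# Toy single-channel "feature maps" standing in for E_lr and E_lr_down.
rng = np.random.default_rng(0)
E_lr = rng.standard_normal((8, 8))
E_lr_down = rng.standard_normal((4, 4))
neighbors = knn_feature_blocks(E_lr, E_lr_down, d=3, k=2)
```

In practice the search runs over every sliding-window position of Elr and over multi-channel features; the exhaustive loop is kept here only to make the distance computation explicit.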
Step S2 specifically includes:
S21: the target feature block a and its neighbor feature blocks are regarded as nodes, and the connections between the target feature block and the neighbor feature blocks are regarded as edges.
S22: the node labels are the features obtained by passing the feature blocks through 32 res_block layers; the label of each edge is the feature difference between the target feature block and the corresponding neighbor feature block, calculated as Δ_{a,m} = E_a − E_{n_m}, where E_a denotes the high-level semantic features of the target feature block a, E_{n_m} denotes the high-level semantic features of the m-th neighbor feature block n_m, and m = 1, ..., k.
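A minimal sketch of this graph construction follows; names and toy feature vectors are illustrative, and the patent's node features (produced by 32 res_block layers) are not reproduced here:

```python
import numpy as np

def build_patch_graph(target_feat, neighbor_feats):
    """One graph per target block: nodes carry block features, and each
    edge from neighbor m to the target carries the feature difference
    delta_m = f_a - f_{n_m} as its label."""
    edge_labels = [target_feat - nf for nf in neighbor_feats]
    return {"target": target_feat,
            "neighbors": neighbor_feats,
            "edge_labels": edge_labels}

f_a = np.array([1.0, 2.0, 3.0])            # target block feature (toy)
f_n = [np.array([0.5, 2.0, 2.0]),          # k = 2 neighbor block features
       np.array([1.0, 1.0, 3.5])]
g = build_patch_graph(f_a, f_n)            # edge_labels[0] == [0.5, 0.0, 1.0]
```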
Step S3 specifically includes:
S31: the k neighbor feature blocks of size ds×ds are aggregated through a graph convolution network. The aggregation formula is:

F_hr(a) = (1/C_a(F_lr)) · Σ_{n_m ∈ S_a} θ(Δ_{a,m}) · F^{ds}_{n_m}
s32: for each target feature block a, the aggregation of high-resolution neighbor blocks is performed by the above formula, whereinRepresenting the mth ds x ds neighbor feature block,for an improved edge-condition convolutional network, the variable isNamely, the edges of the constructed graph are adaptively weighted through the training of the network. Ca(Flr) In order to normalize the factors, the method comprises the steps of,
The high-resolution features of the target feature block are obtained through this aggregation process.
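The aggregation step can be sketched as follows; since the patent's edge-conditioned convolution θ is a trained network, a fixed Gaussian kernel on the edge label stands in for it here (an assumption), while the normalization by C_a is kept:

```python
import numpy as np

def aggregate_neighbors(neighbor_blocks, edge_labels):
    """Aggregate k high-resolution ds x ds neighbor blocks into one block,
    weighting each block by a function of its edge label and dividing by
    the normalization factor C_a (the sum of the weights here)."""
    # Stand-in for the learned edge-conditioned weighting theta(delta).
    weights = np.array([np.exp(-np.sum(d ** 2)) for d in edge_labels])
    C_a = weights.sum()
    blocks = np.stack(neighbor_blocks)            # shape (k, ds, ds)
    return np.tensordot(weights, blocks, axes=1) / C_a

blocks = [np.ones((2, 2)), 3 * np.ones((2, 2))]   # two toy ds x ds blocks
deltas = [np.zeros(3), np.zeros(3)]               # equal edge labels
agg = aggregate_neighbors(blocks, deltas)         # degenerates to the mean
```

With equal edge labels the weights are equal and the aggregation reduces to a plain average; the learned θ would instead favor neighbors whose features match the target block.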
S33: by sliding the window over Elr, the super-resolution reconstructed image of the whole image is obtained.
Step S4 specifically includes:
S41: the super-resolved remote sensing image obtained in S3 is first input into the dual learning module, which consists of convolutional layers and activation layers and realizes the mapping from the super-resolution image back to a low-resolution image; the low-resolution image obtained through the dual learning module is used to calculate the dual loss, which constrains the super-resolution reconstruction process and reduces the size of the possible solution space of the reconstructed image.
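As a sketch of the degradation direction of the dual module (the patent uses learned convolution and activation layers; plain block averaging stands in for that learned mapping, and all names are illustrative):

```python
import numpy as np

def dual_degrade(sr_image, scale=2):
    """Map a super-resolved image back to low resolution by block
    averaging; a stand-in for the dual module's conv + activation stack."""
    H, W = sr_image.shape
    h, w = H // scale, W // scale
    cropped = sr_image[:h * scale, :w * scale]
    return cropped.reshape(h, scale, w, scale).mean(axis=(1, 3))

sr = np.arange(16, dtype=float).reshape(4, 4)     # toy "super-resolved" image
lr = dual_degrade(sr, scale=2)                    # 2 x 2 degraded image
x_lr = np.zeros((2, 2))                           # toy original low-res image
dual_err = np.abs(lr - x_lr).mean()               # L1 dual error
```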
Step S41 specifically includes:
S441: for paired images, i.e. the data set contains low-resolution images and the corresponding high-resolution image data, the loss function is:

L = L1(y′i, yi) + λ1·Lmse(vgg(y′i), vgg(yi)) + λ2·L1(x′i, xi)
wherein L1 denotes the L1 loss, Lmse denotes the L2 loss, xi denotes the original low-resolution image, y′i is the high-resolution image obtained after reconstructing the low-resolution image xi, and yi denotes the paired high-resolution image corresponding to xi; L1(y′i, yi) is the pixel-level error between the super-resolved high-resolution image and the real image; λ1·Lmse(vgg(y′i), vgg(yi)) is the perceptual error between the super-resolved high-resolution image and the real image; λ2·L1(x′i, xi) is the pixel error between the low-resolution image obtained by passing the super-resolved reconstructed image through the dual learning module and the original low-resolution image.
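The paired loss can be sketched directly from the three terms above; the VGG features are passed in precomputed, and the weight values lam1 and lam2 are illustrative, not taken from the patent:

```python
import numpy as np

def paired_loss(y_sr, y_hr, vgg_sr, vgg_hr, x_dual, x_lr, lam1=0.1, lam2=0.1):
    """Pixel L1 + weighted perceptual MSE + weighted dual L1."""
    l1_pix = np.abs(y_sr - y_hr).mean()           # L1(y'_i, y_i)
    l_perc = ((vgg_sr - vgg_hr) ** 2).mean()      # Lmse(vgg(y'_i), vgg(y_i))
    l_dual = np.abs(x_dual - x_lr).mean()         # L1(x'_i, x_i)
    return l1_pix + lam1 * l_perc + lam2 * l_dual

# Identical SR/HR images and VGG features: only the dual term remains.
loss = paired_loss(np.ones(4), np.ones(4), np.zeros(4), np.zeros(4),
                   np.ones(4), np.zeros(4))
```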
S442: for unpaired images, i.e. one part of the data set contains both low-resolution images and corresponding high-resolution image data while another part contains only low-resolution image data, the loss function is:

L = λp·L1(y′i, yi) + λp·λ1·Lmse(vgg(y′i), vgg(yi)) + λ2·L1(x′i, xi)
wherein L1 denotes the L1 loss, Lmse denotes the L2 loss, xi denotes the original low-resolution image, y′i is the high-resolution image obtained after reconstructing the low-resolution image xi, and yi denotes the paired high-resolution image corresponding to xi.
Compared with the loss function for paired data, this loss function adds a parameter λp as a constraint to the first two terms, and the value of λp depends on whether the image is paired data: if the images are paired, i.e. the sample includes both a low-resolution image and a high-resolution image, λp = 1; if the images are unpaired, i.e. the sample contains only a low-resolution image, λp = 0.
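The gating by λp can be sketched as follows (a hypothetical helper; for unpaired samples the high-resolution ground truth and its VGG features are simply absent, so the supervised terms are skipped):

```python
import numpy as np

def semi_paired_loss(y_sr, y_hr, vgg_sr, vgg_hr, x_dual, x_lr,
                     paired, lam1=0.1, lam2=0.1):
    """lam_p = 1 for paired samples, 0 for unpaired: unpaired samples are
    trained only through the dual reconstruction term."""
    lam_p = 1.0 if paired else 0.0
    l1_pix = np.abs(y_sr - y_hr).mean() if paired else 0.0
    l_perc = ((vgg_sr - vgg_hr) ** 2).mean() if paired else 0.0
    l_dual = np.abs(x_dual - x_lr).mean()
    return lam_p * l1_pix + lam_p * lam1 * l_perc + lam2 * l_dual

# Unpaired sample: no ground-truth HR image, only the dual term contributes.
loss_u = semi_paired_loss(np.ones(4), None, None, None,
                          np.ones(4), np.zeros(4), paired=False)
```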
Step S5 specifically includes:
S51: the data set is taken from the 3K VEHICLE_SR data set and contains 1170 remote sensing images. In order to train the model sufficiently and reduce overfitting, the invention applies data augmentation operations such as random cropping, rotation, and color-channel conversion to the cropped data so as to enlarge the training set. Model training is complete when the loss function has decreased to its minimum, no longer changes, and the model parameters are stable.
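The stopping rule described in S51 (loss decreased to the minimum and no longer changing) can be sketched as a simple plateau check; the patience and tolerance values are assumptions:

```python
def training_converged(loss_history, patience=3, tol=1e-4):
    """True when the last `patience` losses all sit within `tol` of the
    best loss seen so far, i.e. the loss curve has flattened out."""
    if len(loss_history) < patience + 1:
        return False
    best = min(loss_history)
    return all(abs(l - best) < tol for l in loss_history[-patience:])

history = [1.0, 0.5, 0.30, 0.2501, 0.2500, 0.2500, 0.2500]
done = training_converged(history)   # True: the loss has plateaued
```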
The method fully exploits an intrinsic characteristic of remote sensing images, namely that texture self-similarity within a given area is very pronounced; using this characteristic, features extracted from self-similar blocks assist the super-resolution reconstruction of the low-resolution image. Through the construction of the graph, on the one hand the low-resolution and high-resolution features of the remote sensing image are connected to realize cross-scale feature connection, and on the other hand the relations among the k self-similar blocks in the image are established dynamically, realizing a non-local aggregation effect while saving computing resources compared with a global method. The dual learning module constrains the super-resolution reconstruction process and the loss function, which reduces the size of the solution space of super-resolution reconstruction and improves the super-resolution accuracy and visual effect.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (9)
1. A remote sensing image super-resolution reconstruction method based on a dual learning graph network is characterized by comprising the following steps:
the method comprises the steps of conducting down-sampling on an original low-resolution remote sensing image to obtain a plurality of target feature blocks, and searching k feature blocks similar to the target feature blocks in down-sampling features;
constructing a graph convolution network by the target feature block and the k similar feature blocks corresponding to the target feature block;
aggregating the characteristics of the k similar characteristic blocks into a target characteristic block through a graph convolution network to obtain a high-resolution characteristic block of the target characteristic block, and splicing the high-resolution characteristic blocks to obtain a super-resolution reconstruction image;
the super-resolution reconstruction image is degraded into a low-resolution image with the same size as the original low-resolution image through a dual learning module, the dual error between the degraded low-resolution image and the original low-resolution image is calculated, and the dual error is added into a loss function;
and when the loss function stabilizes and the model converges, obtaining the trained super-resolution reconstruction model, inputting the remote sensing image to be super-resolved into the super-resolution reconstruction model, and running the super-resolution reconstruction model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image.
2. The remote sensing image super-resolution reconstruction method based on the dual learning graph network as claimed in claim 1, wherein the step of downsampling the original low-resolution remote sensing image to obtain a plurality of target feature blocks and searching k feature blocks most similar to the target feature blocks in the downsampled features comprises:
the method comprises the steps of carrying out down-sampling on an original low-resolution remote sensing image, obtaining high-level semantic features through a VGG-19 network, obtaining a plurality of target feature blocks in a high-level semantic feature space by using a sliding window mechanism, and searching k feature blocks which are most similar to the target feature blocks in the down-sampling features;
the downsampling of the low-resolution remote sensing image, acquiring high-level semantic features through a VGG-19 network, acquiring a plurality of target feature blocks by using a sliding window mechanism in a high-level semantic feature space, and searching k feature blocks similar to the target feature blocks in downsampling features comprises the following steps:
down-sampling the low-resolution remote sensing image Ilr using bicubic interpolation to obtain the down-sampled image Ilr↓s, wherein the down-sampling ratio is the desired super-resolution reconstruction ratio;
extracting high-level semantic features from Ilr and Ilr↓s with a VGG-19 network to obtain Elr and Elr↓s, wherein Elr and Elr↓s denote the high-level semantic features;
applying a sliding window of size d×d on Elr to obtain a target feature block a; calculating, in Elr↓s, the Euclidean distance between each d×d candidate feature block and the target feature block a; finding the k most similar feature blocks as the similar feature blocks of a; and obtaining, through feature mapping, similar feature blocks of size ds×ds, i.e. high-resolution feature blocks, wherein d×d denotes the size of the sliding window and ds×ds denotes the size of a similar feature block.
3. The remote sensing image super-resolution reconstruction method based on the dual learning graph network as claimed in claim 1, wherein the target feature block and its k corresponding similar feature blocks are constructed into a graph convolution network; the target feature block a and the similar feature blocks are regarded as nodes, and the connections between the target feature block and the similar feature blocks are regarded as edges; the node labels are the features obtained by passing the feature blocks through 32 residual block layers; the label of each edge is the feature difference between the target feature block and the corresponding similar feature block, calculated as Δ_{a,m} = E_a − E_{n_m}, where E_a denotes the high-level semantic features of the target feature block a, E_{n_m} denotes the high-level semantic features of the m-th similar feature block, m = 1, ..., k, Δ_{a,m} denotes the feature difference between the target block and the similar block, m denotes the index, a denotes the target feature block, and n_m denotes the m-th similar feature block.
4. The remote sensing image super-resolution reconstruction method based on the dual learning graph network as claimed in claim 1, wherein the step of aggregating the features of k similar feature blocks into the target feature block through the graph convolution network to obtain the high-resolution feature block of the target feature block, and splicing the high-resolution feature blocks to obtain the super-resolution reconstruction image comprises the steps of:
aggregating the characteristics of the k similar characteristic blocks to a target characteristic block through a graph convolution network to obtain a high-resolution characteristic block of the target characteristic block, and splicing the high-resolution characteristic block through the sliding of a sliding window on a low-resolution image to obtain a super-resolution reconstruction image;
the method for obtaining the super-resolution reconstruction image by aggregating the characteristics of the k similar characteristic blocks to the target characteristic block through the graph convolution network to obtain the high-resolution characteristic block of the target characteristic block and splicing the high-resolution characteristic blocks through the sliding of the sliding window on the low-resolution image comprises the following steps:
aggregating the k similar feature blocks of size ds×ds through the graph convolution network, wherein the aggregation formula is:

F_hr(a) = (1/C_a(F_lr)) · Σ_{n_m ∈ S_a} θ(Δ_{a,m}) · F^{ds}_{n_m}

wherein F^{ds}_{n_m} denotes the m-th ds×ds similar feature block, with ds×ds denoting the size of a similar feature block; θ is an improved edge-conditioned convolution network whose input variable is the edge label Δ_{a,m}, which denotes the feature difference between the target block and the similar block; n_m denotes the m-th similar feature block; S_a denotes the set of similar feature blocks; C_a(F_lr) denotes the normalization factor computed over S_a; and a denotes the target feature block;
for each target feature block a, carrying out aggregation of the high-resolution neighbor blocks through the formula, and obtaining the high-resolution feature block of the target feature block through an aggregation process;
and sliding the sliding window on the low-resolution image to obtain a super-resolution reconstruction image of the whole image.
5. The remote sensing image super-resolution reconstruction method based on the dual learning graph network as claimed in claim 1, wherein the super-resolution reconstruction image is degraded into a low-resolution image with the same size as the original low-resolution image through a dual learning module, a dual error between the degraded low-resolution image and the original low-resolution image is calculated, and the dual error is added to the loss function, comprising:
and inputting the obtained super-resolution reconstruction image into a dual learning module to obtain a low-resolution image, calculating dual loss and constraining the super-resolution reconstruction process, wherein the dual learning module consists of a convolutional layer and an activation layer.
6. The remote sensing image super-resolution reconstruction method based on the dual learning graph network as claimed in claim 5, wherein the obtained super-resolution reconstructed image is input into the dual learning module to obtain a low-resolution image, the dual loss is calculated, and the super-resolution reconstruction process is constrained; for paired images, i.e. the data set includes low-resolution images and the corresponding high-resolution image data, the loss function is:

L = L1(y′i, yi) + λ1·Lmse(vgg(y′i), vgg(yi)) + λ2·L1(x′i, xi)
wherein λ1 and λ2 denote weight parameters; L1 denotes the mean absolute error; Lmse denotes the mean squared error; xi denotes the original low-resolution image; y′i is the high-resolution image obtained after reconstructing the low-resolution image xi; yi denotes the paired high-resolution image corresponding to the low-resolution image xi; vgg(y′i) denotes the features of y′i obtained through the VGG-19 network, and vgg(yi) denotes the features of yi obtained through the VGG-19 network; L1(y′i, yi) denotes the pixel-level error between the super-resolved high-resolution image and the real image; λ1·Lmse(vgg(y′i), vgg(yi)) denotes the perceptual error between the super-resolved high-resolution image and the real image; λ2·L1(x′i, xi) denotes the pixel error between the low-resolution image obtained by passing the super-resolved reconstructed image through the dual learning module and the original low-resolution image.
7. The remote sensing image super-resolution reconstruction method based on the dual learning graph network as claimed in claim 6, wherein the obtained super-resolution reconstructed image is input into the dual learning module to obtain a low-resolution image, the dual loss is calculated, and the super-resolution reconstruction process is constrained; for unpaired images, i.e. one part of the data set includes low-resolution images with corresponding high-resolution image data while another part includes only low-resolution image data, the loss function is:

L = λp·L1(y′i, yi) + λp·λ1·Lmse(vgg(y′i), vgg(yi)) + λ2·L1(x′i, xi)
wherein, compared with the loss function for paired data, a parameter λp is added as a constraint to the first two terms, and the value of λp depends on whether the image is paired data: if the image is paired data, i.e. the sample includes both a low-resolution image and a high-resolution image, λp = 1; if the image is unpaired data, i.e. the sample includes only a low-resolution image, λp = 0; λp·L1(y′i, yi) denotes the pixel-level error between the super-resolved high-resolution image and the real image; λp·λ1·Lmse(vgg(y′i), vgg(yi)) denotes the perceptual error between the super-resolved high-resolution image and the real image; λ2·L1(x′i, xi) denotes the pixel error between the low-resolution image obtained by passing the super-resolved reconstructed image through the dual learning module and the original low-resolution image.
8. The remote sensing image super-resolution reconstruction method based on the dual learning graph network as claimed in claim 1, wherein, when the loss function stabilizes and the model converges, obtaining the trained super-resolution reconstruction model, inputting the remote sensing image to be super-resolved into the super-resolution reconstruction model, and running the super-resolution reconstruction model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image comprises:
training the model to update its parameters until the loss function stabilizes and the model converges, thereby obtaining the trained super-resolution reconstruction model; taking the remote sensing image to be super-resolved as the input of the super-resolution reconstruction model, and running the super-resolution reconstruction model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image;
updating the parameters of the model during training until the loss function stabilizes and the model converges, thereby obtaining the trained super-resolution reconstruction model; taking the remote sensing image to be super-resolved as the input of the super-resolution reconstruction model, and running the super-resolution reconstruction model to obtain the high-resolution remote sensing image corresponding to the low-resolution remote sensing image, wherein the data set is a vehicle super-resolution data set containing 1170 remote sensing images; the cropped data are subjected to data augmentation operations such as random cropping, rotation, and color-channel conversion; model training is complete when the loss function has decreased to its minimum and no longer changes and the model parameters are stable.
9. The remote sensing image super-resolution reconstruction method based on the dual learning graph network as claimed in claim 1, wherein the original low-resolution image carries the low-resolution features of the entire image, and the target feature block carries the low-resolution features of a local region of the image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110961671.4A CN113643182B (en) | 2021-08-20 | 2021-08-20 | Remote sensing image super-resolution reconstruction method based on dual learning graph network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110961671.4A CN113643182B (en) | 2021-08-20 | 2021-08-20 | Remote sensing image super-resolution reconstruction method based on dual learning graph network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113643182A true CN113643182A (en) | 2021-11-12 |
CN113643182B CN113643182B (en) | 2024-03-19 |
Family
ID=78423139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110961671.4A Active CN113643182B (en) | 2021-08-20 | 2021-08-20 | Remote sensing image super-resolution reconstruction method based on dual learning graph network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113643182B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418854A (en) * | 2022-01-24 | 2022-04-29 | 北京航空航天大学 | Unsupervised remote sensing image super-resolution reconstruction method based on image recursion |
CN116029907A (en) * | 2023-02-14 | 2023-04-28 | 江汉大学 | Processing method, device and processing equipment for image resolution reduction model |
CN116485652A (en) * | 2023-04-26 | 2023-07-25 | 北京卫星信息工程研究所 | Super-resolution reconstruction method for remote sensing image vehicle target detection |
CN116883692A (en) * | 2023-06-06 | 2023-10-13 | 中国地质大学(武汉) | Spectrum feature extraction method, device and storage medium of multispectral remote sensing image |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976435A (en) * | 2010-10-07 | 2011-02-16 | 西安电子科技大学 | Combination learning super-resolution method based on dual constraint |
US20150093018A1 (en) * | 2013-09-27 | 2015-04-02 | Kofax, Inc. | Systems and methods for three dimensional geometric reconstruction of captured image data |
CN111062872A (en) * | 2019-12-17 | 2020-04-24 | 暨南大学 | Image super-resolution reconstruction method and system based on edge detection |
CN111667412A (en) * | 2020-06-16 | 2020-09-15 | 中国矿业大学 | Method and device for reconstructing image super-resolution based on cross learning network |
CN111899168A (en) * | 2020-07-02 | 2020-11-06 | 中国地质大学(武汉) | Remote sensing image super-resolution reconstruction method and system based on feature enhancement |
CN112184552A (en) * | 2020-09-23 | 2021-01-05 | 国电南瑞科技股份有限公司 | Sub-pixel convolution image super-resolution method based on high-frequency feature learning |
WO2021056969A1 (en) * | 2019-09-29 | 2021-04-01 | 中国科学院长春光学精密机械与物理研究所 | Super-resolution image reconstruction method and device |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976435A (en) * | 2010-10-07 | 2011-02-16 | 西安电子科技大学 | Combination learning super-resolution method based on dual constraint |
US20150093018A1 (en) * | 2013-09-27 | 2015-04-02 | Kofax, Inc. | Systems and methods for three dimensional geometric reconstruction of captured image data |
WO2021056969A1 (en) * | 2019-09-29 | 2021-04-01 | 中国科学院长春光学精密机械与物理研究所 | Super-resolution image reconstruction method and device |
CN111062872A (en) * | 2019-12-17 | 2020-04-24 | 暨南大学 | Image super-resolution reconstruction method and system based on edge detection |
CN111667412A (en) * | 2020-06-16 | 2020-09-15 | 中国矿业大学 | Method and device for reconstructing image super-resolution based on cross learning network |
CN111899168A (en) * | 2020-07-02 | 2020-11-06 | 中国地质大学(武汉) | Remote sensing image super-resolution reconstruction method and system based on feature enhancement |
CN112184552A (en) * | 2020-09-23 | 2021-01-05 | 国电南瑞科技股份有限公司 | Sub-pixel convolution image super-resolution method based on high-frequency feature learning |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418854A (en) * | 2022-01-24 | 2022-04-29 | 北京航空航天大学 | Unsupervised remote sensing image super-resolution reconstruction method based on image recursion |
CN114418854B (en) * | 2022-01-24 | 2024-08-02 | 北京航空航天大学 | Unsupervised remote sensing image super-resolution reconstruction method based on image recursion |
CN116029907A (en) * | 2023-02-14 | 2023-04-28 | 江汉大学 | Processing method, device and processing equipment for image resolution reduction model |
CN116029907B (en) * | 2023-02-14 | 2023-08-08 | 江汉大学 | Processing method, device and processing equipment for image resolution reduction model |
CN116485652A (en) * | 2023-04-26 | 2023-07-25 | 北京卫星信息工程研究所 | Super-resolution reconstruction method for remote sensing image vehicle target detection |
CN116485652B (en) * | 2023-04-26 | 2024-03-01 | 北京卫星信息工程研究所 | Super-resolution reconstruction method for remote sensing image vehicle target detection |
CN116883692A (en) * | 2023-06-06 | 2023-10-13 | 中国地质大学(武汉) | Spectrum feature extraction method, device and storage medium of multispectral remote sensing image |
Also Published As
Publication number | Publication date |
---|---|
CN113643182B (en) | 2024-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113643182A (en) | Remote sensing image super-resolution reconstruction method based on dual learning graph network | |
CN108345890B (en) | Image processing method, device and related equipment | |
Zhu et al. | GAN‐Based Image Super‐Resolution with a Novel Quality Loss | |
CN109800692B (en) | Visual SLAM loop detection method based on pre-training convolutional neural network | |
CN109064405A (en) | A kind of multi-scale image super-resolution method based on dual path network | |
WO2023060746A1 (en) | Small image multi-object detection method based on super-resolution | |
CN111915484A (en) | Reference image guiding super-resolution method based on dense matching and self-adaptive fusion | |
CN110349087B (en) | RGB-D image high-quality grid generation method based on adaptive convolution | |
CN113066017B (en) | Image enhancement method, model training method and equipment | |
CN111640060A (en) | Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module | |
CN113077505B (en) | Monocular depth estimation network optimization method based on contrast learning | |
CN113538234A (en) | Remote sensing image super-resolution reconstruction method based on lightweight generation model | |
CN111402138A (en) | Image super-resolution reconstruction method of supervised convolutional neural network based on multi-scale feature extraction fusion | |
CN113240683A (en) | Attention mechanism-based lightweight semantic segmentation model construction method | |
CN114092824A (en) | Remote sensing image road segmentation method combining intensive attention and parallel up-sampling | |
WO2023206343A1 (en) | Image super-resolution method based on image pre-training strategy | |
CN111667412A (en) | Method and device for reconstructing image super-resolution based on cross learning network | |
CN114862679A (en) | Single-image super-resolution reconstruction method based on residual error generation countermeasure network | |
CN113393385B (en) | Multi-scale fusion-based unsupervised rain removing method, system, device and medium | |
CN114663880A (en) | Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism | |
CN116385265B (en) | Training method and device for image super-resolution network | |
CN112215140A (en) | 3-dimensional signal processing method based on space-time countermeasure | |
CN111696167A (en) | Single image super-resolution reconstruction method guided by self-example learning | |
CN112884643A (en) | Infrared image super-resolution reconstruction method based on EDSR network | |
CN115797674A (en) | Fast stereo matching algorithm for self-adaptive iterative residual optimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |