CN112991167A - Aerial image super-resolution reconstruction method based on layered feature fusion network - Google Patents

Aerial image super-resolution reconstruction method based on layered feature fusion network

Info

Publication number
CN112991167A
CN112991167A (application CN202110111223.5A)
Authority
CN
China
Prior art keywords
image
convolution
aerial
feature map
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110111223.5A
Other languages
Chinese (zh)
Inventor
王帮海
杨夏宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202110111223.5A
Publication of CN112991167A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an aerial image super-resolution reconstruction method based on a layered feature fusion network. Building on the hierarchical structure of a U-shaped network, the method improves the fusion of features at different scales, reduces the loss of image feature information during convolution, and improves the model's reconstruction of complex aerial scenes; the gain is especially pronounced for tasks with larger magnification factors. Model training uses densely connected residual blocks, so a good aerial image reconstruction effect is obtained even without a very deep network, which reduces the complexity of the overall model. The method also reconstructs well in other environments with complex noise, such as underwater images and rain-and-fog weather images, which demonstrates a degree of generality in the model.

Description

Aerial image super-resolution reconstruction method based on layered feature fusion network
Technical Field
The invention relates to the technical field of low-level computer vision, in particular to an aerial image super-resolution reconstruction method based on a hierarchical feature fusion network.
Background
During imaging, an image is often degraded by various environmental factors. In extreme environments such as rainy and foggy weather, underwater operation, high-speed photography, and high-altitude long-range aerial photography, shooting cost and the physical limits of the equipment expose imaging quality to problems such as under-/over-exposure, motion artifacts, and noise interference. In general, upgrading the shooting equipment is the simplest and most direct way to improve imaging: improving the sensor manufacturing process and shrinking the pixel size allows more pixels to be mounted on a CMOS sensor of the same area, so picture quality improves with the pixel count of a single image. However, even setting cost aside, image degradation cannot be eliminated simply by improving hardware performance. To generate high-quality images at lower cost and with better adaptability, researchers instead raise image resolution through image processing techniques: the higher the resolution, the richer the texture details and the clearer the image. Image super-resolution reconstruction is such a technique; it reconstructs image detail in software without added hardware cost and has wide application in medical image reconstruction, face/license-plate surveillance recognition, satellite remote sensing image reconstruction, video-stream super-resolution, and beyond.
Chinese patent publication No. CN111583115A, published on 08/25/2020, discloses a single-image super-resolution reconstruction method and system based on a deep attention network, comprising: step 1: preprocess the source image training dataset DIV2K to obtain a training set; step 2: build a convolutional neural network capable of super-resolution reconstruction of images; step 3: feed the training set from step 1 into the network built in step 2 and train it to obtain a super-resolution reconstruction model; step 4: input the low-resolution image to be processed into the model from step 3 and output the single-image super-resolution reconstruction. This patent's deep-learning super-resolution network model has high complexity and a low feature utilization rate. In complex environments such as high-altitude shooting, severe loss of image features and environmental noise interference often degrade its reconstruction. Network depth also matters for performance, but gaining performance by stacking more layers inflates the size of the final model.
Disclosure of Invention
The invention provides an aerial image super-resolution reconstruction method based on a layered feature fusion network: a novel, lightweight method that fully extracts image features and better reconstructs complex aerial images.
In order to solve the technical problems, the technical scheme of the invention is as follows:
an aerial image super-resolution reconstruction method based on a layered feature fusion network comprises the following steps:
S1: acquiring high-resolution aerial images of different scenes and dividing them into training data and verification data;
S2: preprocessing the aerial images;
S3: building a network model, wherein the network model adopts a layered U-shaped structure, symmetric feature maps are fused within the U-shaped structure, and the channel count of the input feature maps reaches its maximum at the bottom layer of the U-shaped structure; the network model is trained with the training data of step S1 and verified with the verification data of step S1;
S4: reconstructing a new aerial image with the trained network model to obtain a high-resolution image.
Preferably, in step S1, 114 high-resolution, high-quality aerial images are selected from a public aerial photography dataset, of which 100 serve as training data and 14 as verification data.
Preferably, the scenes of the aerial images in step S1 include car parks, airports, residential areas, sports grounds, harbors, viaducts, farmlands, and highways.
Preferably, the preprocessing in step S2 specifically includes:
The 100 aerial images used as training data are cut into small pictures with a resolution of 480 × 480, 8234 in total; before input into the network model, these are further cut to 96 × 96 and fed into the network in batches for training.
Preferably, the U-shaped structure is specifically:
The input image is converted by an initial convolution into a coarse feature map F_0. The coarse feature map passes through two downward convolution modules to obtain feature map F_1 and feature map F_2, the channel count of F_2 being increased to four times that of F_0. F_2 is then fed into the stacked dense residual blocks for feature extraction, and the output feature maps are fused into F'_2 by a 3 × 3 convolution. F'_2 and F_1 are symmetrically spliced and input into dense residual blocks again for high-frequency feature extraction; the final output is fused by a 3 × 3 convolution, spliced with the coarse feature map F_0, and input into an up-sampling reconstruction module to generate the high-resolution image.
Preferably, the mathematical representation of the network model is as follows:

$$F_1 = f_{CB}(F_0)$$
$$F_2 = f_{CB}(F_1)$$
$$F'_2 = f_{Pixel}(f_{DB\times N}(F_2)), \quad N = 1, 2, \ldots, n$$
$$F'_{1,2} = f_{Pixel}(f_{DB\times N}(H_{concat}(F_1, F'_2))), \quad N = 1, 2, \ldots, n$$
$$F_{HR} = f_{1\times 1}(H_{concat}(F_0, F'_{1,2})) + F_0$$

where $f_{CB}$ denotes the convolution module operation, $f_{DB\times N}$ denotes $N$ stacked dense residual block operations, $f_{Pixel}$ denotes the sub-pixel convolution up-sampling operation, $H_{concat}$ denotes the splicing of image features, $f_{1\times 1}$ denotes a $1\times 1$ convolution dimensionality-reduction operation, $F'_{1,2}$ is a feature map, and $F_{HR}$ is the resulting high-resolution image.
Preferably, the convolution module comprises two 3 × 3 convolution kernels and a LeakyReLU activation function: the feature map first passes through the first 3 × 3 convolution, then through the LeakyReLU activation, and finally through the second 3 × 3 convolution.
Preferably, the dense residual block comprises four basic residual modules, where the feature information from the first three basic residual modules is routed directly to the tail of the dense residual block and spliced with the output of the last residual module; finally these features are fused by a convolution.
Preferably, the basic residual module comprises two 3 × 3 convolution kernels, a LeakyReLU activation function, and a lightweight attention module: the feature map passes through the first 3 × 3 convolution, the LeakyReLU activation, and the second 3 × 3 convolution in turn, and the feature information obtained by the lightweight attention module is finally added to the feature map for output.
Preferably, the lightweight attention module uses the ECA model attention module, whose parameter matrix is:

$$W_k = \begin{pmatrix}
w^{1,1} & \cdots & w^{1,k} & 0 & \cdots & 0 \\
0 & w^{2,2} & \cdots & w^{2,k+1} & \cdots & 0 \\
\vdots & & \ddots & & \ddots & \vdots \\
0 & \cdots & 0 & w^{C,C-k+1} & \cdots & w^{C,C}
\end{pmatrix}$$

where $w$ denotes the weight parameter of each channel. The matrix holds $k \times C$ parameters, $k$ being the kernel size of a learnable one-dimensional convolution; this means only cross-channel attention among $k$ adjacent channels is considered, and $k$ is learned adaptively from the channel count of the input feature map:

$$k = \psi(C) = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{odd}$$

where $|t|_{odd}$ denotes the odd number nearest to $t$. A mapping exists between the channel dimension $C$ and the convolution kernel size $k$; the simplest is the linear mapping $\phi(k) = \gamma k - b$, but the representational capacity of a linear mapping for cross-channel interaction is limited, so common practice extends it to a non-linear mapping based on powers of two: $\phi(k) = 2^{\gamma k - b}$. The channel dimension $C$ and the kernel size $k$ can therefore be related through the formula above; the ECA model attention module sets $\gamma$ and $b$ to the constants 2 and 1, respectively. For example, $C = 64$ gives $k = |3.5|_{odd} = 3$, and $C = 256$ gives $k = |4.5|_{odd} = 5$;
meanwhile, to make the module even lighter, the ECA module shares the same learned weights across all channels, so the final total parameter count is only $k$.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
Based on the hierarchical structure of the U-shaped network, the method improves the fusion of features at different scales, reduces the loss of image feature information during convolution, and improves the model's reconstruction of complex aerial scenes; the gain is especially pronounced for tasks with larger magnification factors. Model training uses densely connected residual blocks, so a good aerial image reconstruction effect is obtained even without a very deep network, which reduces the complexity of the overall model. The method also reconstructs well in other environments with complex noise, such as underwater images and rain-and-fog weather images, which demonstrates a degree of generality in the model.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic structural diagram of a network model with a U-shaped structure in the embodiment.
FIG. 3 is a diagram illustrating an exemplary sub-pixel convolution method.
FIG. 4 is a schematic diagram of the dense residual block structure in the embodiment.
FIG. 5 is a diagram illustrating a basic residual module structure according to an embodiment.
Fig. 6 is a schematic structural diagram of the lightweight attention module in the embodiment.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides an aerial image super-resolution reconstruction method based on a layered feature fusion network, as shown in fig. 1, comprising the following steps:
S1: acquiring high-resolution aerial images of different scenes and dividing them into training data and verification data;
S2: preprocessing the aerial images;
S3: building a network model, wherein the network model adopts a layered U-shaped structure, symmetric feature maps are fused within the U-shaped structure, and the channel count of the input feature maps reaches its maximum at the bottom layer of the U-shaped structure; the network model is trained with the training data of step S1 and verified with the verification data of step S1;
S4: reconstructing a new aerial image with the trained network model to obtain a high-resolution image.
In step S1, 114 high-resolution, high-quality aerial images are selected from a public aerial photography dataset, of which 100 serve as training data and 14 as verification data.
The scenes of the aerial images in step S1 include car parks, airports, residential areas, sports grounds, harbors, viaducts, farmlands, and highways.
The preprocessing in step S2 includes:
The 100 aerial images used as training data are cut into small pictures with a resolution of 480 × 480, 8234 in total; before input into the network model, these are further cut to 96 × 96 and fed into the network in batches for training.
In this embodiment, 114 selected high-quality, high-resolution aerial images are used; 100 of them are cut into 480 × 480 sub-images, 8234 in total. All images are bicubically downsampled in MATLAB to generate the corresponding low-resolution images, which are paired with the high-resolution images to form the paired aerial photography dataset required for network training.
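This preprocessing step can be sketched in a few lines of Python. The sketch below is illustrative only: the directory layout, file naming, and scale factor of 4 are assumptions, and Pillow's bicubic filter is close to, but not bit-identical with, the MATLAB downsampling used in the embodiment:

```python
# Hypothetical preprocessing sketch: crop HR aerial images into 480x480
# sub-images and generate bicubic low-resolution counterparts.
import os
from PIL import Image

def make_pairs(hr_dir, out_dir, patch=480, scale=4):
    os.makedirs(os.path.join(out_dir, "HR"), exist_ok=True)
    os.makedirs(os.path.join(out_dir, "LR"), exist_ok=True)
    idx = 0
    for name in sorted(os.listdir(hr_dir)):
        img = Image.open(os.path.join(hr_dir, name)).convert("RGB")
        w, h = img.size
        for top in range(0, h - patch + 1, patch):
            for left in range(0, w - patch + 1, patch):
                hr = img.crop((left, top, left + patch, top + patch))
                # Bicubic downsampling stands in for the MATLAB step;
                # results differ slightly from MATLAB's imresize.
                lr = hr.resize((patch // scale, patch // scale), Image.BICUBIC)
                hr.save(os.path.join(out_dir, "HR", f"{idx:05d}.png"))
                lr.save(os.path.join(out_dir, "LR", f"{idx:05d}.png"))
                idx += 1
```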
As shown in fig. 2, the U-shaped structure is specifically:
The input image is converted by an initial convolution into a coarse feature map F_0 with 64 feature channels. Each downward convolution module doubles the channel count, so the coarse feature map successively yields feature map F_1 (128 channels) and feature map F_2 (256 channels) after the two downward convolution modules; the channel count of F_2 is thus four times that of F_0. F_2 is then fed into the stacked dense residual blocks for feature extraction, and the output feature maps are fused into F'_2 by a 3 × 3 convolution. F'_2 and F_1 are symmetrically spliced and input into dense residual blocks again for high-frequency feature extraction; the final output is fused by a 3 × 3 convolution, spliced with the coarse feature map F_0, and input into the up-sampling reconstruction module to generate the high-resolution image. The up-sampling reconstruction module uses the sub-pixel convolution method, whose structure is shown in FIG. 3: hidden layers extract features from the low-resolution image and finally produce r² · C feature channels, where r is the magnification factor. The sub-pixel convolution layer rearranges all the channel values at each pixel of the feature maps into an r × r block of pixels in the high-resolution image space, thereby assembling the high-resolution image.
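A minimal sketch of this sub-pixel up-sampling step, assuming PyTorch; the channel counts and scale factor here are illustrative, not taken from the patent:

```python
# A convolution expands the channel count to C * r^2, then PixelShuffle
# rearranges each group of r^2 channel values into an r x r block of
# high-resolution pixels.
import torch
import torch.nn as nn

class SubPixelUpsampler(nn.Module):
    def __init__(self, channels=64, scale=4, out_channels=3):
        super().__init__()
        self.expand = nn.Conv2d(channels, out_channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.expand(x))

x = torch.randn(1, 64, 96, 96)
print(SubPixelUpsampler()(x).shape)  # torch.Size([1, 3, 384, 384])
```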
The mathematical representation of the network model is as follows:

$$F_1 = f_{CB}(F_0)$$
$$F_2 = f_{CB}(F_1)$$
$$F'_2 = f_{Pixel}(f_{DB\times N}(F_2)), \quad N = 1, 2, \ldots, n$$
$$F'_{1,2} = f_{Pixel}(f_{DB\times N}(H_{concat}(F_1, F'_2))), \quad N = 1, 2, \ldots, n$$
$$F_{HR} = f_{1\times 1}(H_{concat}(F_0, F'_{1,2})) + F_0$$

where $f_{CB}$ denotes the convolution module operation, $f_{DB\times N}$ denotes $N$ stacked dense residual block operations, $f_{Pixel}$ denotes the sub-pixel convolution up-sampling operation, $H_{concat}$ denotes the splicing of image features, $f_{1\times 1}$ denotes a $1\times 1$ convolution dimensionality-reduction operation, $F'_{1,2}$ is a feature map, and $F_{HR}$ is the resulting high-resolution image.
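The five equations map almost line-for-line onto a forward pass. The PyTorch sketch below is one reading of the structure under stated assumptions: the downward convolution modules are assumed to halve the spatial size with stride 2 so that the PixelShuffle steps line the scales back up, simple_block stands in for the dense residual blocks sketched later, and the fusion channel counts are guesses where the patent text is silent:

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # f_CB sketch: two 3x3 convolutions around a LeakyReLU; the stride-2
    # downsampling in the first convolution is an assumption.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1),
    )

def simple_block(ch):
    # Placeholder standing in for the dense residual block sketched later.
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True))

class HFFNet(nn.Module):
    def __init__(self, n_blocks=4, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, 64, 3, padding=1)       # -> F0 (64 ch)
        self.down1 = conv_block(64, 128)                 # -> F1 (128 ch)
        self.down2 = conv_block(128, 256)                # -> F2 (256 ch)
        self.body2 = nn.Sequential(*[simple_block(256) for _ in range(n_blocks)])
        self.fuse2 = nn.Conv2d(256, 256, 3, padding=1)   # 3x3 fusion
        self.up2 = nn.PixelShuffle(2)                    # f_Pixel: 256 -> 64 ch
        self.body1 = nn.Sequential(*[simple_block(192) for _ in range(n_blocks)])
        self.fuse1 = nn.Conv2d(192, 192, 3, padding=1)   # 3x3 fusion
        self.up1 = nn.PixelShuffle(2)                    # f_Pixel: 192 -> 48 ch
        self.reduce = nn.Conv2d(48 + 64, 64, 1)          # f_1x1
        self.tail = nn.Sequential(                       # sub-pixel reconstruction
            nn.Conv2d(64, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        f0 = self.head(x)
        f1 = self.down1(f0)
        f2 = self.down2(f1)
        f2p = self.up2(self.fuse2(self.body2(f2)))                        # F'2
        f12 = self.up1(self.fuse1(self.body1(torch.cat([f1, f2p], 1))))   # F'1,2
        hr = self.reduce(torch.cat([f0, f12], 1)) + f0                    # f_1x1(concat) + F0
        return self.tail(hr)

print(HFFNet()(torch.randn(1, 3, 96, 96)).shape)  # torch.Size([1, 3, 384, 384])
```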
The convolution module comprises two 3 × 3 convolution kernels and a LeakyReLU activation function: the channel count of the feature map is doubled by the first 3 × 3 convolution, after which the feature map passes through the LeakyReLU activation and then the second 3 × 3 convolution.
As shown in FIG. 4, the dense residual block comprises four basic residual modules, where the feature information from the first three basic residual modules is routed directly to the tail of the dense residual block and spliced with the output of the last residual module; finally these features are fused by a convolution. Compared with a plainly stacked residual structure, dense residual blocks improve the utilization of local residual information: feature information is directly connected inside each dense residual block, and residuals propagate through the block almost without loss, so the discriminative high-frequency features that benefit image reconstruction are preserved to the greatest extent.
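A minimal sketch of this dense residual block, assuming PyTorch; basic_block is a placeholder for the basic residual module described next, and the 1 × 1 fusion kernel and the outer skip connection are assumptions (the patent only says the features are "fused by convolution"):

```python
import torch
import torch.nn as nn

def basic_block(ch):
    # Placeholder for the basic residual module sketched below; kept minimal
    # so this block runs standalone.
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True))

class DenseResidualBlock(nn.Module):
    def __init__(self, channels, block=basic_block):
        super().__init__()
        self.blocks = nn.ModuleList([block(channels) for _ in range(4)])
        # 1x1 fusion convolution; the kernel size here is an assumption.
        self.fuse = nn.Conv2d(4 * channels, channels, 1)

    def forward(self, x):
        outs, out = [], x
        for blk in self.blocks:
            out = blk(out)
            outs.append(out)  # outputs of all four modules reach the tail
        # The first three modules' features are concatenated with the last
        # module's output and fused; adding x back is a further assumption.
        return self.fuse(torch.cat(outs, dim=1)) + x

print(DenseResidualBlock(64)(torch.randn(1, 64, 48, 48)).shape)
```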
As shown in FIG. 5, the basic residual module comprises two 3 × 3 convolution kernels, a LeakyReLU activation function, and a lightweight attention module: the feature map passes through the first 3 × 3 convolution, the LeakyReLU activation, and the second 3 × 3 convolution in turn, and the feature information obtained by the lightweight attention module is finally added to the feature map for output.
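A minimal sketch of the basic residual module, assuming PyTorch; how the attention output is "added to the feature map" is ambiguous in the text, so the residual arrangement below is one reading, and the ECA attention it would receive is sketched after the formulas below:

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels, attention=None):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Lightweight attention (e.g. the ECA layer sketched later);
        # Identity keeps this sketch runnable on its own.
        self.attention = attention if attention is not None else nn.Identity()

    def forward(self, x):
        # "Added to the feature map for output" is read here as a residual
        # connection around the attended features (an assumption).
        return x + self.attention(self.body(x))
```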
As shown in FIG. 6, the lightweight attention module uses the ECA model attention module, whose parameter matrix is:

$$W_k = \begin{pmatrix}
w^{1,1} & \cdots & w^{1,k} & 0 & \cdots & 0 \\
0 & w^{2,2} & \cdots & w^{2,k+1} & \cdots & 0 \\
\vdots & & \ddots & & \ddots & \vdots \\
0 & \cdots & 0 & w^{C,C-k+1} & \cdots & w^{C,C}
\end{pmatrix}$$

where $w$ denotes the weight parameter of each channel. The matrix holds $k \times C$ parameters, $k$ being the kernel size of a learnable one-dimensional convolution; this means only cross-channel attention among $k$ adjacent channels is considered, and $k$ is learned adaptively from the channel count of the input feature map:

$$k = \psi(C) = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{odd}$$

where $|t|_{odd}$ denotes the odd number nearest to $t$. A mapping exists between the channel dimension $C$ and the convolution kernel size $k$; the simplest is the linear mapping $\phi(k) = \gamma k - b$, but the representational capacity of a linear mapping for cross-channel interaction is limited, so common practice extends it to a non-linear mapping based on powers of two: $\phi(k) = 2^{\gamma k - b}$. The channel dimension $C$ and the kernel size $k$ can therefore be related through the formula above; the ECA model attention module sets $\gamma$ and $b$ to the constants 2 and 1, respectively. For example, $C = 64$ gives $k = |3.5|_{odd} = 3$, and $C = 256$ gives $k = |4.5|_{odd} = 5$.
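A sketch of the ECA layer with this adaptive kernel size, following the cited ECA-Net design (global average pooling, a shared one-dimensional convolution across channels, and a sigmoid gate); the class name and tensor-layout handling are illustrative:

```python
import math
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1  # nearest odd kernel size, e.g. C=64 -> k=3
        self.pool = nn.AdaptiveAvgPool2d(1)
        # A single shared 1-D convolution across channels: k parameters total.
        self.conv = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):
        y = self.pool(x)                              # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))  # 1-D conv over channels
        y = self.gate(y.transpose(1, 2).unsqueeze(-1))
        return x * y.expand_as(x)                     # ECA rescales channels

x = torch.randn(1, 64, 48, 48)
print(ECALayer(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```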
A traditional residual structure treats the residual information from every path equally, while human vision is more sensitive to information such as image edges and luminance. To reconstruct such information effectively, an attention mechanism is added to each residual block. The attention module must be as lightweight as possible so that it can be inserted into every residual module; in addition, to guarantee the reconstruction effect, it needs a large receptive field. To meet both goals, the ECA channel attention module is adopted to extract the attention information of the residual structure and is inserted into each basic residual module.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. An aerial image super-resolution reconstruction method based on a layered feature fusion network, characterized by comprising the following steps:
S1: acquiring high-resolution aerial images of different scenes and dividing them into training data and verification data;
S2: preprocessing the aerial images;
S3: building a network model, wherein the network model adopts a layered U-shaped structure, symmetric feature maps are fused within the U-shaped structure, and the channel count of the input feature maps reaches its maximum at the bottom layer of the U-shaped structure; the network model is trained with the training data of step S1 and verified with the verification data of step S1;
S4: reconstructing a new aerial image with the trained network model to obtain a high-resolution image.
2. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 1, wherein in step S1, 114 high-resolution, high-quality aerial images are selected from a public aerial photography dataset, of which 100 serve as training data and 14 as verification data.
3. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 2, wherein the scenes of the aerial images in step S1 include car parks, airports, residential areas, sports grounds, harbors, viaducts, farmlands, and highways.
4. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 3, wherein the preprocessing in step S2 specifically comprises:
the 100 aerial images used as training data are cut into small pictures with a resolution of 480 × 480, 8234 in total; before input into the network model, these are further cut to 96 × 96 and fed into the network in batches for training.
5. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 4, wherein the U-shaped structure is specifically:
the input image is converted by an initial convolution into a coarse feature map F_0; the coarse feature map passes through two downward convolution modules to obtain feature map F_1 and feature map F_2, the channel count of F_2 being increased to four times that of F_0; F_2 is then fed into the stacked dense residual blocks for feature extraction, and the output feature maps are fused into F'_2 by a 3 × 3 convolution; F'_2 and F_1 are symmetrically spliced and input into dense residual blocks again for high-frequency feature extraction; the final output is fused by a 3 × 3 convolution, spliced with the coarse feature map F_0, and input into an up-sampling reconstruction module to generate the high-resolution image.
6. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 5, wherein the mathematical representation of the network model is as follows:

$$F_1 = f_{CB}(F_0)$$
$$F_2 = f_{CB}(F_1)$$
$$F'_2 = f_{Pixel}(f_{DB\times N}(F_2)), \quad N = 1, 2, \ldots, n$$
$$F'_{1,2} = f_{Pixel}(f_{DB\times N}(H_{concat}(F_1, F'_2))), \quad N = 1, 2, \ldots, n$$
$$F_{HR} = f_{1\times 1}(H_{concat}(F_0, F'_{1,2})) + F_0$$

where $f_{CB}$ denotes the convolution module operation, $f_{DB\times N}$ denotes $N$ stacked dense residual block operations, $f_{Pixel}$ denotes the sub-pixel convolution up-sampling operation, $H_{concat}$ denotes the splicing of image features, $f_{1\times 1}$ denotes a $1\times 1$ convolution dimensionality-reduction operation, $F'_{1,2}$ is a feature map, and $F_{HR}$ is the resulting high-resolution image.
7. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 6, wherein the convolution module comprises two 3 × 3 convolution kernels and a LeakyReLU activation function: the channel count of the feature map is doubled by the first 3 × 3 convolution, after which the feature map passes through the LeakyReLU activation and then the second 3 × 3 convolution.
8. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 7, wherein the dense residual block comprises four basic residual modules, the feature information from the first three basic residual modules being routed directly to the tail of the dense residual block and spliced with the output of the last residual module; finally these features are fused by a convolution.
9. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 8, wherein the basic residual module comprises two 3 × 3 convolution kernels, a LeakyReLU activation function, and a lightweight attention module: the feature map passes through the first 3 × 3 convolution, the LeakyReLU activation, and the second 3 × 3 convolution in turn, and the feature information obtained by the lightweight attention module is finally added to the feature map for output.
10. The aerial image super-resolution reconstruction method based on the layered feature fusion network of claim 9, wherein the lightweight attention module uses the ECA model attention module, whose parameter matrix is:

$$W_k = \begin{pmatrix}
w^{1,1} & \cdots & w^{1,k} & 0 & \cdots & 0 \\
0 & w^{2,2} & \cdots & w^{2,k+1} & \cdots & 0 \\
\vdots & & \ddots & & \ddots & \vdots \\
0 & \cdots & 0 & w^{C,C-k+1} & \cdots & w^{C,C}
\end{pmatrix}$$

where $w$ denotes the weight parameter of each channel. The matrix holds $k \times C$ parameters, $k$ being the kernel size of a learnable one-dimensional convolution; this means only cross-channel attention among $k$ adjacent channels is considered, and $k$ is learned adaptively from the channel count of the input feature map:

$$k = \psi(C) = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{odd}$$

where $|t|_{odd}$ denotes the odd number nearest to $t$. A mapping exists between the channel dimension $C$ and the convolution kernel size $k$; the simplest is the linear mapping $\phi(k) = \gamma k - b$, but the representational capacity of a linear mapping for cross-channel interaction is limited, so common practice extends it to a non-linear mapping based on powers of two: $\phi(k) = 2^{\gamma k - b}$. The channel dimension $C$ and the kernel size $k$ can therefore be related through the formula above; the ECA model attention module sets $\gamma$ and $b$ to the constants 2 and 1, respectively.
CN202110111223.5A 2021-01-27 2021-01-27 Aerial image super-resolution reconstruction method based on layered feature fusion network Pending CN112991167A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110111223.5A CN112991167A (en) 2021-01-27 2021-01-27 Aerial image super-resolution reconstruction method based on layered feature fusion network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110111223.5A CN112991167A (en) 2021-01-27 2021-01-27 Aerial image super-resolution reconstruction method based on layered feature fusion network

Publications (1)

Publication Number Publication Date
CN112991167A 2021-06-18

Family

ID=76345738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110111223.5A Pending CN112991167A (en) 2021-01-27 2021-01-27 Aerial image super-resolution reconstruction method based on layered feature fusion network

Country Status (1)

Country Link
CN (1) CN112991167A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538244A (en) * 2021-07-23 2021-10-22 西安电子科技大学 Lightweight super-resolution reconstruction method based on adaptive weight learning
CN114004784A (en) * 2021-08-27 2022-02-01 西安市第三医院 Method for detecting bone condition based on CT image and electronic equipment
CN117934286A (en) * 2024-03-21 2024-04-26 西华大学 Lightweight image super-resolution method and device and electronic equipment thereof
CN117934286B (en) * 2024-03-21 2024-06-04 西华大学 Lightweight image super-resolution method and device and electronic equipment thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292259A (en) * 2020-01-14 2020-06-16 西安交通大学 Deep learning image denoising method integrating multi-scale and attention mechanism
CN111476719A (en) * 2020-05-06 2020-07-31 Oppo广东移动通信有限公司 Image processing method, image processing device, computer equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292259A (en) * 2020-01-14 2020-06-16 西安交通大学 Deep learning image denoising method integrating multi-scale and attention mechanism
CN111476719A (en) * 2020-05-06 2020-07-31 Oppo广东移动通信有限公司 Image processing method, image processing device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIE LIU ET AL.: "Residual Feature Aggregation Network for Image Super-Resolution", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
PAK LUN KEVIN DING ET AL.: "Deep residual dense U-Net for resolution enhancement in accelerated MRI acquisition", SPIE Medical Imaging *
QILONG WANG ET AL.: "ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538244A (en) * 2021-07-23 2021-10-22 西安电子科技大学 Lightweight super-resolution reconstruction method based on adaptive weight learning
CN113538244B (en) * 2021-07-23 2023-09-01 西安电子科技大学 Lightweight super-resolution reconstruction method based on self-adaptive weight learning
CN114004784A (en) * 2021-08-27 2022-02-01 西安市第三医院 Method for detecting bone condition based on CT image and electronic equipment
CN114004784B (en) * 2021-08-27 2022-06-03 西安市第三医院 Method for detecting bone condition based on CT image and electronic equipment
CN117934286A (en) * 2024-03-21 2024-04-26 西华大学 Lightweight image super-resolution method and device and electronic equipment thereof
CN117934286B (en) * 2024-03-21 2024-06-04 西华大学 Lightweight image super-resolution method and device and electronic equipment thereof

Similar Documents

Publication Publication Date Title
CN111598778B (en) Super-resolution reconstruction method for insulator image
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
CN111179167B (en) Image super-resolution method based on multi-stage attention enhancement network
Zhang et al. One-two-one networks for compression artifacts reduction in remote sensing
CN109064396A (en) A kind of single image super resolution ratio reconstruction method based on depth ingredient learning network
CN111145097B (en) Image processing method, device and system
EP4198875A1 (en) Image fusion method, and training method and apparatus for image fusion model
Rivadeneira et al. Thermal Image Super-resolution: A Novel Architecture and Dataset.
CN113592736B (en) Semi-supervised image deblurring method based on fused attention mechanism
CN112991167A (en) Aerial image super-resolution reconstruction method based on layered feature fusion network
CN113837938A (en) Super-resolution method for reconstructing potential image based on dynamic vision sensor
CN113096239B (en) Three-dimensional point cloud reconstruction method based on deep learning
CN113077505A (en) Optimization method of monocular depth estimation network based on contrast learning
CN112001843A (en) Infrared image super-resolution reconstruction method based on deep learning
CN112949636A (en) License plate super-resolution identification method and system and computer readable medium
CN113538243A (en) Super-resolution image reconstruction method based on multi-parallax attention module combination
CN116957931A (en) Method for improving image quality of camera image based on nerve radiation field
CN116486074A (en) Medical image segmentation method based on local and global context information coding
CN114140357B (en) Multi-temporal remote sensing image cloud zone reconstruction method based on cooperative attention mechanism
CN115272438A (en) High-precision monocular depth estimation system and method for three-dimensional scene reconstruction
CN113610707B (en) Video super-resolution method based on time attention and cyclic feedback network
CN111353982B (en) Depth camera image sequence screening method and device
CN117237207A (en) Ghost-free high dynamic range light field imaging method for dynamic scene
CN116385265B (en) Training method and device for image super-resolution network
CN107392986A (en) A kind of image depth rendering intent based on gaussian pyramid and anisotropic filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination