CN113421188B - Method, system, device and storage medium for image equalization enhancement - Google Patents

Method, system, device and storage medium for image equalization enhancement

Info

Publication number
CN113421188B
Authority
CN
China
Prior art keywords
image
low
frequency branch
resolution
module
Prior art date
Legal status
Active
Application number
CN202110681163.0A
Other languages
Chinese (zh)
Other versions
CN113421188A
Inventor
金龙存 (Jin Longcun)
卢盛林 (Lu Shenglin)
Current Assignee
Guangdong OPT Machine Vision Co Ltd
Original Assignee
Guangdong OPT Machine Vision Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong OPT Machine Vision Co Ltd filed Critical Guangdong OPT Machine Vision Co Ltd
Priority to CN202110681163.0A
Publication of CN113421188A
Application granted
Publication of CN113421188B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T3/4076 Super resolution, i.e. output image resolution higher than sensor resolution by iteratively correcting the provisional high resolution image using the original low-resolution image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method, a system, a device and a storage medium for image equalization enhancement. The method comprises: collecting a low-resolution image to be processed and processing it through an image super-resolution model to generate a high-resolution image; and collecting training samples comprising low-resolution image samples and high-resolution image samples, and establishing the image super-resolution model from the collected training samples based on a preset loss function and the high-resolution image samples. The invention restores low-resolution images to high-resolution images through the trained image super-resolution model, fuses the advantages of evaluation-index-oriented methods and perception-driven methods, further improves the restoration effect of single-image super-resolution, and can be widely applied in the technical field of image processing.

Description

Method, system, device and storage medium for image equalization enhancement
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to a method, a system, a device and a storage medium for image equalization enhancement.
Background
With the development of technology, people's requirements for image quality continue to rise, and obtaining high-quality images has become an indispensable research direction. Single-image super-resolution, which restores a high-resolution image from a single low-resolution image, has received wide attention in computer vision research. The technology has important practical significance in many fields. In security surveillance, for example, limitations of the imaging equipment and overly distant targets often yield low-resolution surveillance images that are hard to recognize, which in turn makes it difficult to mine information from them. Single-image super-resolution can effectively raise the resolution of such images and refine their texture information, thereby improving the quality of surveillance images. In the medical field, the images produced by medical instruments are often of low resolution, yet doctors must make diagnoses from them; super-resolving these low-resolution images improves their quality and helps doctors diagnose better. Old photographs, limited by the technology of their time, often have unsatisfactory resolution by today's standards, seriously affecting the viewing experience; single-image super-resolution can restore them to a certain extent and improve their quality. In addition, single-image super-resolution can be used in other fields such as remote sensing imagery and target recognition.
Current single-image super-resolution methods can be divided into three categories: interpolation-based methods, reconstruction-based methods and learning-based methods. Interpolation-based methods, such as bicubic interpolation, are fast and straightforward, but they lose image detail and their super-resolution results are unsatisfactory. Reconstruction-based methods can recover some sharp details by restricting the possible solution space with sophisticated prior knowledge; however, their performance drops significantly as the scale factor increases, and they tend to be time-consuming. Learning-based methods typically use machine learning algorithms to obtain a model of the mapping from low-resolution to high-resolution images, and have attracted considerable attention for their excellent performance and fast computation.
In recent years, with the development and wide application of deep learning, many deep-learning-based methods have been proposed for single-image super-resolution, and they show great superiority over other methods. Earlier deep-learning-based methods mainly used the mean-square-error loss as the optimization objective to obtain higher evaluation indices. However, methods that target evaluation indices typically produce severely over-smoothed edges, so the resulting images lack high-frequency detail and have poor perceptual quality. Perception-driven methods were therefore proposed to improve the visual quality of super-resolution results; yet while they produce realistic details, they also introduce unpleasant noise.
Disclosure of Invention
The invention aims to provide a method, a system, a device and a storage medium for image equalization enhancement that fuse the advantages of evaluation-index-oriented methods and perception-driven methods, further improve the restoration effect of single-image super-resolution, and allow single-image super-resolution tasks to be handled in various complex scenes.
To achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, a method for image equalization enhancement includes:
the training method of the image super-resolution model comprises the following steps:
collecting a training sample, the training sample comprising a low resolution image sample and a high resolution image sample;
extracting image features from the low-resolution image sample based on a sharing module to generate shallow image features;
combining a left-right asymmetric super-resolution network with a global guidance mechanism to further extract global feature information and detail feature information from the shallow image features;
generating an attention mask for the shallow image features based on a mask network;
adaptively reconstructing the global feature information and the detail feature information using the attention mask to reconstruct a high-resolution image;
converging the reconstructed high-resolution image toward the high-resolution image sample through back-propagation based on a preset loss function, and establishing an image super-resolution model;
and processing the low-resolution image through the image super-resolution model, and outputting a high-resolution image.
Optionally, the collecting a training sample, the training sample containing a low resolution image sample and a high resolution image sample, includes:
collecting high-resolution image samples, and downsampling the high-resolution image samples by adopting an image degradation algorithm to generate low-resolution image samples;
pairs of training samples are created using the high resolution image samples and the generated low resolution image samples.
Optionally, the sharing module, the left-right asymmetric super-resolution network and the mask network all use residual dense connection blocks as basic blocks to extract multi-level feature information of the low-resolution image.
Optionally, the sharing module is provided with 2 residual dense connection blocks;
the extracting image features from the low-resolution image sample based on the sharing module to generate shallow image features comprises:
passing the low-resolution image sample through one convolutional layer to obtain a shallow feature map before it is input into the sharing module;
inputting the shallow feature map into the sharing module, extracting image features and generating shallow image features;
the obtained shallow image features are shared by the left-right asymmetric super-resolution network and the mask network.
Optionally, the left-right asymmetric super-resolution network consists of a low-frequency branch and a high-frequency branch, the low-frequency branch being used for extracting global feature information and the high-frequency branch for extracting detail feature information;
the further extracting global feature information and detail feature information from the shallow image features through the global guidance mechanism in combination with the left-right asymmetric super-resolution network comprises:
the left-right asymmetric super-resolution network extracts global feature information and detail feature information from the shallow image features through the low-frequency branch and the high-frequency branch;
the global guidance mechanism connects in series the global feature information extracted by the low-frequency branch with the detail feature information obtained by the lower-layer module of the high-frequency branch;
the serially connected feature information continues to be input into the higher-layer module of the high-frequency branch, so that the global feature information is injected into the high-frequency branch to help guide the high-frequency branch to further acquire detail feature information;
after the low-frequency branch and the high-frequency branch extract the global feature information and the detail feature information, the global feature image and the detail feature image are reconstructed through their respective up-sampling and reconstruction layers.
Optionally, the low-frequency branch consists of a deep feature extraction module, an up-sampling module and a reconstruction module, wherein the deep feature extraction module consists of 15 residual dense connection blocks, the up-sampling module consists of 1 convolutional layer and 1 nearest-neighbor up-sampling layer, and the reconstruction module consists of 1 convolutional layer;
the high-frequency branch consists of a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
Optionally, the mask network consists of a deep feature extraction module, an up-sampling module and a reconstruction module, wherein the deep feature extraction module consists of 5 residual dense connection blocks, the up-sampling module consists of 1 convolutional layer and 1 nearest-neighbor up-sampling layer, and the reconstruction module consists of 1 convolutional layer;
the generating an attention mask for the shallow image features based on a mask network comprises:
the mask network further extracts deep features from the shallow image features through the 5 residual dense blocks;
the further extracted deep features generate a deep feature map through the up-sampling layer and the reconstruction layer;
the deep feature map then generates an attention mask through a sigmoid function.
Optionally, the attention mask is a probability matrix, which represents the contribution degree of each pixel in the detail characteristic image reconstructed by the high-frequency branch to the final output image;
said adaptively reconstructing said global feature information and detailed feature information using said attention mask, reconstructing a high resolution image, comprising:
the attention mask is combined with the global feature image and the detail feature image reconstructed by the low-frequency branch and the high-frequency branch, and the results of the two branches are adaptively merged;
and outputting the final reconstructed high-resolution image through the final convolution layer.
Optionally, the preset loss function comprises a loss function of the low-frequency branch and a loss function of the high-frequency branch of the left-right asymmetric super-resolution network;
the loss function of the low-frequency branch computes the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch consists of the loss function of the discriminator and the loss function of the generator; the loss function of the generator is composed of a mean absolute error, a perceptual loss and an adversarial loss, computing the mean absolute error, perceptual loss and adversarial loss between the detail feature image generated by the high-frequency branch and the high-resolution image sample.
In a second aspect, a system for image equalization enhancement is provided, comprising:
the training module is used for training the image super-resolution model and comprises the following steps:
the sampling submodule is used for collecting training samples, and the training samples comprise low-resolution image samples and high-resolution image samples;
a first extraction submodule, configured to extract image features from the low-resolution image samples based on the sharing module and generate shallow image features;
a second extraction submodule, configured to combine the left-right asymmetric super-resolution network with the global guidance mechanism to further extract global feature information and detail feature information from the shallow image features;
a generating submodule, configured to generate an attention mask for the shallow image features based on a mask network;
a reconstruction submodule, configured to adaptively reconstruct the global feature information and the detail feature information using the attention mask and reconstruct a high-resolution image;
a model building submodule, configured to converge the reconstructed high-resolution image toward the high-resolution image sample through back-propagation based on a preset loss function and establish an image super-resolution model;
and the output module is used for processing the low-resolution image through the image super-resolution model and outputting a high-resolution image.
Optionally, the sampling submodule includes:
the sampling unit is used for downsampling the high-resolution image sample by adopting an image degradation algorithm to generate a low-resolution image sample;
and the sample establishing unit is used for establishing paired training samples by using the high-resolution image samples and the generated low-resolution image samples.
Optionally, the sharing module, the left-right asymmetric super-resolution network and the mask network all use residual dense connection blocks as basic blocks to extract multi-level feature information of the low-resolution image.
Optionally, the sharing module is provided with 2 residual dense connection blocks, and the first extraction submodule comprises:
a first input subunit, configured to pass the low-resolution image sample through one convolutional layer to obtain a shallow feature map before it is input into the sharing module;
a second input subunit, configured to input the shallow feature map into the sharing module, extract image features and generate shallow image features;
and a sharing subunit, configured to share the obtained shallow image features with the left-right asymmetric super-resolution network and the mask network.
Optionally, the left-right asymmetric super-resolution network consists of a low-frequency branch used for extracting global feature information and a high-frequency branch used for extracting detail feature information, and the second extraction submodule comprises:
a first extraction subunit, configured for the left-right asymmetric super-resolution network to extract global feature information and detail feature information from the shallow image features through the low-frequency branch and the high-frequency branch;
a series-connection subunit, configured for the global guidance mechanism to connect in series the global feature information extracted by the low-frequency branch with the detail feature information obtained by the lower-layer module of the high-frequency branch;
a second extraction subunit, configured to continue inputting the serially connected feature information into the higher-layer module of the high-frequency branch, so that the global feature information is injected into the high-frequency branch to help guide the high-frequency branch to further acquire detail feature information;
and a first output subunit, configured to reconstruct the global feature image and the detail feature image through the respective up-sampling and reconstruction layers after the low-frequency branch and the high-frequency branch extract the global feature information and the detail feature information.
Optionally, the low-frequency branch consists of a deep feature extraction module, an up-sampling module and a reconstruction module, wherein the deep feature extraction module consists of 15 residual dense connection blocks, the up-sampling module consists of 1 convolutional layer and 1 nearest-neighbor up-sampling layer, and the reconstruction module consists of 1 convolutional layer;
the high-frequency branch consists of a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
Optionally, the mask network consists of a deep feature extraction module, an up-sampling module and a reconstruction module, wherein the deep feature extraction module consists of 5 residual dense connection blocks, the up-sampling module consists of 1 convolutional layer and 1 nearest-neighbor up-sampling layer, and the reconstruction module consists of 1 convolutional layer;
the generating submodule comprises:
a third extraction subunit, configured for the mask network to further extract deep features from the shallow image features through the 5 residual dense blocks;
a second output subunit, configured to generate a deep feature map from the further extracted deep features through the up-sampling layer and the reconstruction layer;
and an attention generation subunit, configured to generate an attention mask from the deep feature map through a sigmoid function.
Optionally, the attention mask is a probability matrix, which represents the contribution degree of each pixel in the detail characteristic image reconstructed by the high-frequency branch to the final output image;
the reconstruction submodule includes:
a reconstruction subunit, configured to combine the attention mask with the global feature image and the detail feature image reconstructed by the low-frequency branch and the high-frequency branch, adaptively merging the results of the two branches;
and a third output subunit, configured to output the final reconstructed high-resolution image by passing the merged result through the last convolutional layer.
Optionally, the preset loss function comprises a loss function of the low-frequency branch and a loss function of the high-frequency branch of the left-right asymmetric super-resolution network;
the loss function of the low-frequency branch computes the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch consists of the loss function of the discriminator and the loss function of the generator; the loss function of the generator is composed of a mean absolute error, a perceptual loss and an adversarial loss, computing the mean absolute error, perceptual loss and adversarial loss between the detail feature image generated by the high-frequency branch and the high-resolution image sample.
In a third aspect, an apparatus is provided comprising a memory for storing at least one program and a processor for loading the at least one program to perform the method of image equalization enhancement as described above.
In a fourth aspect, a storage medium is provided storing a processor-executable program which, when executed by a processor, is adapted to carry out the method of image equalization enhancement as described above.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
The image super-resolution model, established from training samples comprising low-resolution image samples and high-resolution image samples together with a preset loss function, is used to perform resolution processing on the low-resolution image to be processed. This accurately and efficiently restores the low-resolution image to a high-resolution image, fuses the advantages of evaluation-index-oriented methods and perception-driven methods, and further improves the restoration effect of single-image super-resolution.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
The structures, proportions, sizes and the like shown in this specification are provided only for illustration and description and do not limit the scope of the invention, which is defined by the claims; any structural modification, change of proportion or adjustment of size that does not affect the efficacy or purpose achievable by the invention falls within that scope.
FIG. 1 is a flowchart illustrating steps of a method for image equalization enhancement according to an embodiment of the present invention;
FIG. 2 is a block diagram of a system for image equalization enhancement according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an image super-resolution model in an embodiment of the invention;
FIG. 4 is a schematic diagram of the structure of the residual dense connecting blocks used in the shared module, the asymmetric left-right superdivider network, and the mask network according to the embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is apparent that the embodiments described below are only some embodiments of the present invention, not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1, the present embodiment provides a method of image equalization enhancement, which comprises the following steps:
s1, collecting a training sample, wherein the training sample comprises a low-resolution image sample and a high-resolution image sample;
s2, extracting image features from the low-resolution image sample based on a sharing module to generate shallow image features;
S3, combining the left-right asymmetric super-resolution network with a global guidance mechanism to further extract global feature information and detail feature information from the shallow image features;
S4, generating an attention mask for the shallow image features based on a mask network;
S5, adaptively reconstructing the global feature information and the detail feature information using the attention mask to reconstruct a high-resolution image;
S6, converging the reconstructed high-resolution image toward the high-resolution image sample through back-propagation based on a preset loss function, and establishing an image super-resolution model;
s7, processing the low-resolution image through the image super-resolution model, and outputting a high-resolution image.
The steps S1 to S6 are steps included in the training method of the image super-resolution model.
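For orientation, the following PyTorch-style sketch wires steps S2 to S5 together at a high level. It is an illustrative skeleton under assumed module and variable names (the branch, sharing and mask modules are passed in), not the patented implementation, and it omits the global guidance between branches described in step S3.

```python
import torch
import torch.nn as nn

class BalancedSRNet(nn.Module):
    """Illustrative skeleton of the described pipeline (not the patented code)."""
    def __init__(self, shared, low_freq, high_freq, mask_net):
        super().__init__()
        self.head = nn.Conv2d(3, 64, 3, padding=1)  # first conv before the sharing module
        self.shared = shared        # sharing module: 2 RRDBs (S2)
        self.low_freq = low_freq    # global-information branch (S3)
        self.high_freq = high_freq  # detail-information generator (S3)
        self.mask_net = mask_net    # attention-mask network (S4)
        self.tail = nn.Conv2d(3, 3, 3, padding=1)   # final conv (S5)

    def forward(self, lr_image):
        f0 = self.head(lr_image)
        f_sf = self.shared(f0)                  # shallow image features
        # (global guidance between the branches, step S3 / Example 5 B3, omitted here)
        sr_low = self.low_freq(f_sf)            # global feature image
        sr_high = self.high_freq(f_sf)          # detail feature image
        mask = self.mask_net(f_sf)              # attention mask in (0, 1)
        fused = (1 - mask) * sr_low + mask * sr_high  # adaptive reconstruction
        return self.tail(fused)
```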
Optionally, the step S1 includes:
s11, collecting high-resolution image samples, and downsampling the high-resolution image samples by adopting an image degradation algorithm to generate low-resolution image samples;
S12, establishing pairs of training samples by using the high-resolution image samples and the generated low-resolution image samples.
Optionally, the sharing module, the left-right asymmetric super-resolution network and the mask network all use residual dense connection blocks as basic blocks to extract multi-level feature information of the low-resolution image.
Optionally, the sharing module is provided with 2 residual dense connection blocks, and the step S2 comprises:
S21, passing the low-resolution image sample through one convolutional layer to obtain a shallow feature map before it is input into the sharing module;
S22, inputting the shallow feature map into the sharing module, extracting image features and generating shallow image features;
S23, the obtained shallow image features are shared by the left-right asymmetric super-resolution network and the mask network.
Optionally, the left-right asymmetric super-resolution network consists of a low-frequency branch used for extracting global feature information and a high-frequency branch used for extracting detail feature information, and the step S3 comprises:
S31, the left-right asymmetric super-resolution network extracts global feature information and detail feature information from the shallow image features through the low-frequency branch and the high-frequency branch;
S32, the global guidance mechanism connects in series the global feature information extracted by the low-frequency branch with the detail feature information obtained by the lower-layer module of the high-frequency branch;
S33, the serially connected feature information continues to be input into the higher-layer module of the high-frequency branch, so that the global feature information is injected into the high-frequency branch to help guide the high-frequency branch to further acquire detail feature information;
S34, after the low-frequency branch and the high-frequency branch extract the global feature information and the detail feature information, the global feature image and the detail feature image are reconstructed through the respective up-sampling and reconstruction layers.
Optionally, the low-frequency branch consists of a deep feature extraction module, an up-sampling module and a reconstruction module, wherein the deep feature extraction module consists of 15 residual dense connection blocks, the up-sampling module consists of 1 convolutional layer and 1 nearest-neighbor up-sampling layer, and the reconstruction module consists of 1 convolutional layer;
the high-frequency branch consists of a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
Optionally, the mask network consists of a deep feature extraction module, an up-sampling module and a reconstruction module, wherein the deep feature extraction module consists of 5 residual dense connection blocks, the up-sampling module consists of 1 convolutional layer and 1 nearest-neighbor up-sampling layer, and the reconstruction module consists of 1 convolutional layer;
The step S4 includes:
S41, the mask network further extracts deep features from the shallow image features through the 5 residual dense blocks;
S42, the further extracted deep features generate a deep feature map through the up-sampling layer and the reconstruction layer;
S43, the deep feature map generates an attention mask through a sigmoid function.
Optionally, the attention mask is a probability matrix representing the extent to which each pixel in the detail feature image reconstructed by the high frequency branch contributes to the final output image.
The step S5 includes:
S51, combining the attention mask with the global feature image and the detail feature image reconstructed by the low-frequency branch and the high-frequency branch, adaptively merging the results of the two branches;
S52, outputting the final reconstructed high-resolution image by passing the merged result through the last convolutional layer.
Optionally, the preset loss function comprises a loss function of the low-frequency branch and a loss function of the high-frequency branch of the left-right asymmetric super-resolution network;
the loss function of the low-frequency branch computes the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch consists of the loss function of the discriminator and the loss function of the generator; the loss function of the generator is composed of a mean absolute error, a perceptual loss and an adversarial loss, computing the mean absolute error, perceptual loss and adversarial loss between the detail feature image generated by the high-frequency branch and the high-resolution image sample.
According to the image equalization enhancement method provided by this embodiment of the invention, the image super-resolution model established from low-resolution image samples, high-resolution image samples and a preset loss function is used to perform resolution processing on the low-resolution image to be processed. This accurately and efficiently restores the low-resolution image to a high-resolution image, fuses the advantages of evaluation-index-oriented methods and perception-driven methods, and further improves the restoration effect of single-image super-resolution.
Example 2
As shown in fig. 2, this embodiment provides a system for image equalization enhancement, which can implement the method provided in embodiment 1 above. The system comprises:
training module 10, is used for training the super-resolution model of image, training module 10 includes:
a sampling sub-module 11 for collecting training samples, the training samples comprising low resolution image samples and high resolution image samples;
a first extraction sub-module 12, configured to extract image features from the low-resolution image samples based on a sharing module and generate shallow image features;
a second extraction sub-module 13, configured to combine the left-right asymmetric super-resolution network with the global guidance mechanism to further extract global feature information and detail feature information from the shallow image features;
a generation sub-module 14, configured to generate an attention mask for the shallow image features based on a mask network;
a reconstruction sub-module 15, configured to adaptively reconstruct the global feature information and the detail feature information using the attention mask and reconstruct a high-resolution image;
a model building sub-module 16, configured to converge the reconstructed high-resolution image toward the high-resolution image sample through back-propagation based on a preset loss function and establish an image super-resolution model;
and an output module 20, configured to process the low resolution image through the image super resolution model and output a high resolution image.
Optionally, the sampling submodule 11 includes:
the sampling unit is used for downsampling the high-resolution image sample by adopting an image degradation algorithm to generate a low-resolution image sample;
and the sample establishing unit is used for establishing paired training samples by using the high-resolution image samples and the generated low-resolution image samples.
Optionally, the sharing module, the left-right asymmetric super-resolution network and the mask network all use residual dense connection blocks as basic blocks to extract multi-level feature information of the low-resolution image.
Optionally, the sharing module is provided with 2 residual dense connection blocks, and the first extraction sub-module 12 comprises:
a first input subunit, configured to pass the low-resolution image sample through one convolutional layer to obtain a shallow feature map before it is input into the sharing module;
a second input subunit, configured to input the shallow feature map into the sharing module, extract image features and generate shallow image features;
and a sharing subunit, configured to share the obtained shallow image features with the left-right asymmetric super-resolution network and the mask network.
Optionally, the left-right asymmetric super-resolution network consists of a low-frequency branch for extracting global feature information and a high-frequency branch for extracting detail feature information, and the second extraction sub-module 13 comprises:
a first extraction subunit, configured for the left-right asymmetric super-resolution network to extract global feature information and detail feature information from the shallow image features through the low-frequency branch and the high-frequency branch;
a series-connection subunit, configured for the global guidance mechanism to connect in series the global feature information extracted by the low-frequency branch with the detail feature information obtained by the lower-layer module of the high-frequency branch;
a second extraction subunit, configured to continue inputting the serially connected feature information into the higher-layer module of the high-frequency branch, so that the global feature information is injected into the high-frequency branch to help guide the high-frequency branch to further acquire detail feature information;
and a first output subunit, configured to reconstruct the global feature image and the detail feature image through the respective up-sampling and reconstruction layers after the low-frequency branch and the high-frequency branch extract the global feature information and the detail feature information.
Optionally, the low-frequency branch consists of a deep feature extraction module, an up-sampling module and a reconstruction module, wherein the deep feature extraction module consists of 15 residual dense connection blocks, the up-sampling module consists of 1 convolutional layer and 1 nearest-neighbor up-sampling layer, and the reconstruction module consists of 1 convolutional layer;
the high-frequency branch consists of a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
Optionally, the mask network consists of a deep feature extraction module, an up-sampling module and a reconstruction module, wherein the deep feature extraction module consists of 5 residual dense connection blocks, the up-sampling module consists of 1 convolutional layer and 1 nearest-neighbor up-sampling layer, and the reconstruction module consists of 1 convolutional layer;
the generating sub-module 14 includes:
a third extraction subunit, configured for the mask network to further extract deep features from the shallow image features through the 5 residual dense blocks;
the second output subunit is used for generating a deep feature map through the up-sampling layer and the reconstruction layer by the deep features extracted further;
and the attention generation subunit is used for generating an attention mask by utilizing the sigmoid function through the deep feature map.
Optionally, the attention mask is a probability matrix, which represents the contribution degree of each pixel in the detail characteristic image reconstructed by the high-frequency branch to the final output image;
the reconstruction sub-module 15 comprises:
a reconstruction subunit, configured to reconstruct a global feature image and a detail feature image by combining the attention mask with the high-frequency branch and the low-frequency branch, and adaptively combine the results of the high-frequency branch and the low-frequency branch;
And the third output subunit is used for outputting the final reconstructed high-resolution image from the combined result through the final layer of convolution layer.
Optionally, the preset loss function comprises a loss function of the low-frequency branch and a loss function of the high-frequency branch of the left-right asymmetric super-resolution network;
the loss function of the low-frequency branch computes the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch consists of the loss function of the discriminator and the loss function of the generator; the loss function of the generator is composed of a mean absolute error, a perceptual loss and an adversarial loss, computing the mean absolute error, perceptual loss and adversarial loss between the detail feature image generated by the high-frequency branch and the high-resolution image sample.
The image equalization enhancement system provided by this embodiment of the invention uses the image super-resolution model established from low-resolution image samples, high-resolution image samples and a preset loss function to perform resolution processing on the low-resolution image to be processed. This accurately and efficiently restores the low-resolution image to a high-resolution image, fuses the advantages of evaluation-index-oriented methods and perception-driven methods, and further improves the restoration effect of single-image super-resolution.
Example 3
The present embodiment provides an apparatus including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the steps of the method of image equalization enhancement as described in embodiment 1 above.
Example 4
A storage medium having stored therein a program executable by a processor for performing the steps of the method of image equalization enhancement as described in embodiment 1 when executed by the processor.
Example 5
Referring to fig. 3 to 4, a method for image equalization enhancement specifically includes the following steps:
A. collecting a training sample, the training sample comprising a low resolution image sample and a high resolution image sample;
B. establishing an image super-resolution model according to the acquired training samples;
C. supervising the network output with the high-resolution images in the training samples, setting a loss function, and training the network end to end; after a certain number of iterations, the network parameters are updated, and the network is trained until convergence;
D. and inputting the low-resolution image to be restored into the trained network model, thereby outputting the high-resolution image.
Wherein, the specific implementation scheme of the step A is as follows:
the disclosed large-scale image data set DIV2K is obtained, wherein the DIV2K comprises 800, 100 and 100 images with 2K resolution and is respectively used as a training data set, a verification set and a test set. And performing double three times downsampling on the original high-resolution image by 4 times by using an 'im size' function of MATLAB to obtain a corresponding low-resolution image, and forming paired training data. A horizontal or vertical flip, a 90 ° rotation is used as a way of data enhancement.
The specific implementation scheme of the step B is as follows:
B1. A low-resolution image is selected, and an image block of size 32×32 is randomly cropped from it as the network input. A 3×3 convolutional layer $H_{LR}$ extracts a shallow feature map $F_0$ from the input low-resolution image $I_{LR}$; the resulting feature map contains 64 channels and has the same spatial size as the input image. 2 residual dense connection blocks (RRDBs) are then used as the sharing module $H_{SM}$ to further extract shallow features from $F_0$. As shown in fig. 4, each residual dense connection block contains 3 dense connection blocks; each dense connection block consists of 5 convolutional layers with a channel growth rate of 32, and the output of each convolutional layer is passed through multiple skip connections to the subsequent convolutional layers in the dense connection block as additional input. This step can be expressed as:
$$F_0 = H_{LR}(I_{LR})$$
$$F_{SF} = H_{SM}(F_0)$$
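A minimal sketch of the residual dense connection block described above (3 dense blocks of 5 convolutions each, growth rate 32), written in PyTorch along the lines of common RRDB implementations; the LeakyReLU slope and the 0.2 residual-scaling constant are assumptions borrowed from those implementations, not values stated in the patent.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """5 conv layers; each layer receives all previous outputs (growth rate 32)."""
    def __init__(self, channels=64, growth=32):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth if i < 4 else channels,
                      3, padding=1)
            for i in range(5)
        )
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))  # skip connections via concat
            if i < 4:
                out = self.lrelu(out)
            feats.append(out)
        return x + 0.2 * feats[-1]  # local residual, scaled (assumed constant)

class RRDB(nn.Module):
    """Residual-in-residual dense block: 3 dense blocks plus an outer skip."""
    def __init__(self, channels=64, growth=32):
        super().__init__()
        self.blocks = nn.Sequential(*(DenseBlock(channels, growth) for _ in range(3)))

    def forward(self, x):
        return x + 0.2 * self.blocks(x)
```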
B2. Using the asymmetric super-resolution network, the high-frequency branch HFB and the low-frequency branch LFB respectively extract detail information and global information from the shallow features $F_{SF}$ output by the sharing module $H_{SM}$. The low-frequency branch consists of a deep feature extraction module composed of 15 RRDBs, an up-sampling module composed of a 3×3 convolutional layer and a nearest-neighbor up-sampling layer, and a reconstruction module that is a 3×3 convolutional layer. The high-frequency branch is a generative adversarial network composed of a generator and a discriminator. The generator is structured similarly to the low-frequency branch: a deep feature extraction module of 15 RRDBs, an up-sampling module of a 3×3 convolutional layer and a nearest-neighbor up-sampling layer, and a reconstruction module that is a 3×3 convolutional layer. The relativistic average discriminator (RaD) is selected as the discriminator of the high-frequency branch. The two branches effectively extract the global information and detail information of the image, from which the high-resolution image is further reconstructed.
B3. A global guidance mechanism is used: the output feature map of the 10th RRDB in the low-frequency branch LFB guides the subsequent feature extraction of the 5th RRDB in the high-frequency branch HFB. The output feature map of the 10th RRDB in the LFB is concatenated with the output feature map of the 5th RRDB in the HFB, the channels are compressed back to 64 by a 3×3 convolutional layer, and the result is then fed into the HFB for subsequent feature extraction. Injecting the global information of the low-frequency branch into the high-frequency branch in this way facilitates fine-grained reconstruction by the high-frequency branch.
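The guidance step can be sketched as follows, reusing the RRDB class from the sketch above: the 10th-RRDB output of the LFB is concatenated with the 5th-RRDB output of the HFB and compressed back to 64 channels by a 3×3 convolution. The class and argument names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GuidedHighFreqTrunk(nn.Module):
    """High-frequency feature trunk receiving global guidance from the LFB."""
    def __init__(self, num_rrdb=15, channels=64):
        super().__init__()
        self.lower = nn.Sequential(*(RRDB(channels) for _ in range(5)))  # RRDBs 1-5
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)      # back to 64 ch
        self.upper = nn.Sequential(*(RRDB(channels) for _ in range(num_rrdb - 5)))

    def forward(self, f_sf, lfb_guidance):
        # lfb_guidance: output feature map of the 10th RRDB in the low-frequency branch
        detail = self.lower(f_sf)                           # output of the 5th HFB RRDB
        fused = self.fuse(torch.cat([detail, lfb_guidance], dim=1))
        return self.upper(fused)                            # higher-layer HFB modules
```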
B4. A mask network is used to generate an attention mask for adaptively reconstructing the final output image, achieving a better balance between reconstruction accuracy and perceptual quality. The mask network takes the shallow features $F_{SF}$ output by the sharing module as input and consists of a deep feature extraction module composed of 5 RRDBs, an up-sampling module composed of a 3×3 convolutional layer and a nearest-neighbor up-sampling layer, a reconstruction module that is a 3×3 convolutional layer, and a sigmoid function. $F_{SF}$ passes through the 5 RRDBs, the up-sampling module and the 3×3 convolutional layer to obtain the feature map $W_M$; this sequence of operations is denoted $H_{mask}$. The process can be expressed as:
$$W_M = H_{mask}(F_{SF})$$
The feature map $W_M$ is then processed by the sigmoid function into a probability matrix, the attention mask $A$:
$$A = \sigma(W_M)$$
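A sketch of the mask network under the stated layout (5 RRDBs, a 3×3 convolution plus nearest-neighbor up-sampling, a 3×3 reconstruction convolution, then a sigmoid), again reusing the RRDB sketch above. The single-channel mask output is an assumption; the patent does not state the mask's channel count.

```python
import torch.nn as nn

class MaskNetwork(nn.Module):
    """Generates the attention mask A = sigmoid(H_mask(F_SF))."""
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.deep = nn.Sequential(*(RRDB(channels) for _ in range(5)))
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Upsample(scale_factor=scale, mode="nearest"),
        )
        # 1 output channel is an assumption; a 3-channel mask would also broadcast.
        self.reconstruct = nn.Conv2d(channels, 1, 3, padding=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, f_sf):
        w_m = self.reconstruct(self.upsample(self.deep(f_sf)))
        return self.sigmoid(w_m)  # probability matrix with values in (0, 1)
```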
Let $f_{high}(F_{SF})$ denote the feature map containing detail information reconstructed by the high-frequency branch HFB, and $f_{low}(F_{SF})$ the feature map containing global information reconstructed by the low-frequency branch LFB. The attention mask $A$ represents the degree to which each pixel of $f_{high}(F_{SF})$ contributes to the final output image. The feature maps of the low-frequency branch and the high-frequency branch are fused using the attention mask $A$ as follows:
$$I_y = (1-A)\cdot f_{low}(F_{SF}) + A\cdot f_{high}(F_{SF})$$
In this way, the mask network learns a weight for each pixel of the feature maps and adaptively combines the results of the high-frequency branch and the low-frequency branch; a final 3×3 convolutional layer then outputs the high-resolution image with 3 channels.
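The adaptive fusion itself reduces to one weighted combination per pixel; a fragment with assumed tensor names:

```python
# sr_low, sr_high: outputs of the low- and high-frequency branches, shape (N, 3, H, W)
# mask: attention mask A, shape (N, 1, H, W), values in (0, 1)
fused = (1 - mask) * sr_low + mask * sr_high  # I_y = (1 - A) * f_low + A * f_high
output = final_conv(fused)                    # last 3x3 convolution, 3-channel image
```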
The scheme of the step C is specifically as follows:
using the MSE loss function as the loss function of the low frequency branch, it is calculated the high resolution image f generated by the low frequency branch low (F SF ) And the true high resolution image I in the sample HR Mean square error between. The loss function of the high frequency branch is composed of the loss function of the arbiter and the loss function of the generator. The relative average arbiter (RaD) is used as the arbiter for our high frequency branches. The output of RaD being closer to 1 means that the real image x r Pseudo-image x f More realistic. The loss function of the arbiter is defined as follows:
wherein D is Ra (. Cndot.) is RaD asC (·) represents the output of the arbiter, ++>Represents the operation of averaging all the dummy data in the batch, and sigma (·) sigmoid function. The loss function of the generator is composed of average absolute error, perceived loss and counterloss. We use L 1 Constraining the generated image to be closer to the real image by the loss function, L 1 The formula of the loss function is as follows:
Wherein W, H, C respectively represent the width, height and number of channels of the high resolution image,representing the function of the generator, θ representing the parameters of the generator, I i Representing the ith image. The purpose of the perceptual penalty is to measure the perceptual similarity between the SR image and the corresponding HR image, enabling pre-training prior to activating the layerThe distance between two advanced features extracted in a good network is minimized. The SR image and HR image are taken as inputs to the pretraining network VGG 19. The loss formula of the perceptual function is as follows:
wherein,represented as a function of the pretrained network VGG19, I i Representing the ith image, G (-) represents the function of the generator. The antagonism loss of the generator and the loss function of the arbiter are in a symmetrical form:
wherein D is Ra (. Cndot.) is RaD asC (·) represents the output of the arbiter, ++>Represents the operation of averaging all the dummy data in the batch, and sigma (·) sigmoid function.
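These losses map onto PyTorch as sketched below, following the relativistic-average formulation defined above. `vgg_features` stands for a VGG19 feature extractor truncated before an activation layer, and the loss weights in the last line are assumptions, not values from the patent.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(c_real, c_fake):
    """RaD loss: real outputs should exceed the batch-average fake outputs."""
    d_real = c_real - c_fake.mean()  # relativistic logits
    d_fake = c_fake - c_real.mean()
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
            F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

def generator_loss(c_real, c_fake, sr, hr, vgg_features):
    """L1 + perceptual + relativistic adversarial loss (weights are assumptions)."""
    l1 = F.l1_loss(sr, hr)
    percep = F.l1_loss(vgg_features(sr), vgg_features(hr))
    d_real = c_real - c_fake.mean()
    d_fake = c_fake - c_real.mean()
    adv = (F.binary_cross_entropy_with_logits(d_real, torch.zeros_like(d_real)) +
           F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)))
    return l1 + 1.0 * percep + 0.005 * adv  # example weights, not from the patent
```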
The training process is divided into two stages. First, a PSNR-oriented model containing all branches is pretrained using the MSE loss, and this trained PSNR-oriented model is used to initialize the HFB network. Second, the HFB is trained in an adversarial manner, while the LFB and the mask-network branch continue to be updated with the $L_1$ loss, until the model converges.
During training, the batch size is set to 32 and the initial learning rate to $10^{-4}$. In the iterative training process, the learning rate is halved every $2\times 10^{5}$ iterations according to the convergence of the network. The invention uses an ADAM optimizer to back-propagate gradients through the model, with the ADAM parameters set to $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$.
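The stated hyper-parameters correspond directly to a standard PyTorch optimizer and step scheduler; the sketch below assumes one scheduler step per training iteration, with `model`, `loader` and `compute_loss` as assumed stand-ins.

```python
import torch

optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
# Halve the learning rate every 2x10^5 iterations.
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=200_000, gamma=0.5)

for lr_batch, hr_batch in loader:  # batch size 32
    loss = compute_loss(model(lr_batch), hr_batch)  # assumed helper
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```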
The scheme of the step D is specifically as follows:
the DIV2K is selected as 100 images of a test Set, and images of reference data sets Set5, set14, BSD100, urban100 and Manga109 are sequentially input into a previously trained network model, and high-resolution images are output.
In summary, the method, system, device and storage medium for image equalization enhancement provided by the embodiments of the invention use an image super-resolution model, established from low-resolution image samples, high-resolution image samples and a preset loss function, to perform resolution processing on the low-resolution image to be processed. This accurately and efficiently restores the low-resolution image to a high-resolution image, fuses the advantages of evaluation-index-oriented methods and perception-driven methods, and further improves the restoration effect of single-image super-resolution.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. A method of image equalization enhancement, comprising:
the training method of the image super-resolution model comprises the following steps:
collecting a training sample, the training sample comprising a low resolution image sample and a high resolution image sample;
extracting image features from the low-resolution image sample based on a sharing module to generate shallow image features;
combining a left-right asymmetric super-resolution network, extracting global feature information and detail feature information from the shallow image features through a low-frequency branch and a high-frequency branch; connecting in series, through a global guidance mechanism, the global feature information extracted by the low-frequency branch with the detail feature information obtained by the lower-layer module of the high-frequency branch; the left-right asymmetric super-resolution network consisting of a low-frequency branch and a high-frequency branch, the low-frequency branch being used for extracting global feature information and the high-frequency branch for extracting detail feature information;
performing feature extraction and reconstruction on the shallow image features based on a mask network to generate an attention mask;
adaptively reconstructing the global feature information and the detail feature information using the attention mask to reconstruct a high-resolution image;
converging the reconstructed high-resolution image toward the high-resolution image sample through back-propagation based on a preset loss function, and establishing an image super-resolution model; the preset loss function comprising a loss function of the low-frequency branch and a loss function of the high-frequency branch of the left-right asymmetric super-resolution network;
processing the low-resolution image through the image super-resolution model, and outputting a high-resolution image;
wherein the further extracting global feature information and detail feature information from the shallow image features through the global guidance mechanism in combination with the left-right asymmetric super-resolution network further comprises:
the serially connected feature information continues to be input into the higher-layer module of the high-frequency branch, so that the global feature information is injected into the high-frequency branch to help guide the high-frequency branch to further acquire detail feature information;
after the low-frequency branch and the high-frequency branch extract the global feature information and the detail feature information, the global feature image and the detail feature image are reconstructed through the respective up-sampling and reconstruction layers.
2. The method of image equalization enhancement of claim 1, wherein collecting the training samples, the training samples comprising low-resolution image samples and high-resolution image samples, comprises:
collecting high-resolution image samples and downsampling them with an image degradation algorithm to generate low-resolution image samples; and
creating training sample pairs from the high-resolution image samples and the generated low-resolution image samples.
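
As an illustration of claim 2, a minimal sketch of pair construction, assuming bicubic interpolation as the image degradation algorithm (the claim leaves the algorithm open):

    import torch.nn.functional as F

    def make_training_pair(hr, scale=4):
        """Degrade an HR sample (an N x C x H x W tensor in [0, 1]) into an
        LR sample; bicubic downsampling is an assumed degradation model."""
        lr = F.interpolate(hr, scale_factor=1.0 / scale,
                           mode="bicubic", align_corners=False)
        return lr.clamp(0, 1), hr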
3. The method of image equalization enhancement of claim 1, wherein the shared module, the left-right asymmetric super-resolution network, and the mask network all use residual dense connection blocks as their basic building block to extract multi-level feature information from the low-resolution image.
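
Claim 3's basic block follows the residual dense network of Zhang et al. (listed under non-patent citations below). A compact sketch, with channel width, growth rate, and per-block layer count as assumed hyper-parameters:

    import torch
    import torch.nn as nn

    class ResidualDenseBlock(nn.Module):
        """Densely connected convolutions inside the block, a 1x1 fusion
        convolution, and a local residual skip around the whole block."""
        def __init__(self, channels=64, growth=32, layers=4):
            super().__init__()
            self.convs = nn.ModuleList(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1)
                for i in range(layers))
            self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            feats = [x]
            for conv in self.convs:
                feats.append(self.act(conv(torch.cat(feats, dim=1))))
            return x + self.fuse(torch.cat(feats, dim=1))  # local residual learning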
4. The method of image equalization enhancement of claim 3, wherein the shared module comprises 2 residual dense connection blocks;
wherein extracting image features from the low-resolution image samples using the shared module to generate shallow image features comprises:
passing the low-resolution image sample through a convolution layer to obtain a shallow feature map before it enters the shared module;
inputting the shallow feature map into the shared module to extract image features and generate the shallow image features; and
sharing the obtained shallow image features between the left-right asymmetric super-resolution network and the mask network.
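
A sketch of claim 4's shared front end, reusing the ResidualDenseBlock sketched under claim 3; the convolution before the shared module and the two-block depth follow the claim, while the 64-channel width is an assumption:

    import torch.nn as nn

    class SharedModule(nn.Module):
        """Shallow convolution -> 2 residual dense connection blocks; the
        output is shared by both SR branches and the mask network."""
        def __init__(self, in_ch=3, channels=64):
            super().__init__()
            self.shallow_conv = nn.Conv2d(in_ch, channels, 3, padding=1)
            self.blocks = nn.Sequential(ResidualDenseBlock(channels),
                                        ResidualDenseBlock(channels))

        def forward(self, lr):
            return self.blocks(self.shallow_conv(lr))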
5. The method of image equalization enhancement according to claim 4, wherein the low-frequency branch consists of a deep feature extraction module, an upsampling module, and a reconstruction module, the deep feature extraction module consisting of 15 residual dense connection blocks, the upsampling module of 1 convolution layer and 1 nearest-neighbour upsampling layer, and the reconstruction module of 1 convolution layer;
wherein the high-frequency branch consists of a generative adversarial network comprising a generator and a discriminator, the generator likewise consisting of a deep feature extraction module, an upsampling module, and a reconstruction module, and the discriminator being a classification network.
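
Sketches of claim 5's two branches follow. The low-frequency branch's 15-block depth, nearest-neighbour upsampling, and single reconstruction convolution track the claim; the high-frequency sketch also folds in the lower-layer/higher-layer split with global guidance from claim 1. Block counts on the high-frequency side, channel widths, and the scale factor are assumptions, and the discriminator (a plain classification network per the claim) is omitted:

    import torch
    import torch.nn as nn

    class LowFreqBranch(nn.Module):
        """15 RDBs -> conv + nearest-neighbour upsample -> 1-conv reconstruction."""
        def __init__(self, channels=64, out_ch=3, scale=4):
            super().__init__()
            self.deep = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(15)])
            self.upsample = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Upsample(scale_factor=scale, mode="nearest"))
            self.reconstruct = nn.Conv2d(channels, out_ch, 3, padding=1)

        def forward(self, shallow):
            global_feat = self.deep(shallow)  # global (low-frequency) features
            return global_feat, self.reconstruct(self.upsample(global_feat))

    class HighFreqBranch(nn.Module):
        """Generator of the HF branch: lower-layer blocks extract detail
        features, the LF branch's global features are concatenated in
        (the global guidance mechanism), and higher-layer blocks refine
        the result before upsampling and reconstruction."""
        def __init__(self, channels=64, out_ch=3, scale=4,
                     low_blocks=3, high_blocks=12):
            super().__init__()
            self.lower = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(low_blocks)])
            self.guide_fuse = nn.Conv2d(2 * channels, channels, 1)  # concatenate, then fuse
            self.higher = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(high_blocks)])
            self.upsample = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Upsample(scale_factor=scale, mode="nearest"))
            self.reconstruct = nn.Conv2d(channels, out_ch, 3, padding=1)

        def forward(self, shallow, global_feat):
            detail = self.lower(shallow)
            x = self.guide_fuse(torch.cat([detail, global_feat], dim=1))
            return self.reconstruct(self.upsample(self.higher(x)))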
6. The method of image equalization enhancement according to claim 5, wherein the mask network consists of a deep feature extraction module of 5 residual dense connection blocks, an upsampling module of 1 convolution layer and 1 nearest-neighbour upsampling layer, and a reconstruction module of 1 convolution layer;
wherein generating the attention mask from the shallow image features using the mask network comprises:
further extracting deep features from the shallow image features through the 5 residual dense connection blocks of the mask network;
passing the extracted deep features through the upsampling layer and the reconstruction layer to generate a deep feature map; and
applying a sigmoid function to the deep feature map to generate the attention mask.
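
And claim 6's mask network, which mirrors the branch layout but ends in a sigmoid; the depths follow the claim, while the channel width and the single-channel mask (broadcast across colour channels at fusion time) are assumptions:

    import torch
    import torch.nn as nn

    class MaskNet(nn.Module):
        """5 RDBs -> conv + nearest upsample -> 1-conv -> sigmoid mask."""
        def __init__(self, channels=64, scale=4):
            super().__init__()
            self.deep = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(5)])
            self.upsample = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Upsample(scale_factor=scale, mode="nearest"))
            self.reconstruct = nn.Conv2d(channels, 1, 3, padding=1)

        def forward(self, shallow):
            x = self.reconstruct(self.upsample(self.deep(shallow)))
            return torch.sigmoid(x)  # per-pixel probability, i.e. the attention mask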
7. The method of image equalization enhancement according to claim 6, wherein the attention mask is a probability matrix representing the degree to which each pixel of the detail feature image reconstructed by the high-frequency branch contributes to the final output image;
wherein adaptively fusing the global feature information and the detail feature information using the attention mask to reconstruct the high-resolution image comprises:
combining the attention mask with the detail feature image reconstructed by the high-frequency branch and the global feature image reconstructed by the low-frequency branch, so that the results of the two branches are merged adaptively; and
outputting the final reconstructed high-resolution image through a final convolution layer.
8. The method of image equalization enhancement of claim 7, wherein the loss function of the low-frequency branch computes the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image samples;
wherein the loss function of the high-frequency branch consists of the loss function of the discriminator and the loss function of the generator, the loss function of the generator comprising a mean absolute error term, a perceptual loss term, and an adversarial loss term computed between the detail feature image generated by the high-frequency branch and the high-resolution image samples.
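
A sketch of claim 8's objectives. The perceptual-loss callable (e.g. a VGG feature distance) and the loss weights are assumptions; the claim only fixes which terms appear:

    import torch
    import torch.nn.functional as F

    def low_freq_loss(sr_low, hr):
        """Mean square error between the LF branch's global feature image
        and the high-resolution sample."""
        return F.mse_loss(sr_low, hr)

    def generator_loss(sr_high, hr, disc_fake_logits, perceptual_loss,
                       w_perc=1.0, w_adv=0.005):
        """Mean absolute error + perceptual + adversarial terms for the
        HF generator; weights are illustrative, not from the patent."""
        mae = F.l1_loss(sr_high, hr)
        perc = perceptual_loss(sr_high, hr)
        adv = F.binary_cross_entropy_with_logits(
            disc_fake_logits, torch.ones_like(disc_fake_logits))
        return mae + w_perc * perc + w_adv * adv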
9. A system for image equalization enhancement, comprising:
a training module for training an image super-resolution model, the training module comprising:
a sampling sub-module for collecting training samples, the training samples comprising low-resolution image samples and high-resolution image samples;
a first extraction sub-module for extracting image features from the low-resolution image samples using a shared module to generate shallow image features;
a second extraction sub-module for extracting global feature information and detail feature information from the shallow image features through the low-frequency branch and the high-frequency branch of a left-right asymmetric super-resolution network, wherein the network consists of a low-frequency branch for extracting global feature information and a high-frequency branch for extracting detail feature information, and wherein a global guidance mechanism concatenates the global feature information extracted by the low-frequency branch with the detail feature information produced by the lower-layer modules of the high-frequency branch;
a generating sub-module for performing feature extraction and reconstruction on the shallow image features using a mask network to generate an attention mask;
a reconstruction sub-module for adaptively fusing the global feature information and the detail feature information using the attention mask to reconstruct a high-resolution image;
a model building sub-module for back-propagating the error between the reconstructed high-resolution image and the high-resolution image samples under a preset loss function until convergence, thereby establishing the image super-resolution model, the preset loss function comprising a loss function for the low-frequency branch and a loss function for the high-frequency branch of the left-right asymmetric super-resolution network; and
an output module for processing a low-resolution image through the image super-resolution model and outputting a high-resolution image.
10. An apparatus comprising a memory for storing at least one program and a processor for loading the at least one program to perform the method of any of claims 1-8.
11. A storage medium storing a processor-executable program which, when executed by a processor, is adapted to carry out the method of any one of claims 1-8.
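
Tying the sketches together, a smoke test of the assembled model with random weights and a hypothetical input size, of the kind a claim-10 apparatus would run after loading a trained program:

    import torch
    import torch.nn as nn

    model = EqualizedSRNet(SharedModule(), LowFreqBranch(), HighFreqBranch(),
                           MaskNet(), nn.Conv2d(3, 3, 3, padding=1))
    model.eval()
    with torch.no_grad():
        lr = torch.rand(1, 3, 64, 64)  # stand-in low-resolution input
        hr = model(lr)
    print(hr.shape)                    # torch.Size([1, 3, 256, 256]) at scale 4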
CN202110681163.0A 2021-06-18 2021-06-18 Method, system, device and storage medium for image equalization enhancement Active CN113421188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110681163.0A CN113421188B (en) 2021-06-18 2021-06-18 Method, system, device and storage medium for image equalization enhancement

Publications (2)

Publication Number Publication Date
CN113421188A CN113421188A (en) 2021-09-21
CN113421188B (en) 2024-01-05

Family

ID=77789242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110681163.0A Active CN113421188B (en) 2021-06-18 2021-06-18 Method, system, device and storage medium for image equalization enhancement

Country Status (1)

Country Link
CN (1) CN113421188B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114781601B (en) * 2022-04-06 2022-12-23 University of Science and Technology Beijing Image super-resolution method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018053340A1 (en) * 2016-09-15 2018-03-22 Twitter, Inc. Super resolution using a generative adversarial network

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345449A (en) * 2018-07-17 2019-02-15 Xi'an Jiaotong University Image super-resolution and non-uniform blur removal method based on a fusion network
CN109410239A (en) * 2018-11-07 2019-03-01 Nanjing University Text image super-resolution reconstruction method based on a conditional generative adversarial network
CN110136063A (en) * 2019-05-13 2019-08-16 Nanjing University of Information Science and Technology Single-image super-resolution reconstruction method based on a conditional generative adversarial network
CN110675321A (en) * 2019-09-26 2020-01-10 Lanzhou University of Technology Super-resolution image reconstruction method based on a progressive deep residual network
CN111476745A (en) * 2020-01-13 2020-07-31 Hangzhou Dianzi University Multi-branch network and method for motion-blur super-resolution
CN111583109A (en) * 2020-04-23 2020-08-25 South China University of Technology Image super-resolution method based on a generative adversarial network
CN112699844A (en) * 2020-04-23 2021-04-23 South China University of Technology Image super-resolution method based on a multi-scale residual hierarchical dense connection network
CN112561799A (en) * 2020-12-21 2021-03-26 Jiangxi Normal University Infrared image super-resolution reconstruction method
CN112862689A (en) * 2021-03-09 2021-05-28 Nanjing University of Posts and Telecommunications Image super-resolution reconstruction method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Residual Dense Network for Image Super-Resolution; Yulun Zhang; arXiv; full text *

Also Published As

Publication number Publication date
CN113421188A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN109903228B (en) Image super-resolution reconstruction method based on convolutional neural network
CN109509152B (en) Image super-resolution reconstruction method for generating countermeasure network based on feature fusion
CN109816593B (en) Super-resolution image reconstruction method for generating countermeasure network based on attention mechanism
CN106683067B (en) Deep learning super-resolution reconstruction method based on residual sub-images
WO2022267641A1 (en) Image defogging method and system based on cyclic generative adversarial network
CN112348743B (en) Image super-resolution method fusing discriminant network and generation network
CN109727195B (en) Image super-resolution reconstruction method
CN108259994B (en) Method for improving video spatial resolution
CN109685716B (en) Image super-resolution reconstruction method for generating countermeasure network based on Gaussian coding feedback
CN110634105B (en) Video high-space-time resolution signal processing method combining optical flow method and depth network
CN110675321A (en) Super-resolution image reconstruction method based on progressive depth residual error network
CN111524068A (en) Variable-length input super-resolution video reconstruction method based on deep learning
CN109035146A (en) Low-quality image super-resolution method based on deep learning
CN111932461A (en) Convolutional neural network-based self-learning image super-resolution reconstruction method and system
CN112288632A (en) Single image super-resolution method and system based on simplified ESRGAN
CN112184547B (en) Super resolution method of infrared image and computer readable storage medium
CN111462208A (en) Non-supervision depth prediction method based on binocular parallax and epipolar line constraint
CN111861886A (en) Image super-resolution reconstruction method based on multi-scale feedback network
CN115393191A (en) Method, device and equipment for reconstructing super-resolution of lightweight remote sensing image
CN113421188B (en) Method, system, device and storage medium for image equalization enhancement
CN111696042A (en) Image super-resolution reconstruction method based on sample learning
CN116029902A (en) Knowledge distillation-based unsupervised real world image super-resolution method
CN113379606B (en) Face super-resolution method based on pre-training generation model
CN111754399A (en) Image super-resolution method for keeping geometric structure based on gradient
CN112435165B (en) Two-stage video super-resolution reconstruction method based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant