CN113421188A - Method, system, device and storage medium for image equalization enhancement - Google Patents
- Publication number
- CN113421188A (application number CN202110681163.0A)
- Authority
- CN
- China
- Prior art keywords: image, low resolution, frequency branch, module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention discloses a method, a system, a device and a storage medium for image equalization enhancement. The method comprises: collecting a low-resolution image to be processed and processing it through an image super-resolution model to generate a high-resolution image; and collecting training samples, which comprise low-resolution image samples and high-resolution image samples, and establishing the image super-resolution model from the collected training samples based on a preset loss function and the high-resolution image samples. Through the trained image super-resolution model, the method restores a low-resolution image to a high-resolution image, combines the advantages of evaluation-index-oriented and perception-driven methods, further improves the restoration quality of single-image super-resolution, and can be widely applied in the technical field of image processing.
Description
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to a method, a system, a device and a storage medium for image equalization enhancement.
Background
With the development of technology, people's requirements for image quality keep rising, and obtaining high-quality images has become an indispensable research direction. Single-image super-resolution, which recovers a high-resolution image from a single low-resolution image, has received much attention in computer vision research. The technique has practical significance in many fields. In security surveillance, for example, limited camera resolution or an overly distant target can leave surveillance images low-resolution and hard to recognize, hindering the extraction of information from them; single-image super-resolution can effectively raise the resolution and refine the texture of such images, improving their quality. In the medical field, the images produced by medical instruments are often of low resolution, which can affect a doctor's ability to make an accurate diagnosis; super-resolving these low-resolution images improves their quality and helps doctors diagnose better. Old photographs, limited by the technology of their time, often have unsatisfactory resolution by today's standards, which seriously degrades the viewing experience; single-image super-resolution can restore them to some extent and improve their quality. In addition, single-image super-resolution can also be used in other fields such as remote sensing imagery and target recognition.
Current single-image super-resolution methods fall into three categories: interpolation-based, reconstruction-based and learning-based. Interpolation-based methods, such as bicubic interpolation, are fast and straightforward, but interpolation loses image detail and the super-resolution results are unsatisfactory. Reconstruction-based methods can restore some sharp details by constraining the solution space with complex prior knowledge; however, their performance degrades significantly as the scale factor grows, and they often take a significant amount of time. Learning-based methods typically use machine-learning algorithms to obtain a model that maps a low-resolution image to a high-resolution one, and have attracted considerable attention for their excellent performance and fast computation.
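Both the speed and the detail loss of interpolation can be seen in a small sketch. The following is a minimal, self-contained illustration of the cubic (Catmull-Rom) kernel that underlies bicubic interpolation, applied to a 1D signal; it is illustrative only and not part of the patent:

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    # Catmull-Rom-style cubic interpolation kernel (the 1D kernel behind "bicubic").
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def upscale_1d(signal, scale):
    # Resample a 1D signal at `scale`-times density with the 4-tap cubic kernel.
    n = len(signal)
    out = np.empty(n * scale)
    for i in range(n * scale):
        src = i / scale                      # position in source coordinates
        base = int(np.floor(src))
        val = 0.0
        for k in range(base - 1, base + 3):  # 4-tap neighbourhood
            kk = min(max(k, 0), n - 1)       # clamp indices at the borders
            val += signal[kk] * cubic_kernel(src - k)
        out[i] = val
    return out

sharp_edge = np.array([0.0, 0.0, 1.0, 1.0])
up = upscale_1d(sharp_edge, 2)
```

The kernel reproduces the original samples exactly at integer positions, but near the sharp edge the interpolated value overshoots 1.0: the kernel cannot synthesize the missing high-frequency content and instead introduces ringing, which is exactly the detail problem the text describes.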
In recent years, with the development and wide application of deep learning, many deep-learning-based methods have been proposed for single-image super-resolution, and they show clear superiority over other approaches. Earlier deep-learning methods mainly optimize a mean-square-error loss to obtain higher evaluation indices. However, such index-oriented methods usually produce overly smooth edges; the generated images lack high-frequency detail and have poor perceptual quality. Perception-driven methods were therefore proposed to improve the visual quality of super-resolution results, but while they produce realistic details they also introduce unpleasant noise.
Disclosure of Invention
The invention aims to provide a method, a system, a device and a storage medium for image equalization enhancement that combine the advantages of the evaluation-index-oriented method and the perception-driven method, further improve the restoration quality of single-image super-resolution, and enable it to handle single-image super-resolution tasks in various complex scenes.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, a method for image equalization enhancement includes:
the training method of the image super-resolution model comprises the following steps:
collecting training samples, wherein the training samples comprise low-resolution image samples and high-resolution image samples;
extracting image features of the low-resolution image sample based on a sharing module to generate image shallow features;
a global guide mechanism further extracts global feature information and detail feature information from the image shallow feature by combining a left-right asymmetric hyper-division network;
generating an attention mask for the image shallow feature based on a mask network;
adaptively reconstructing the global characteristic information and the detail characteristic information by using the attention mask to reconstruct a high-resolution image;
reversely converging the reconstructed high-resolution image and the high-resolution image sample based on a preset loss function, and establishing an image super-resolution model;
and processing the low-resolution image through the image super-resolution model, and outputting a high-resolution image.
Optionally, the collecting training samples, the training samples including low resolution image samples and high resolution image samples, comprises:
collecting a high-resolution image sample, and performing down-sampling on the high-resolution image sample by adopting an image degradation algorithm to generate a low-resolution image sample;
and establishing pairs of training samples by using the high-resolution image samples and the generated low-resolution image samples.
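The pairing step above can be sketched as follows. The patent names only "an image degradation algorithm" without fixing one; block averaging is used here as a simple, assumption-labeled stand-in for the usual bicubic downsampling:

```python
import numpy as np

def degrade(hr, scale=4):
    # Degradation sketch: anti-aliased `scale`-times downsampling by block
    # averaging. (Stand-in only; the patent does not specify the algorithm.)
    h, w = hr.shape[:2]
    h, w = h - h % scale, w - w % scale  # crop so dimensions divide evenly
    hr = hr[:h, :w]
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def make_pairs(hr_images, scale=4):
    # Build the paired training set {(LR sample, HR sample)} from collected HR images.
    return [(degrade(img, scale), img) for img in hr_images]

hr = np.arange(64, dtype=float).reshape(8, 8)
pairs = make_pairs([hr], scale=4)
lr, hr_out = pairs[0]
```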
Optionally, the shared module, the left-right asymmetric hyper-division network, and the mask network all use a residual dense connection block as a basic block to extract multi-level feature information of the low-resolution image.
Optionally, the shared module is provided with 2 residual error dense connection blocks;
the extracting image features from the low-resolution image sample based on a sharing module to generate image shallow features comprises:
before being input into the sharing module, the low-resolution image sample is passed through a convolutional layer to obtain a shallow feature map;
inputting the shallow feature map into the sharing module, extracting image features and generating image shallow features;
the obtained image shallow feature is shared by the left-right asymmetric hyper-division network and the mask network.
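The role of the first convolutional layer can be sketched as below; the kernel choices and the naive valid-mode convolution are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def conv2d(img, kernels):
    # Naive valid-mode 2D convolution, one output channel per kernel: a
    # stand-in for the first convolutional layer that turns the LR image
    # into the shallow feature map shared by both downstream networks.
    kh, kw = kernels.shape[1:]
    h, w = img.shape
    out = np.zeros((len(kernels), h - kh + 1, w - kw + 1))
    for c, k in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

lr = np.arange(25, dtype=float).reshape(5, 5)
kernels = np.stack([
    np.ones((3, 3)) / 9.0,                                   # local average
    np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], float),     # horizontal gradient
])
shallow = conv2d(lr, kernels)  # shape: (channels, H-2, W-2)
```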
Optionally, the left-right asymmetric hyper-division network is composed of a low-frequency branch and a high-frequency branch, the low-frequency branch is used for extracting global feature information, and the high-frequency branch is used for extracting detail feature information;
the global guide mechanism further extracts global feature information and detail feature information from the image shallow feature by combining the left-right asymmetric hyper-division network, and the method comprises the following steps:
the left-right asymmetric hyper-division network extracts global feature information and detail feature information from the image shallow feature through the low-frequency branch and the high-frequency branch;
the global guide mechanism concatenates the global feature information extracted by the low-frequency branch with the detail feature information obtained by the low-level modules of the high-frequency branch;
the concatenated feature information is then fed into the high-level modules of the high-frequency branch, so that the global feature information is injected into the high-frequency branch and helps guide it to further extract detail feature information;
after the low-frequency branch and the high-frequency branch extract the global characteristic information and the detail characteristic information, the global characteristic image and the detail characteristic image are reconstructed through respective up-sampling layers and reconstruction layers.
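The concatenation step of the global guide mechanism above can be sketched as follows; the channel count, spatial size and the 1×1-convolution-style projection are illustrative assumptions:

```python
import numpy as np

np.random.seed(0)
c, h, w = 64, 32, 32
global_feat = np.random.randn(c, h, w)  # global features from the low-frequency branch
detail_low = np.random.randn(c, h, w)   # output of the low-level high-frequency blocks

# Channel-wise concatenation: global information is injected into the
# high-frequency branch alongside its own low-level detail features.
fused = np.concatenate([detail_low, global_feat], axis=0)

# A 1x1-conv-style projection back to c channels, standing in for whatever
# the high-level high-frequency modules consume next (an assumption).
proj = np.random.randn(c, 2 * c)
detail_in = np.tensordot(proj, fused, axes=([1], [0]))
```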
Optionally, the low-frequency branch is composed of a deep feature extraction module, an upsampling module and a reconstruction module, the deep feature extraction module is composed of 15 residual dense-connected blocks, the upsampling module is composed of 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module is composed of 1 convolutional layer;
the high-frequency branch is a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
Optionally, the mask network is composed of a deep feature extraction module, an upsampling module and a reconstruction module, wherein the deep feature extraction module is composed of 5 residual dense connection blocks, the upsampling module is composed of 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module is composed of 1 convolutional layer;
the generating an attention mask for the image shallow feature based on the mask network comprises:
the mask network further extracts deep features from the image shallow features through its 5 residual dense connection blocks;
the extracted deep features generate a deep feature map through an up-sampling layer and a reconstruction layer;
an attention mask is then generated from the deep feature map using a sigmoid function.
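The final sigmoid step can be written out directly; the sigmoid squashes the real-valued deep feature map into [0, 1] so that each pixel becomes a probability-like attention weight (the sample values below are illustrative):

```python
import numpy as np

def sigmoid(x):
    # Element-wise logistic function mapping any real value into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# One-channel deep feature map emitted by the mask network's reconstruction layer.
deep_feature_map = np.array([[-3.0, 0.0],
                             [ 0.5, 4.0]])
attention_mask = sigmoid(deep_feature_map)
```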
Optionally, the attention mask is a probability matrix representing the degree of contribution of each pixel in the detail feature image reconstructed by the high-frequency branch to the final output image;
the adaptively reconstructing the global feature information and the detail feature information by using the attention mask to reconstruct the high resolution image includes:
the attention mask adaptively combines the global feature image reconstructed by the low-frequency branch with the detail feature image reconstructed by the high-frequency branch;
and outputting the combined result to a final reconstructed high-resolution image through the last layer of convolution layer.
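One common way to realize this adaptive combination, consistent with but not stated verbatim in the description, is a per-pixel convex combination weighted by the mask (the final convolutional layer is omitted here as an assumption):

```python
import numpy as np

def adaptive_merge(hf_image, lf_image, mask):
    # Per-pixel convex combination: where the mask is near 1 the
    # perception-driven high-frequency result dominates; where it is near 0
    # the index-oriented low-frequency result does. (The exact fusion rule
    # is an assumption; the patent states only that the mask weights the
    # contribution of the high-frequency branch's pixels.)
    return mask * hf_image + (1.0 - mask) * lf_image

hf = np.full((2, 2), 1.0)    # detail feature image (high-frequency branch)
lf = np.full((2, 2), 0.0)    # global feature image (low-frequency branch)
mask = np.array([[1.0, 0.0],
                 [0.5, 0.25]])
merged = adaptive_merge(hf, lf, mask)
```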
Optionally, the preset loss function includes a loss function of a low-frequency branch and a loss function of a high-frequency branch of the left-right asymmetric hyper-division network;
the loss function of the low-frequency branch is used for calculating the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch is composed of a loss function of a discriminator and a loss function of a generator; and the loss function of the generator consists of average absolute error, perception loss and countermeasure loss, and the average absolute error, the perception loss and the countermeasure loss between the detail feature image generated by the high-frequency branch and the high-resolution image sample are calculated.
In a second aspect, a system for image equalization enhancement is provided, including:
the training module is used for training the image super-resolution model and comprises:
the sampling submodule is used for collecting training samples, and the training samples comprise low-resolution image samples and high-resolution image samples;
the first extraction submodule is used for extracting image features from the low-resolution image sample based on the sharing module and generating image shallow features;
the second extraction submodule is used for further extracting global feature information and detail feature information from the image shallow feature by combining a left-right asymmetric hyper-division network and a global guide mechanism;
the generation submodule is used for generating an attention mask for the image shallow feature based on a mask network;
the reconstruction submodule is used for adaptively reconstructing the global characteristic information and the detail characteristic information by utilizing the attention mask and reconstructing a high-resolution image;
the model establishing submodule is used for reversely converging the reconstructed high-resolution image and the high-resolution image sample based on a preset loss function and establishing an image super-resolution model;
and the output module is used for processing the low-resolution image through the image super-resolution model and outputting the high-resolution image.
Optionally, the sampling sub-module comprises:
the sampling unit is used for carrying out downsampling on the high-resolution image sample by adopting an image degradation algorithm to generate a low-resolution image sample;
and the sample establishing unit is used for establishing a pair of training samples by using the high-resolution image sample and the generated low-resolution image sample.
Optionally, the shared module, the left-right asymmetric hyper-division network, and the mask network all use a residual dense connection block as a basic block to extract multi-level feature information of the low-resolution image.
Optionally, the shared module is provided with 2 residual error dense connection blocks, and the first extraction sub-module includes:
the first input subunit is used for obtaining a shallow feature map by passing the low-resolution image sample through a convolutional layer before inputting the shared module;
the second input subunit is used for inputting the shallow feature map into the sharing module, extracting image features and generating image shallow features;
and the sharing subunit is used for sharing the obtained image shallow feature by the left-right asymmetric hyper-division network and the mask network.
Optionally, the left-right asymmetric hyper-division network is composed of a low-frequency branch and a high-frequency branch, the low-frequency branch is used for extracting global feature information, the high-frequency branch is used for extracting detail feature information, and the second extraction sub-module includes:
the first extraction subunit is used for extracting global feature information and detail feature information from the image shallow feature through a low-frequency branch and a high-frequency branch by the left-right asymmetric hyper-division network;
the concatenation subunit is used for the global guide mechanism to concatenate the global feature information extracted by the low-frequency branch with the detail feature information obtained by the low-level modules of the high-frequency branch;
the second extraction subunit is used for feeding the concatenated feature information into the high-level modules of the high-frequency branch, so that the global feature information is injected into the high-frequency branch and helps guide it to further extract detail feature information;
and the first output subunit is used for reconstructing a global characteristic image and a detail characteristic image through respective up-sampling layers and reconstruction layers after the global characteristic information and the detail characteristic information are extracted by the low-frequency branch and the high-frequency branch.
Optionally, the low-frequency branch comprises a deep feature extraction module, an upsampling module and a reconstruction module, wherein the deep feature extraction module comprises 15 residual dense connection blocks, the upsampling module comprises 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module comprises 1 convolutional layer;
the high-frequency branch is a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
Optionally, the mask network comprises a deep feature extraction module, an upsampling module and a reconstruction module, wherein the deep feature extraction module comprises 5 residual error dense connection blocks, the upsampling module comprises 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module comprises 1 convolutional layer;
the generation submodule includes:
a third extraction subunit, configured to, by the mask network, further extract deep features from the shallow features of the image through 5 residual dense blocks;
the second output subunit is used for further extracting the deep features and generating a deep feature map through the up-sampling layer and the reconstruction layer;
and generating an attention subunit for the deep feature map to generate an attention mask by using the sigmoid function.
Optionally, the attention mask is a probability matrix representing the degree of contribution of each pixel in the detail feature image reconstructed by the high-frequency branch to the final output image;
the reconstruction sub-module includes:
the reconstruction subunit is used for reconstructing a global feature image and a detail feature image by combining the high-frequency branch and the low-frequency branch and adaptively combining the results of the high-frequency branch and the low-frequency branch;
and the third output subunit is used for outputting the combined result to the finally reconstructed high-resolution image through the last layer of convolution layer.
Optionally, the preset loss function includes a loss function of a low-frequency branch and a loss function of a high-frequency branch of the left-right asymmetric hyper-division network;
the loss function of the low-frequency branch is used for calculating the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch is composed of a discriminator loss function and a generator loss function; the generator loss function consists of mean absolute error, perceptual loss and adversarial loss, computed between the detail feature image generated by the high-frequency branch and the high-resolution image sample.
In a third aspect, an apparatus is provided that includes a memory for storing at least one program and a processor for loading the at least one program to perform the method of image equalization enhancement as described above.
In a fourth aspect, a storage medium is provided that stores a processor-executable program, which when executed by a processor is configured to perform the method of image equalization enhancement as described above.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
according to the method, system, device and storage medium for image equalization enhancement, the low-resolution image to be processed is handled by an image super-resolution model trained from low-resolution image samples and high-resolution image samples with a preset loss function; this accurately and efficiently restores the low-resolution image to a high-resolution image, combines the advantages of the evaluation-index-oriented method and the perception-driven method, and further improves the restoration quality of single-image super-resolution.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
The structures, proportions and sizes shown in this specification are only intended to match the content disclosed herein, so that those skilled in the art can understand and read the invention; they do not limit the conditions under which the invention may be practiced and therefore carry no technical significance by themselves. Any structural modification, change of proportion or adjustment of size that does not affect the functions and purposes achievable by the invention shall still fall within the scope covered by the disclosed content.
FIG. 1 is a flowchart illustrating steps of a method for enhancing image equalization according to an embodiment of the present invention;
FIG. 2 is a block diagram of a system for enhancing image equalization according to an embodiment of the present invention;
FIG. 3 is a flow chart of an image super-resolution model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the structure of the residual dense connection block used by the shared module, the left-right asymmetric hyper-division network, and the mask network in the embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1, the present embodiment provides a method for image equalization enhancement, which includes the following steps:
s1, collecting training samples, wherein the training samples comprise low-resolution image samples and high-resolution image samples;
s2, extracting image features from the low-resolution image sample based on a sharing module, and generating shallow image features;
s3, further extracting global feature information and detail feature information from the image shallow feature by a global guide mechanism in combination with a left-right asymmetric hyper-division network;
s4, generating an attention mask for the image shallow feature based on a mask network;
s5, utilizing the attention mask to adaptively reconstruct the global characteristic information and the detail characteristic information and reconstruct a high-resolution image;
s6, reversely converging the reconstructed high-resolution image and the high-resolution image sample based on a preset loss function, and establishing an image super-resolution model;
and S7, processing the low-resolution image through the image super-resolution model and outputting a high-resolution image.
Wherein, the steps S1-S6 are included in the training method of the image super-resolution model.
Optionally, the step S1 includes:
s11, collecting high-resolution image samples, and adopting an image degradation algorithm to carry out down-sampling on the high-resolution image samples to generate low-resolution image samples;
and S12, establishing paired training samples by using the high-resolution image samples and the generated low-resolution image samples.
Optionally, the shared module, the left-right asymmetric hyper-division network, and the mask network all use a residual dense connection block as a basic block to extract multi-level feature information of the low-resolution image.
Optionally, the shared module is provided with 2 residual dense connection blocks, and the step S2 includes:
s21, before the low-resolution image sample is input into the sharing module, the low-resolution image sample is processed by a convolutional layer to obtain a shallow feature map;
s22, inputting the shallow feature map into the sharing module, extracting image features, and generating image shallow features;
and S23, sharing the shallow feature of the obtained image by the left-right asymmetric hyper-division network and the mask network.
Optionally, the left-right asymmetric hyper-division network is composed of a low-frequency branch and a high-frequency branch, the low-frequency branch is used for extracting global feature information, and the high-frequency branch is used for extracting detail feature information, where step S3 includes:
s31, extracting global feature information and detail feature information from the image shallow feature through a low-frequency branch and a high-frequency branch by the left-right asymmetric hyper-division network;
s32, the global guide mechanism concatenates the global feature information extracted by the low-frequency branch with the detail feature information obtained by the low-level modules of the high-frequency branch;
s33, feeding the concatenated feature information into the high-level modules of the high-frequency branch, so that the global feature information is injected into the high-frequency branch and helps guide it to further extract detail feature information;
and S34, after extracting the global characteristic information and the detail characteristic information by the low-frequency branch and the high-frequency branch, reconstructing a global characteristic image and a detail characteristic image through respective up-sampling layers and reconstruction layers.
Optionally, the low-frequency branch is composed of a deep feature extraction module, an upsampling module and a reconstruction module, the deep feature extraction module is composed of 15 residual dense-connected blocks, the upsampling module is composed of 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module is composed of 1 convolutional layer;
the high-frequency branch is a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
Optionally, the mask network is composed of a deep feature extraction module, an upsampling module and a reconstruction module, wherein the deep feature extraction module is composed of 5 residual error dense connection blocks, the upsampling module is composed of 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module is composed of 1 convolutional layer;
the step S4 includes:
s41, the mask network further extracts deep features from the image shallow features through 5 residual error dense blocks;
s42, further extracting deep features, and generating a deep feature map through an upsampling layer and a reconstruction layer;
and S43, generating an attention mask by the deep feature map by using the sigmoid function.
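A minimal numerical sketch of step S43: the sigmoid function maps the deep feature map (denoted W_M in the detailed embodiment) to per-pixel probabilities in [0, 1], i.e. the attention mask:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 2x2 single-channel deep feature map; the sigmoid squashes
# it into the attention mask A, one probability per pixel.
w_m = np.array([[-2.0, 0.0],
                [0.0, 2.0]])
attention_mask = sigmoid(w_m)
```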
Optionally, the attention mask is a probability matrix representing a degree of contribution of each pixel in the detail feature image reconstructed by the high-frequency branch to a final output image.
The step S5 includes:
s51, the attention mask is combined with the high-frequency branch and the low-frequency branch to reconstruct a global feature image and a detail feature image, and the results of the high-frequency branch and the low-frequency branch are combined in a self-adaptive mode;
and S52, outputting the combined result through the last convolution layer to the final reconstructed high-resolution image.
Optionally, the preset loss function includes a loss function of a low-frequency branch and a loss function of a high-frequency branch of the left-right asymmetric hyper-division network;
the loss function of the low-frequency branch is used for calculating the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch is composed of a loss function of a discriminator and a loss function of a generator; and the loss function of the generator consists of mean absolute error, perceptual loss and adversarial loss, which are calculated between the detail feature image generated by the high-frequency branch and the high-resolution image sample.
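The composition of the generator loss can be sketched as a weighted sum of the three terms named above; the weights below are illustrative assumptions (ESRGAN-style), not values specified by this document:

```python
import numpy as np

def mean_absolute_error(sr: np.ndarray, hr: np.ndarray) -> float:
    # The L1 term between the generated detail image and the HR sample.
    return float(np.mean(np.abs(sr - hr)))

def generator_loss(sr, hr, perceptual, adversarial,
                   w_percep=1.0, w_adv=0.005, w_l1=0.01):
    # Weighted sum of mean absolute error, perceptual loss and
    # adversarial loss; the weights are assumed for illustration only.
    return (w_l1 * mean_absolute_error(sr, hr)
            + w_percep * perceptual
            + w_adv * adversarial)
```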
According to the method for image equalization enhancement provided by the embodiment of the invention, resolution processing is performed on the low-resolution image to be processed by an image super-resolution model established from low-resolution image samples, high-resolution image samples and a preset loss function. The method accurately and efficiently restores the low-resolution image to a high-resolution image, combines the advantages of evaluation-index-oriented methods and perception-driven methods, and further improves the restoration effect of single-image super-resolution.
Example 2
As shown in fig. 2, this embodiment provides a system for image equalization enhancement, which can implement the method provided in embodiment 1 above. The system comprises:
a training module 10, configured to train the image super-resolution model, where the training module 10 includes:
the sampling submodule 11 is used for collecting training samples, and the training samples comprise low-resolution image samples and high-resolution image samples;
the first extraction submodule 12 is configured to extract image features from the low-resolution image sample based on a sharing module, and generate shallow image features;
the second extraction submodule 13 is configured to further extract global feature information and detail feature information from the image shallow feature by using a global guidance mechanism in combination with a left-right asymmetric hyper-branched network;
a generation submodule 14, configured to generate an attention mask for the image shallow feature based on a mask network;
the reconstruction submodule 15 is configured to adaptively reconstruct the global feature information and the detail feature information by using the attention mask, and reconstruct a high-resolution image;
the model establishing submodule 16 is used for reversely converging the reconstructed high-resolution image and the high-resolution image sample based on a preset loss function and establishing an image super-resolution model;
and the output module 20 is used for processing the low-resolution image through the image super-resolution model and outputting a high-resolution image.
Optionally, the sampling submodule 11 includes:
the sampling unit is used for carrying out downsampling on the high-resolution image sample by adopting an image degradation algorithm to generate a low-resolution image sample;
and the sample establishing unit is used for establishing a pair of training samples by using the high-resolution image sample and the generated low-resolution image sample.
Optionally, the shared module, the left-right asymmetric hyper-division network, and the mask network all use a residual dense connection block as a basic block to extract multi-level feature information of the low-resolution image.
Optionally, the shared module is provided with 2 residual error dense connection blocks, and the first extraction sub-module 12 includes:
the first input subunit is used for obtaining a shallow feature map by passing the low-resolution image sample through a convolutional layer before inputting the shared module;
the second input subunit is used for inputting the shallow feature map into the sharing module, extracting image features and generating image shallow features;
and the sharing subunit is used for sharing the obtained image shallow feature by the left-right asymmetric hyper-division network and the mask network.
Optionally, the left-right asymmetric hyper-division network is composed of a low-frequency branch and a high-frequency branch, the low-frequency branch is used for extracting global feature information, the high-frequency branch is used for extracting detail feature information, and the second extraction sub-module 13 includes:
the first extraction subunit is used for extracting global feature information and detail feature information from the image shallow features through a low-frequency branch and a high-frequency branch of the left-right asymmetric hyper-division network;
the cascade subunit is used for a global guide mechanism to cascade the global feature information extracted by the low-frequency branch and the detail feature information obtained by the low-layer module of the high-frequency branch;
the second extraction subunit is used for continuously inputting the feature information after the serial connection into the high-level module of the high-frequency branch, so that the global feature information is injected into the high-frequency branch to assist in guiding the high-frequency branch to further obtain the detailed feature information;
and the first output subunit is used for reconstructing a global characteristic image and a detail characteristic image through respective up-sampling layers and reconstruction layers after the global characteristic information and the detail characteristic information are extracted by the low-frequency branch and the high-frequency branch.
Optionally, the low-frequency branch comprises a deep feature extraction module, an upsampling module and a reconstruction module, wherein the deep feature extraction module comprises 15 residual dense connection blocks, the upsampling module comprises 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module comprises 1 convolutional layer;
the high-frequency branch is a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
Optionally, the mask network comprises a deep feature extraction module, an upsampling module and a reconstruction module, wherein the deep feature extraction module comprises 5 residual error dense connection blocks, the upsampling module comprises 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module comprises 1 convolutional layer;
the generation submodule 14 includes:
a third extraction subunit, configured to, by the mask network, further extract deep features from the shallow features of the image through 5 residual dense blocks;
the second output subunit is used for further extracting the deep features and generating a deep feature map through the up-sampling layer and the reconstruction layer;
and generating an attention subunit for the deep feature map to generate an attention mask by using the sigmoid function.
Optionally, the attention mask is a probability matrix representing the degree of contribution of each pixel in the detail feature image reconstructed by the high-frequency branch to the final output image;
the reconstruction sub-module 15 includes:
the reconstruction subunit is used for reconstructing a global feature image and a detail feature image by combining the high-frequency branch and the low-frequency branch and adaptively combining the results of the high-frequency branch and the low-frequency branch;
and the third output subunit is used for passing the combined result through a final convolutional layer to output the reconstructed high-resolution image.
Optionally, the preset loss function includes a loss function of a low-frequency branch and a loss function of a high-frequency branch of the left-right asymmetric hyper-division network;
the loss function of the low-frequency branch is used for calculating the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch is composed of a loss function of a discriminator and a loss function of a generator; and the loss function of the generator consists of mean absolute error, perceptual loss and adversarial loss, which are calculated between the detail feature image generated by the high-frequency branch and the high-resolution image sample.
According to the image equalization enhancement system provided by the embodiment of the invention, resolution processing is performed on the low-resolution image to be processed by an image super-resolution model established from low-resolution image samples, high-resolution image samples and a preset loss function. The system accurately and efficiently restores the low-resolution image to a high-resolution image, combines the advantages of evaluation-index-oriented methods and perception-driven methods, and further improves the restoration effect of single-image super-resolution.
Example 3
This embodiment provides an apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the steps of the method for image equalization enhancement described in embodiment 1 above.
Example 4
A storage medium storing a processor-executable program which, when executed by the processor, performs the steps of the method for image equalization enhancement described in embodiment 1.
Example 5
Referring to fig. 3 to 4, a method for image equalization enhancement specifically includes the following steps:
A. collecting training samples, wherein the training samples comprise low-resolution image samples and high-resolution image samples;
B. establishing an image super-resolution model according to the acquired training samples;
C. supervising the network output with the high-resolution images in the training samples, setting a loss function, and training the network end to end; after a certain number of iterations the network parameters are updated, and the network is trained until convergence;
D. inputting the low-resolution image to be restored into the trained network model and outputting a high-resolution image.
The specific implementation of step A is as follows:
the public large-scale image dataset DIV2K, DIV2K, was obtained containing 800, 100 and 100 images of 2K resolution as training dataset, validation set, test set, respectively. And (3) carrying out double-three down-sampling on the original high-resolution image by 4 times by using an 'imresize' function of MATLAB to obtain a corresponding low-resolution image, and forming paired training data. Horizontal or vertical flipping, 90 ° rotation is used as a way of data enhancement.
The specific implementation of step B is as follows:
B1, a low-resolution image is selected and image blocks of size 32 × 32 are randomly cropped as the network input. A 3 × 3 convolutional layer H_LR extracts a shallow feature map F_0 from the input low-resolution image I_LR. The resulting feature map contains 64 channels and has the same spatial size as the input image. Two residual dense connection blocks (RRDBs) are used as the shared module H_SM to further extract shallow features from F_0. As shown in fig. 2, each residual dense connection block contains 3 densely connected blocks. Each densely connected block consists of 5 convolutional layers with a channel growth of 32 inside the block; the output of each convolutional layer is passed, through multiple skip connections, to the subsequent convolutional layers in the block as additional input. This step can be expressed as:
F_0 = H_LR(I_LR)
F_SF = H_SM(F_0)
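The channel bookkeeping inside one densely connected block (64 base channels, growth 32, 5 convolutional layers, as stated above) can be sketched as:

```python
def dense_block_in_channels(base: int = 64, growth: int = 32,
                            n_convs: int = 5) -> list:
    # Each conv layer receives the block input plus the outputs of all
    # previous layers, so its input width grows by `growth` per layer;
    # the last layer conventionally maps back down to `base` channels.
    return [base + i * growth for i in range(n_convs)]
```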
B2, the left-right asymmetric hyper-division network uses a high-frequency branch HFB and a low-frequency branch LFB to extract detail information and global information, respectively, from the shallow features F_SF output by the shared module H_SM. The low-frequency branch comprises a deep feature extraction module, an up-sampling module and a reconstruction module; its deep feature extraction module contains 15 RRDBs, the up-sampling module consists of one 3 × 3 convolutional layer and one nearest-neighbor up-sampling layer, and the reconstruction module is one 3 × 3 convolutional layer. The high-frequency branch is a generative adversarial network comprising a generator and a discriminator. The generator has a structure similar to the low-frequency branch: a deep feature extraction module with 15 RRDBs, an up-sampling module consisting of one 3 × 3 convolutional layer and one nearest-neighbor up-sampling layer, and a reconstruction module of one 3 × 3 convolutional layer. A relativistic average discriminator (RaD) is chosen as the discriminator of the high-frequency branch. These two branches effectively extract the global and detail information of the image, facilitating the subsequent reconstruction of a high-resolution image.
B3, using the global guidance mechanism, the output feature map of the 10th RRDB in the low-frequency branch LFB guides the subsequent feature extraction after the 5th RRDB in the high-frequency branch HFB. The 10th RRDB output feature map of the LFB is concatenated with the 5th RRDB output feature map of the HFB; a 3 × 3 convolutional layer compresses the channels back to 64, and the result is fed into the HFB for subsequent feature extraction. Injecting the global information of the low-frequency branch into the high-frequency branch facilitates fine-grained reconstruction in the high-frequency branch.
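A shape-level sketch of this guidance step: two 64-channel feature maps are concatenated to 128 channels and compressed back to 64. For brevity a 1 × 1 convolution (a plain channel-mixing matrix) stands in for the 3 × 3 convolutional layer, which is an assumption of this sketch:

```python
import numpy as np

def global_guidance(lfb_feat, hfb_feat, weight):
    # Concatenate the LFB and HFB feature maps along the channel axis,
    # then mix channels back down to 64 (1x1-conv stand-in).
    cat = np.concatenate([lfb_feat, hfb_feat], axis=0)   # (128, H, W)
    return np.einsum('oc,chw->ohw', weight, cat)         # (64, H, W)

lfb = np.random.rand(64, 8, 8)   # 10th RRDB output of the LFB
hfb = np.random.rand(64, 8, 8)   # 5th RRDB output of the HFB
w = np.random.rand(64, 128)      # channel-compression weights
guided = global_guidance(lfb, hfb, w)
```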
B4, a mask network generates an attention mask for adaptively reconstructing the final output image, achieving a better trade-off between reconstruction accuracy and perceptual quality. The mask network takes the shallow features F_SF output by the shared module as input and comprises a deep feature extraction module, an up-sampling module, a reconstruction module and a sigmoid function; the deep feature extraction module contains 5 RRDBs, the up-sampling module consists of one 3 × 3 convolutional layer and one nearest-neighbor up-sampling layer, and the reconstruction module is one 3 × 3 convolutional layer. F_SF passes through the 5 RRDBs, the up-sampling module and the 3 × 3 convolutional layer to obtain the feature map W_M; this composite operation is denoted H_mask. The process can be expressed as:
W_M = H_mask(F_SF)
The feature map W_M is then processed into a probability matrix by the sigmoid function, i.e., the attention mask A. This process can be expressed as:
A = σ(W_M)
Let f_high(F_SF) denote the detail feature map reconstructed by the high-frequency branch HFB, and f_low(F_SF) denote the global feature map reconstructed by the low-frequency branch LFB. The attention mask A indicates the degree to which each pixel of f_high(F_SF) contributes to the final output image. The feature maps of the low-frequency and high-frequency branches are fused using the attention mask A as follows:
I_y = (1 − A) · f_low(F_SF) + A · f_high(F_SF)
In this way, the mask network learns the weight of each pixel in the feature map and adaptively combines the results of the high-frequency and low-frequency branches; a final 3 × 3 convolutional layer outputs a high-resolution image with 3 channels.
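The fusion rule I_y = (1 − A) · f_low(F_SF) + A · f_high(F_SF) is a per-pixel convex combination of the two branch outputs; a minimal numerical check:

```python
import numpy as np

def fuse(f_low: np.ndarray, f_high: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Pixels with mask near 1 take the high-frequency (detail) branch;
    # pixels with mask near 0 take the low-frequency (global) branch.
    return (1.0 - mask) * f_low + mask * f_high

f_low = np.zeros((4, 4, 3))
f_high = np.ones((4, 4, 3))
mask = np.full((4, 4, 1), 0.25)
fused = fuse(f_low, f_high, mask)
```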
The specific implementation of step C is as follows:
The MSE loss function is used as the loss function of the low-frequency branch, calculating the mean square error between the high-resolution image f_low(F_SF) generated by the low-frequency branch and the true high-resolution image I_HR in the sample. The loss function of the high-frequency branch is composed of the loss function of the discriminator and the loss function of the generator. The relativistic average discriminator (RaD) is used as the discriminator of the high-frequency branch; an RaD output closer to 1 means that the real image x_r is more realistic than the fake image x_f. The loss function of the discriminator is defined as follows:

L_D^Ra = −E_{x_r}[log(D_Ra(x_r, x_f))] − E_{x_f}[log(1 − D_Ra(x_f, x_r))]
where D_Ra(x_r, x_f) = σ(C(x_r) − E_{x_f}[C(x_f)]), C(·) denotes the output of the discriminator, E_{x_f}[·] denotes averaging over all fake data in a mini-batch, and σ(·) is the sigmoid function. The loss function of the generator is composed of the mean absolute error, the perceptual loss and the adversarial loss. The L_1 loss function constrains the generated image to be closer to the true image, and is given by:

L_1 = (1 / (W·H·C)) Σ_i ||G_θ(I_i) − I_i^HR||_1
where W, H and C denote the width, height and number of channels, respectively, G_θ(·) denotes the generator function, θ denotes the generator parameters, and I_i denotes the i-th image. The perceptual loss measures the perceptual similarity between the SR image and the corresponding HR image by minimizing the distance between high-level features extracted, before the activation layer, by a pre-trained network. The SR image and the HR image are used as inputs to the pre-trained network VGG19. The perceptual loss is given by:

L_percep = Σ_i ||φ(G(I_i)) − φ(I_i^HR)||_1
where φ(·) denotes the feature-extraction function of the pre-trained network VGG19, I_i denotes the i-th image, and G(·) denotes the generator function. The adversarial loss of the generator takes a form symmetric to the loss function of the discriminator:

L_G^Ra = −E_{x_r}[log(1 − D_Ra(x_r, x_f))] − E_{x_f}[log(D_Ra(x_f, x_r))]
where D_Ra(·, ·) denotes the relativistic average discriminator output, C(·) denotes the output of the discriminator network, E[·] denotes averaging over the corresponding data in a mini-batch, and σ(·) is the sigmoid function.
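Under the standard relativistic-average formulation described in the text (an assumption, since the original formula images did not survive extraction), the two losses can be sketched on raw discriminator outputs C(·):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_ra(c_x, c_y_mean):
    # D_Ra(x, y) = sigma(C(x) - E[C(y)])
    return sigmoid(c_x - c_y_mean)

def discriminator_loss(c_real, c_fake):
    d_rf = d_ra(c_real, c_fake.mean())   # real relative to average fake
    d_fr = d_ra(c_fake, c_real.mean())   # fake relative to average real
    return -np.mean(np.log(d_rf)) - np.mean(np.log(1.0 - d_fr))

def generator_adv_loss(c_real, c_fake):
    # Symmetric form of the discriminator loss, as stated in the text.
    d_rf = d_ra(c_real, c_fake.mean())
    d_fr = d_ra(c_fake, c_real.mean())
    return -np.mean(np.log(1.0 - d_rf)) - np.mean(np.log(d_fr))
```

When the discriminator separates real from fake well (real outputs high, fake outputs low), the discriminator loss is small and the generator's adversarial loss is large, which is the expected training signal.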
The training process is divided into two stages. First, we pre-train a PSNR-oriented model with all branches using the MSE loss; the trained PSNR-oriented model is then used to initialize the HFB network. Second, we train the HFB in an adversarial manner, while continuing to use the L1 loss to update the LFB and the mask-network branch until the model converges.
During training, the batch size is set to 32 and the initial learning rate to 10^-4. During iterative training, the learning rate is halved every 2 × 10^5 iterations, according to the convergence of the network. The invention uses the ADAM optimizer for gradient back-propagation, with ADAM parameters β_1 = 0.9, β_2 = 0.999 and ε = 10^-8.
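The step-decay schedule above (initial rate 10^-4, halved every 2 × 10^5 iterations) can be sketched as:

```python
def learning_rate(step: int, base_lr: float = 1e-4,
                  halve_every: int = 200_000) -> float:
    # Halve the learning rate once every `halve_every` iterations.
    return base_lr * 0.5 ** (step // halve_every)
```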
The specific implementation of step D is as follows:
The 100 test images of DIV2K and the images of the benchmark datasets Set5, Set14, BSD100, Urban100 and Manga109 are input in turn into the previously trained network model, which outputs the corresponding high-resolution images.
In summary, the method, system, apparatus and storage medium for image equalization enhancement provided by the embodiments of the present invention perform resolution processing on a low-resolution image to be processed using an image super-resolution model established from low-resolution image samples, high-resolution image samples and a preset loss function. They accurately and efficiently restore the low-resolution image to a high-resolution image, combine the advantages of evaluation-index-oriented methods and perception-driven methods, and further improve the restoration effect of single-image super-resolution.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (12)
1. A method for image equalization enhancement, comprising:
the training method of the image super-resolution model comprises the following steps:
collecting training samples, wherein the training samples comprise low-resolution image samples and high-resolution image samples;
extracting image features of the low-resolution image sample based on a sharing module to generate image shallow features;
a global guide mechanism further extracts global feature information and detail feature information from the image shallow feature by combining a left-right asymmetric hyper-division network;
generating an attention mask for the image shallow feature based on a mask network;
adaptively reconstructing the global characteristic information and the detail characteristic information by using the attention mask to reconstruct a high-resolution image;
reversely converging the reconstructed high-resolution image and the high-resolution image sample based on a preset loss function, and establishing an image super-resolution model;
and processing the low-resolution image through the image super-resolution model, and outputting a high-resolution image.
2. The method of image equalization enhancement according to claim 1, wherein said collecting training samples, said training samples comprising low resolution image samples and high resolution image samples, comprises:
collecting a high-resolution image sample, and performing down-sampling on the high-resolution image sample by adopting an image degradation algorithm to generate a low-resolution image sample;
and establishing pairs of training samples by using the high-resolution image samples and the generated low-resolution image samples.
3. The method of claim 1, wherein the shared module, the left-right asymmetric hyper-division network and the mask network are all based on residual dense connection blocks to extract multi-level feature information of a low resolution image.
4. The method of image equalization enhancement according to claim 3, wherein the shared module is provided with 2 residual error dense connection blocks;
the extracting image features from the low-resolution image sample based on a sharing module to generate image shallow features comprises:
before the low-resolution image sample is input into the sharing module, the low-resolution image sample is subjected to a layer of convolution layer to obtain a shallow characteristic diagram;
inputting the shallow feature map into the sharing module, extracting image features and generating image shallow features;
the obtained image shallow feature is shared by the left-right asymmetric hyper-division network and the mask network.
5. The method for image equalization enhancement according to claim 4, wherein the left-right asymmetric hyper-division network is composed of a low-frequency branch and a high-frequency branch, the low-frequency branch is used for extracting global feature information, and the high-frequency branch is used for extracting detail feature information;
the global guide mechanism further extracts global feature information and detail feature information from the image shallow feature by combining the left-right asymmetric hyper-division network, and the method comprises the following steps:
the left-right asymmetric hyper-division network extracts global feature information and detail feature information from the image shallow features through a low-frequency branch and a high-frequency branch;
the global guide mechanism connects the global characteristic information extracted by the low-frequency branch with the detail characteristic information obtained by the low-layer module of the high-frequency branch in series;
the feature information after series connection is continuously input into a high-level module of the high-frequency branch, so that the global feature information is injected into the high-frequency branch to assist in guiding the high-frequency branch to further obtain the detailed feature information;
after the low-frequency branch and the high-frequency branch extract the global characteristic information and the detail characteristic information, the global characteristic image and the detail characteristic image are reconstructed through respective up-sampling layers and reconstruction layers.
6. The method of image equalization enhancement according to claim 5, wherein the low frequency branch is composed of a deep feature extraction module, an upsampling module and a reconstruction module, wherein the deep feature extraction module is composed of 15 residual dense-connected blocks, the upsampling module is composed of 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module is composed of 1 convolutional layer;
the high-frequency branch is a generative adversarial network composed of a generator and a discriminator, wherein the generator consists of a deep feature extraction module, an up-sampling module and a reconstruction module, and the discriminator is a classification network.
7. The method of image equalization enhancement according to claim 6, wherein the mask network is composed of a deep feature extraction module, an upsampling module and a reconstruction module, wherein the deep feature extraction module is composed of 5 residual error dense-connected blocks, the upsampling module is composed of 1 convolutional layer and 1 nearest neighbor upsampling layer, and the reconstruction module is composed of 1 convolutional layer;
the generating an attention mask for the image shallow feature based on the mask network comprises:
the mask network further extracts deep features from the shallow features of the image through 5 residual error dense blocks;
further extracting deep features, and generating a deep feature map through an upper sampling layer and a reconstruction layer;
the deep feature map is used for generating an attention mask by using a sigmoid function.
8. The method of image equalization enhancement according to claim 7, wherein the attention mask is a probability matrix representing the degree of contribution of each pixel in the detail feature image reconstructed by the high-frequency branch to the final output image;
the adaptively reconstructing the global feature information and the detail feature information by using the attention mask to reconstruct the high resolution image includes:
the attention mask is combined with the high-frequency branch and the low-frequency branch to reconstruct a global characteristic image and a detail characteristic image, and the results of the high-frequency branch and the low-frequency branch are combined in a self-adaptive mode;
and the combined result is passed through a final convolutional layer to output the reconstructed high-resolution image.
9. The method of image equalization enhancement according to claim 8, wherein the preset loss function comprises a loss function of a low frequency branch and a loss function of a high frequency branch of a left-right asymmetric hyper-division network;
the loss function of the low-frequency branch is used for calculating the mean square error between the global feature image generated by the low-frequency branch and the high-resolution image sample;
the loss function of the high-frequency branch is composed of a loss function of a discriminator and a loss function of a generator; and the loss function of the generator consists of mean absolute error, perceptual loss and adversarial loss, which are calculated between the detail feature image generated by the high-frequency branch and the high-resolution image sample.
10. A system for image equalization enhancement, comprising:
the training module is used for training the image super-resolution model and comprises:
the sampling submodule is used for collecting training samples, and the training samples comprise low-resolution image samples and high-resolution image samples;
the first extraction submodule is used for extracting image features from the low-resolution image sample based on a shared module and generating the image shallow feature;
the second extraction submodule is used for further extracting global feature information and detail feature information from the image shallow feature by combining an asymmetric super-resolution network with a global guidance mechanism;
the generation submodule is used for generating an attention mask for the image shallow feature based on a mask network;
the reconstruction submodule is used for adaptively reconstructing the global feature information and the detail feature information by using the attention mask to reconstruct a high-resolution image;
the model establishing submodule is used for converging the reconstructed high-resolution image toward the high-resolution image sample through back-propagation based on a preset loss function, thereby establishing the image super-resolution model;
and the output module is used for processing the low-resolution image through the image super-resolution model and outputting a high-resolution image.
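The module chain of the system in claim 10 mirrors the method steps and can be sketched as a single forward pass. The six callables below are hypothetical placeholders for the trained sub-networks; their names are illustrative and do not come from the patent.

```python
import numpy as np

def super_resolve(lr_image, shared, low_branch, high_branch, mask_net,
                  fuse, final_conv):
    """Forward pass following the submodule order of claim 10."""
    shallow = shared(lr_image)      # shared module: image shallow feature
    glob = low_branch(shallow)      # low-frequency branch: global features
    detail = high_branch(shallow)   # high-frequency branch: detail features
    mask = mask_net(shallow)        # mask network: attention mask
    return final_conv(fuse(glob, detail, mask))

# toy stand-ins: identity "networks" and a mask-weighted fusion
out = super_resolve(
    np.ones((2, 2)),
    shared=lambda x: x,
    low_branch=lambda x: 0.0 * x,
    high_branch=lambda x: x,
    mask_net=lambda x: np.full_like(x, 0.5),
    fuse=lambda g, d, m: m * d + (1.0 - m) * g,
    final_conv=lambda x: x,
)
```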
11. An apparatus comprising a memory for storing at least one program and a processor for loading the at least one program to perform the method of any one of claims 1-9.
12. A storage medium storing a processor-executable program, wherein the program, when executed by a processor, performs the method of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110681163.0A CN113421188B (en) | 2021-06-18 | 2021-06-18 | Method, system, device and storage medium for image equalization enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113421188A true CN113421188A (en) | 2021-09-21 |
CN113421188B CN113421188B (en) | 2024-01-05 |
Family
ID=77789242
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110681163.0A Active CN113421188B (en) | 2021-06-18 | 2021-06-18 | Method, system, device and storage medium for image equalization enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113421188B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114781601A (en) * | 2022-04-06 | 2022-07-22 | 北京科技大学 | Image super-resolution method and device |
CN114897677A (en) * | 2022-03-28 | 2022-08-12 | 北京航空航天大学 | Unsupervised remote sensing image super-resolution reconstruction method based on constrained reconstruction |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180075581A1 (en) * | 2016-09-15 | 2018-03-15 | Twitter, Inc. | Super resolution using a generative adversarial network |
CN109345449A (en) * | 2018-07-17 | 2019-02-15 | 西安交通大学 | A kind of image super-resolution based on converged network and remove non-homogeneous blur method |
CN109410239A (en) * | 2018-11-07 | 2019-03-01 | 南京大学 | A kind of text image super resolution ratio reconstruction method generating confrontation network based on condition |
CN110136063A (en) * | 2019-05-13 | 2019-08-16 | 南京信息工程大学 | A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition |
CN110675321A (en) * | 2019-09-26 | 2020-01-10 | 兰州理工大学 | Super-resolution image reconstruction method based on progressive depth residual error network |
CN111476745A (en) * | 2020-01-13 | 2020-07-31 | 杭州电子科技大学 | Multi-branch network and method for motion blur super-resolution |
CN111583109A (en) * | 2020-04-23 | 2020-08-25 | 华南理工大学 | Image super-resolution method based on generation countermeasure network |
CN112561799A (en) * | 2020-12-21 | 2021-03-26 | 江西师范大学 | Infrared image super-resolution reconstruction method |
CN112699844A (en) * | 2020-04-23 | 2021-04-23 | 华南理工大学 | Image super-resolution method based on multi-scale residual error level dense connection network |
CN112862689A (en) * | 2021-03-09 | 2021-05-28 | 南京邮电大学 | Image super-resolution reconstruction method and system |
Non-Patent Citations (1)
Title |
---|
YULUN ZHANG: "Residual Dense Network for Image Super-Resolution", ARXIV, Retrieved from the Internet <URL:http://arxiv.org/abs/1802.08797> *
Also Published As
Publication number | Publication date |
---|---|
CN113421188B (en) | 2024-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109816593B (en) | Super-resolution image reconstruction method for generating countermeasure network based on attention mechanism | |
CN113139907B (en) | Generation method, system, device and storage medium for visual resolution enhancement | |
CN113658051B (en) | Image defogging method and system based on cyclic generation countermeasure network | |
CN107154023B (en) | Based on the face super-resolution reconstruction method for generating confrontation network and sub-pix convolution | |
CN112348743B (en) | Image super-resolution method fusing discriminant network and generation network | |
Wang et al. | Laplacian pyramid adversarial network for face completion | |
CN109118495B (en) | Retinal vessel segmentation method and device | |
CN111353940B (en) | Image super-resolution reconstruction method based on deep learning iterative up-down sampling | |
CN110136062B (en) | Super-resolution reconstruction method combining semantic segmentation | |
CN111242999B (en) | Parallax estimation optimization method based on up-sampling and accurate re-matching | |
CN105023240A (en) | Dictionary-type image super-resolution system and method based on iteration projection reconstruction | |
CN113421188A (en) | Method, system, device and storage medium for image equalization enhancement | |
CN113222825B (en) | Infrared image super-resolution reconstruction method based on visible light image training and application | |
CN111861886B (en) | Image super-resolution reconstruction method based on multi-scale feedback network | |
CN105488759B (en) | A kind of image super-resolution rebuilding method based on local regression model | |
Yang et al. | Image super-resolution based on deep neural network of multiple attention mechanism | |
CN114339409A (en) | Video processing method, video processing device, computer equipment and storage medium | |
CN112184547B (en) | Super resolution method of infrared image and computer readable storage medium | |
Shao et al. | Uncertainty-guided hierarchical frequency domain transformer for image restoration | |
Gao et al. | Bayesian image super-resolution with deep modeling of image statistics | |
Liu et al. | Facial image inpainting using attention-based multi-level generative network | |
CN110415169A (en) | A kind of depth map super resolution ratio reconstruction method, system and electronic equipment | |
Karthick et al. | Deep regression network for the single image super resolution of multimedia text image | |
CN113379606B (en) | Face super-resolution method based on pre-training generation model | |
Xing et al. | Digital rock resolution enhancement and detail recovery with multi attention neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||