CN113240581A - Real world image super-resolution method for unknown fuzzy kernel - Google Patents
- Publication number
- CN113240581A CN113240581A CN202110381837.5A CN202110381837A CN113240581A CN 113240581 A CN113240581 A CN 113240581A CN 202110381837 A CN202110381837 A CN 202110381837A CN 113240581 A CN113240581 A CN 113240581A
- Authority
- CN
- China
- Prior art keywords
- resolution
- image
- fuzzy
- noise
- fuzzy kernel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 33
- 230000009466 transformation Effects 0.000 claims abstract description 11
- 238000012545 processing Methods 0.000 claims abstract description 8
- 238000013507 mapping Methods 0.000 claims abstract description 5
- 238000012549 training Methods 0.000 claims description 18
- 238000006073 displacement reaction Methods 0.000 claims description 4
- 230000000593 degrading effect Effects 0.000 claims description 2
- 238000001914 filtration Methods 0.000 claims description 2
- 238000012360 testing method Methods 0.000 abstract description 6
- 238000002347 injection Methods 0.000 abstract description 3
- 239000007924 injection Substances 0.000 abstract description 3
- 230000008447 perception Effects 0.000 abstract description 3
- 238000011084 recovery Methods 0.000 abstract 1
- 238000006731 degradation reaction Methods 0.000 description 7
- 238000010586 diagram Methods 0.000 description 7
- 230000015556 catabolic process Effects 0.000 description 6
- 230000000694 effects Effects 0.000 description 4
- 238000002474 experimental method Methods 0.000 description 4
- 238000005070 sampling Methods 0.000 description 4
- 238000013527 convolutional neural network Methods 0.000 description 3
- 238000013135 deep learning Methods 0.000 description 3
- 238000013461 design Methods 0.000 description 3
- 238000000605 extraction Methods 0.000 description 3
- 238000011176 pooling Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000004445 quantitative analysis Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 2
- 239000000243 solution Substances 0.000 description 2
- 230000016776 visual perception Effects 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000004438 eyesight Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a real-world image super-resolution method for unknown blur kernels, comprising a predictor and a corrector: the predictor makes an initial estimate of the blur kernel, and the corrector refines it to obtain accurate blur-kernel information. Low-resolution real-world images are constructed by blurring high-resolution images and injecting noise. A new super-resolution network structure applies a spatial feature transform to each convolutional layer, conditioned on the blur-kernel feature map, to improve the network's ability to handle differently blurred images. The nonlinear mapping stages are connected in a residual dense block structure and integrated into a generative adversarial framework to strengthen the recovery of texture detail. The proposed method improves markedly on other methods: test results on the DF2K and DIV2K data sets show that its peak signal-to-noise ratio, structural similarity, and perceptual index exceed those of classical methods such as EDSR and ESRGAN.
Description
Technical Field
The invention belongs to the technical field of graphic image processing, and particularly relates to a real-world image super-resolution method for unknown blur kernels.
Background
Super-resolution reconstruction (SR) is a classic problem in computer vision. It aims to reconstruct, from a low-resolution image, a high-resolution image with accurate low-frequency information and rich high-frequency texture detail, and it is widely applied in surveillance, satellite remote sensing, digital high definition, microscopic imaging, video coding and communication, video restoration, medical imaging, and related fields. Since SRCNN pioneered the use of deep learning for the image super-resolution problem, the area has developed significantly. Building on the relation between deep learning and traditional sparse coding, such methods divide the network into three stages, low-resolution feature extraction, nonlinear mapping of feature maps, and image reconstruction, realizing end-to-end learning from low-resolution to high-resolution images. To obtain end-to-end training pairs, traditional super-resolution methods downsample the high-resolution image to a low-resolution image with bicubic interpolation, namely:
I_LR = (I_HR)↓_bic
where I_HR is the high-resolution image and I_LR is the low-resolution image. Although the degradation process of a real-world image is unknown, it can be understood to contain a blur kernel and noise, that is:
I_LR = (I_HR * k)↓_s + n
where k, n, and s denote the blur kernel, the noise, and the down-sampling scale, respectively. Solving for an accurate blur kernel and noise is therefore the key to simulating accurate low-resolution images. One line of work designs a degradation model in place of blur-kernel estimation and introduces a plug-and-play module through a variable-splitting technique to realize image restoration; the proposed degradation parameter model is more realistic, considers arbitrary blur kernels, and introduces a new idea, namely that existing deblurring methods can be used to estimate the blur kernel. KMSR generates blur kernels with the generative adversarial network WGAN-GP and stores them in a kernel pool; kernels sampled from the pool are used to construct a paired LR-HR training data set, and super-resolution is then performed by an existing deep convolutional neural network. The idea behind KMSR has strong practical significance: it first proposed that real-world blurred images, which are difficult to acquire, can be generated from high-resolution images combined with blur kernels. However, the accuracy of its blur-kernel estimation is low, and images produced by the generative adversarial network inevitably contain artifacts, so the images KMSR simulates for super-resolution cannot fully match the real world. RealSR creates training data in a similar way, collecting blur kernels with KernelGAN and collecting noise into a degradation pool, but prediction with a single blur kernel carries a large error, and its noise extraction is too coarse.
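The degradation model above can be sketched directly. The following is a minimal single-channel illustration, assuming a toy uniform blur kernel and plain sub-sampling in place of a learned kernel and a specific down-sampler:

```python
import numpy as np

def degrade(hr, kernel, scale, noise_sigma, rng):
    """Sketch of I_LR = (I_HR * k)v_s + n for one single-channel image.

    hr: 2-D float array; kernel: 2-D blur kernel summing to 1;
    scale: integer down-sampling factor; noise_sigma: noise std-dev.
    """
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(hr, ((pad_h, pad_h), (pad_w, pad_w)), mode="edge")
    blurred = np.zeros_like(hr)
    for i in range(kh):                      # direct 2-D convolution
        for j in range(kw):
            blurred += kernel[i, j] * padded[i:i + hr.shape[0], j:j + hr.shape[1]]
    lr = blurred[::scale, ::scale]           # v_s: plain sub-sampling for simplicity
    return lr + rng.normal(0.0, noise_sigma, lr.shape)   # + n

rng = np.random.default_rng(0)
hr = rng.random((32, 32))
k = np.ones((5, 5)) / 25.0                   # toy uniform blur kernel (an assumption)
lr = degrade(hr, k, scale=4, noise_sigma=0.01, rng=rng)
print(lr.shape)  # (8, 8)
```

In practice k and n would be drawn from the extracted kernel and noise sets rather than fixed by hand.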
Disclosure of Invention
To address the defects of the prior art, the technical problem to be solved by the invention is to provide a real-world image super-resolution method for unknown blur kernels, in which the generated images are free of blur and noise, and to provide and establish a real-world image super-resolution data set.
In order to solve the technical problems, the invention is realized by the following technical scheme:
the invention provides a real world image super-resolution method aiming at an unknown fuzzy core, which comprises the following steps:
Step 1: denoise a real-world high-resolution image: remove noise through bicubic downsampling while preserving the important low-frequency information, and treat the resulting noise-free image as the high-resolution training target HR;
Step 2: design a blur-kernel extractor that obtains the blur-kernel information contained in an image by simulating real-world images, and store it in a blur-kernel set K;
Step 3: design a noise extractor that computes the covariance between the network input image and the super-resolution result, filters out the noise whose covariance exceeds a threshold, and stores it in a noise set N;
Step 4: randomly draw information from the blur-kernel set K and the noise set N, and degrade the high-resolution image to obtain the low-resolution image LR used to train the network;
Step 5: design a super-resolution network structure with a spatial feature transform layer embedded in the residual dense block structure; apply a spatial feature transform to the feature map according to the input blur-kernel information, then perform nonlinear mapping to obtain a high-resolution feature map, and, after globally connecting the outputs of the basic blocks, output a high-resolution RGB image through a three-channel convolution.
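Step 3 above can be read in more than one way; the patent gives no exact algorithm. The following is one hypothetical patch-based sketch: per patch, the covariance between the network input and the super-resolution result is computed, and where it exceeds a threshold the residual is stored as a noise sample. The patch size and threshold are illustrative assumptions:

```python
import numpy as np

def extract_noise_patches(lr, sr, patch=8, thresh=0.01):
    """Hypothetical noise-extractor sketch (patch size and threshold assumed):
    keep the LR-minus-SR residual wherever the two patches covary strongly."""
    noise_set = []
    for i in range(0, lr.shape[0] - patch + 1, patch):
        for j in range(0, lr.shape[1] - patch + 1, patch):
            a = lr[i:i + patch, j:j + patch].ravel()
            b = sr[i:i + patch, j:j + patch].ravel()
            if np.cov(a, b)[0, 1] > thresh:      # covariance above threshold
                noise_set.append((a - b).reshape(patch, patch))
    return noise_set

rng = np.random.default_rng(1)
sr = rng.random((32, 32))
lr = sr + rng.normal(0.0, 0.02, sr.shape)        # toy input: SR result plus noise
noise_samples = extract_noise_patches(lr, sr)
```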
Further, the spatial feature transform is: SFT(F, K) = γ ⊙ F + β
where γ represents the scaling parameter, F represents the feature map, β represents the displacement parameter, and ⊙ denotes the Hadamard product.
The invention provides a real-world image super-resolution network for unknown blur kernels, suitable for magnifying and improving the resolution of real-world images with unknown blur kernels and noise. The invention has the following beneficial effects:
1. the proposed super-resolution network is better suited to real-world blurred images;
2. the network generates images free of blur and noise;
3. the generated images have better perceptual quality to the human eye.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical means of the present invention more clearly understood, the present invention may be implemented in accordance with the content of the description, and in order to make the above and other objects, features, and advantages of the present invention more clearly understood, the following detailed description is given in conjunction with the preferred embodiments, together with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings of the embodiments will be briefly described below.
FIG. 1 is a flow chart of the real world image super resolution network for unknown blur kernel according to the present invention.
FIG. 2 is a structure diagram of the spatial feature transform SFT layer of the present invention.
FIG. 3 is a graph comparing the super-resolution effect of the present invention with that of ESRGAN, EDSR and ZSSR.
Detailed Description
Other aspects, features and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which form a part of this specification, and which illustrate, by way of example, the principles of the invention. In the referenced drawings, the same or similar components in different drawings are denoted by the same reference numerals.
The invention designs a small convolutional neural network, named the predictor, to estimate the blur kernel k_0. The predictor contains four convolutional layers activated with Leaky ReLU and one global average pooling layer. The convolutional layers produce a spatial map of blur-kernel estimates, and the global average pooling layer then gives a global estimate by taking the spatial average. The prediction function is:
k_i = P(I_LR)
The network parameters are trained with known blur kernels so that the network output approaches the real-world blurred image. The optimization therefore minimizes the distance between the true blur kernel and the kernel produced by the network:

θ_P = arg min_θP || P(I_LR; θ_P) − k ||

where θ_P are the parameters of the predictor P and k represents the known blur kernel used for training. The noise extractor is designed in a manner similar to the blur-kernel extractor. Degradation information is extracted and stored for i images. The specific operation flow is as follows:
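A minimal sketch of the predictor described above, in PyTorch: four Leaky-ReLU convolutional layers followed by global average pooling, producing a b-dimensional kernel code. The channel width, b, and the L1 distance in the loss are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Predictor(nn.Module):
    """Sketch of the blur-kernel predictor P: four conv layers with Leaky
    ReLU, then global average pooling to a b-dim kernel code (widths assumed)."""
    def __init__(self, in_ch=3, width=64, b=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, b, 3, padding=1), nn.LeakyReLU(0.2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # global spatial average

    def forward(self, lr):
        k_map = self.body(lr)                 # per-pixel kernel-estimate map
        return self.pool(k_map).flatten(1)    # global estimate k_i = P(I_LR)

lr = torch.rand(2, 3, 32, 32)
k_hat = Predictor()(lr)
k_true = torch.rand(2, 10)                    # known training kernel code
loss = F.l1_loss(k_hat, k_true)               # distance minimized over theta_P
print(k_hat.shape)  # torch.Size([2, 10])
```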
First, the blur-kernel set K and the noise set N are initialized. The blur-kernel extractor produces a kernel k_i, which is added to K; the noise extractor produces noise information n_i, which is added to N. A k_i and an n_i are then drawn randomly from this degradation pool to degrade the high-resolution image I_HR into the low-resolution image I_LR.
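The pool flow above can be sketched as follows. The extractor networks are replaced by random stand-ins, so only the bookkeeping (build K and N, then sample a degradation pair) is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = [], []                        # blur-kernel set and noise set
for _ in range(5):                   # one entry per processed image
    k_i = rng.random((5, 5))
    k_i /= k_i.sum()                 # stand-in for the kernel extractor output
    n_i = rng.normal(0.0, 0.01, (8, 8))   # stand-in for the noise extractor output
    K.append(k_i)
    N.append(n_i)

# Randomly draw a (k, n) pair from the degradation pool; the pair is then
# applied to I_HR as in the degradation model I_LR = (I_HR * k)v_s + n.
k = K[rng.integers(len(K))]
n = N[rng.integers(len(N))]
```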
To improve the accuracy of the blur kernel, the invention designs a small convolutional neural network, named the corrector, to correct inaccurately predicted kernels. The input super-resolution result is first processed into a feature map F_sr by five convolutional layers activated with Leaky ReLU. Note that the super-resolution result may contain artifacts caused by kernel misestimation, and these are picked up by the five convolutional layers. Since k is a low-dimensional representation of the blur kernel whose dimensions are weakly correlated, its internal correlation, which reflects the kernel prediction error, is learned with two fully connected layers activated by Leaky ReLU. Using the stretching strategy proposed in SRMD, the predicted or corrected blur kernel is stretched into a feature map F_k: if the feature map F_sr has size C × H × W, the stretched kernel F_k has size b × H × W, and the i-th channel of F_k equals the i-th element of the kernel representation. After channel concatenation, F_k and F_sr form a feature map of size (b + C) × H × W. This concatenation is converted into a global vector representation in the same way as in the predictor: three convolutional layers with 1 × 1 kernels activated by Leaky ReLU give a spatial estimate of the kernel change, and global pooling yields the kernel update Δk. The parameters of the trained corrector C are obtained by minimizing the distance between the corrected blur kernel and the true value:

θ_C = arg min_θC || k_i + C(I_sr, k_i; θ_C) − k ||

where θ_C are the parameters of C and I_sr is the SR result obtained with the last corrected kernel. The corrector adjusts the estimated blur kernel according to the characteristics of the SR image, and the SR result produced with the adjusted kernel contains fewer artifacts.
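The corrector described above can be sketched as follows in PyTorch. The channel widths and b are illustrative assumptions; the structure follows the text: five conv layers for F_sr, two fully connected layers on k, stretching and concatenation, then three 1 × 1 convs and global pooling producing Δk:

```python
import torch
import torch.nn as nn

class Corrector(nn.Module):
    """Sketch of the corrector C: SR features F_sr, FC-processed kernel code
    stretched to b x H x W, concatenation, 1x1 convs, global pool -> delta-k."""
    def __init__(self, in_ch=3, c=32, b=10):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(5):                     # five Leaky-ReLU conv layers
            layers += [nn.Conv2d(ch, c, 3, padding=1), nn.LeakyReLU(0.2)]
            ch = c
        self.feat = nn.Sequential(*layers)
        self.kern = nn.Sequential(nn.Linear(b, b), nn.LeakyReLU(0.2),
                                  nn.Linear(b, b), nn.LeakyReLU(0.2))
        self.head = nn.Sequential(             # three 1x1 convs with Leaky ReLU
            nn.Conv2d(b + c, c, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(c, c, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(c, b, 1), nn.LeakyReLU(0.2),
        )

    def forward(self, sr, k):
        f_sr = self.feat(sr)                   # C x H x W features of the SR result
        h, w = f_sr.shape[-2:]
        f_k = self.kern(k)[:, :, None, None].expand(-1, -1, h, w)  # stretch to b x H x W
        x = torch.cat([f_k, f_sr], dim=1)      # (b + C) x H x W concatenation
        return self.head(x).mean(dim=(2, 3))   # global pooling -> kernel update delta-k

sr = torch.rand(2, 3, 16, 16)
k0 = torch.rand(2, 10)                         # predicted kernel code to be corrected
dk = Corrector()(sr, k0)
print(dk.shape)  # torch.Size([2, 10])
```

The corrected kernel would then be k0 + dk, fed back for the next SR pass.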
The network structure of the invention is shown in Fig. 1. The first part of the network is feature extraction, producing the initial feature map. The second part is the nonlinear mapping, whose basic block adopts Dense Blocks connected in an RRDB structure. The third part magnifies the feature map with sub-pixel convolution and generates an RGB three-channel image through convolution. Inside the basic block, a spatial feature transform (SFT) layer combines the output of each intermediate layer with the blur-kernel information. The kernel information applies an affine transformation to each intermediate feature map through the SFT layer and thereby influences the network output. Because the affine transformation does not participate in processing the input image, the original network's processing of the input is unaffected even though the kernel information contains no image content. In addition, since the SFT layer operates on the intermediate output of every layer, the residual dense block structure still functions as intended. Based on the kernel information, the SFT scales and shifts the feature map output by each intermediate layer; the mathematical expression of the operation is:
SFT(F,K)=γ⊙F+β
where γ and β are the scaling and shifting parameters, and ⊙ denotes the Hadamard product.
Specifically, suppose the first convolution yields a feature map of size C_1 × H_1 × W_1. The blur kernel k is then stretched by the stretching strategy to b_1 × H_1 × W_1 and concatenated with the feature map, giving a tensor of size (b_1 + C_1) × H_1 × W_1. Taking this channel-concatenated tensor as input, a small convolutional neural network produces the scaling and shifting parameters γ and β, the feature map undergoes the affine transformation, and the result is fed into the next convolutional layer. In the SFT layer after the next convolutional layer, the kernel is stretched to the size of the next feature map and the operation of the first layer is repeated. Throughout the network, SFT layers are used both after each convolutional layer of the basic Dense Block and after the global connection of the basic blocks.
The experiments were implemented with the PyTorch deep learning framework on an i9-8700k processor, 16 GB of memory, an NVIDIA GeForce GTX 1080Ti 8 GB graphics card, and a Windows operating system. Data sets: the DPED data set contains 5614 images taken with an iPhone3 camera; these are unprocessed real-world images exhibiting low-quality problems such as noise and blur. Blur-kernel and noise information is collected from this data set by the blur-kernel and noise extractors.
DIV2K contains 1000 high-definition images (2K resolution), which are degraded with the noise and blur information collected from the DPED data set to produce the LR-HR image pairs used to train the network: 800 images form the training set, 100 the validation set, and 100 the test set. The quantitative analysis results are obtained from experiments on this data set.
The Flickr2k data set contains 2650 high-resolution images and the corresponding bicubic downsampling results, used for the first training of the RRDB-SFT model. During this first training the blur-kernel extractor and the corrector use default values, so the super-resolution model's parameters are obtained without considering blur kernels and noise. The predictor and corrector are then trained alternately, with the RRDB-SFT parameters held fixed; following the training flow described above, the predictor's parameters are updated first, then the corrector's. Through experiments, an Adam optimizer with β1 = 0.9, β2 = 0.999 and a learning rate of 1 × 10^-4 was finally adopted.
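The stated optimizer configuration maps directly onto PyTorch's Adam; the single convolution here is only a stand-in for the RRDB-SFT model:

```python
import torch

# Adam with beta1 = 0.9, beta2 = 0.999 and learning rate 1e-4, as described.
model = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in for RRDB-SFT
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
```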
As a classic bicubic-downsampling data set, Flickr2k makes comparison tests against classic algorithms meaningful, and the training results demonstrate the robustness of the RRDB-SFT method in the traditional super-resolution setting. The representative models EDSR, ESRGAN, and ZSSR were selected as baselines. The 2650 HD images of Flickr2k were bicubically downsampled at scale 8; 2120 images were selected to train each method according to its original paper, 265 of the remaining images formed the test set, and the average PSNR, SSIM, and LPIPS were computed.
Quantitative analysis on Flickr2k dataset comparing EDSR, ESRGAN, ZSSR and RRDB-SFT
As the table above shows, under traditional bicubic downsampling the average PSNR and SSIM of the invention are comparable to those of the classical methods, indicating that the proposed method also suits the traditional super-resolution problem. Because methods such as EDSR contain no perceptual loss, the invention obtains the best LPIPS performance, and its generated images have the highest visual perceptual quality. The ESRGAN method does involve perceptual indices, but it uses a deeper VGG-128 network that emphasizes local detail texture while neglecting the global view, so its effect falls short of the VGG-19 adopted by the invention.
To quantify the experimental effect on real-world image data sets, blur-kernel and noise information was collected from the real-world images of the DPED data set; the blur kernels were used to degrade the high-resolution images in DIV2K, followed by downsampling at scale 8 and noise injection. Each method in the comparison was trained with the 800 training images, and the average PSNR, SSIM, and LPIPS between the super-resolution results of the 100 test images and the high-resolution images in DIV2K were computed.
Quantitative analysis on degenerate DIV2K dataset comparing EDSR, ESRGAN, ZSSR and RRDB-SFT
As the table above shows, by accounting for the influence of blur kernels and noise, the invention handles the real-world image super-resolution problem significantly better than traditional super-resolution methods. Since the degradation applies blurring and noise injection, the ratio of signal power to noise power changes markedly, and the PSNR of the proposed method improves accordingly. Real-world images are more complex, so these values are not comparable to test results on a simple bicubic-downsampled data set. The degradation changes image brightness, texture, and contrast little, so the improvement in SSIM is small. The lower LPIPS value demonstrates that the visual perceptual quality achieved by the invention remains significantly higher than that of traditional methods.
While the foregoing is directed to the preferred embodiment of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Claims (2)
1. A real-world image super-resolution method for an unknown blur kernel, characterized by comprising the following steps:
Step 1: denoise a real-world high-resolution image: remove noise through bicubic downsampling while preserving the important low-frequency information, and treat the resulting noise-free image as the high-resolution training target HR;
Step 2: design a blur-kernel extractor that obtains the blur-kernel information contained in an image by simulating real-world images, and store it in a blur-kernel set K;
Step 3: design a noise extractor that computes the covariance between the network input image and the super-resolution result, filters out the noise whose covariance exceeds a threshold, and stores it in a noise set N;
Step 4: randomly draw information from the blur-kernel set K and the noise set N, and degrade the high-resolution image to obtain the low-resolution image LR used to train the network;
Step 5: design a super-resolution network structure with a spatial feature transform layer embedded in the residual dense block structure; apply a spatial feature transform to the feature map according to the input blur-kernel information, then perform nonlinear mapping to obtain a high-resolution feature map, and, after globally connecting the outputs of the basic blocks, output a high-resolution RGB image through a three-channel convolution.
2. The real-world image super-resolution method for an unknown blur kernel of claim 1, wherein the spatial feature transform is: SFT(F, K) = γ ⊙ F + β
where γ represents the scaling parameter, F represents the feature map, β represents the displacement parameter, and ⊙ denotes the Hadamard product.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110381837.5A CN113240581A (en) | 2021-04-09 | 2021-04-09 | Real world image super-resolution method for unknown fuzzy kernel |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110381837.5A CN113240581A (en) | 2021-04-09 | 2021-04-09 | Real world image super-resolution method for unknown fuzzy kernel |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113240581A true CN113240581A (en) | 2021-08-10 |
Family
ID=77127873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110381837.5A Pending CN113240581A (en) | 2021-04-09 | 2021-04-09 | Real world image super-resolution method for unknown fuzzy kernel |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113240581A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115115516A (en) * | 2022-06-27 | 2022-09-27 | 天津大学 | Real-world video super-resolution algorithm based on Raw domain |
WO2023072072A1 (en) * | 2021-10-26 | 2023-05-04 | 北京字跳网络技术有限公司 | Blurred image generating method and apparatus, and network model training method and apparatus |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070669A (en) * | 2020-08-28 | 2020-12-11 | 西安科技大学 | Super-resolution image reconstruction method for any fuzzy kernel |
-
2021
- 2021-04-09 CN CN202110381837.5A patent/CN113240581A/en active Pending
Non-Patent Citations (3)
Title |
---|
JINGWEN CHEN等: "Image Blind Denoising With Generative Adversarial Network Based Noise Modeling", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》, pages 3 * |
JINJIN GU等: "Blind Super-Resolution With Iterative Kernel Correction", 《2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》, pages 3 * |
XIAOZHONG JI等: "Real-World Super-Resolution via Kernel Estimation and Noise Injection", 《CVPR》, pages 3 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111062872B (en) | Image super-resolution reconstruction method and system based on edge detection | |
CN106952228B (en) | Super-resolution reconstruction method of single image based on image non-local self-similarity | |
CN108961186B (en) | Old film repairing and reproducing method based on deep learning | |
CN111192200A (en) | Image super-resolution reconstruction method based on fusion attention mechanism residual error network | |
CN105631807B (en) | The single-frame image super-resolution reconstruction method chosen based on sparse domain | |
CN107123089A (en) | Remote sensing images super-resolution reconstruction method and system based on depth convolutional network | |
CN106204447A (en) | The super resolution ratio reconstruction method with convolutional neural networks is divided based on total variance | |
CN110136060B (en) | Image super-resolution reconstruction method based on shallow dense connection network | |
CN112669214B (en) | Fuzzy image super-resolution reconstruction method based on alternating direction multiplier algorithm | |
CN113808032A (en) | Multi-stage progressive image denoising algorithm | |
CN111951164B (en) | Image super-resolution reconstruction network structure and image reconstruction effect analysis method | |
CN112837224A (en) | Super-resolution image reconstruction method based on convolutional neural network | |
CN114170088A (en) | Relational reinforcement learning system and method based on graph structure data | |
CN109949217B (en) | Video super-resolution reconstruction method based on residual learning and implicit motion compensation | |
CN111861886B (en) | Image super-resolution reconstruction method based on multi-scale feedback network | |
CN103020898A (en) | Sequence iris image super-resolution reconstruction method | |
CN113240581A (en) | Real world image super-resolution method for unknown fuzzy kernel | |
CN110689509A (en) | Video super-resolution reconstruction method based on cyclic multi-column 3D convolutional network | |
CN112163998A (en) | Single-image super-resolution analysis method matched with natural degradation conditions | |
CN116468605A (en) | Video super-resolution reconstruction method based on time-space layered mask attention fusion | |
CN115578255A (en) | Super-resolution reconstruction method based on inter-frame sub-pixel block matching | |
CN116934592A (en) | Image stitching method, system, equipment and medium based on deep learning | |
Mikaeli et al. | Single-image super-resolution via patch-based and group-based local smoothness modeling | |
CN115578262A (en) | Polarization image super-resolution reconstruction method based on AFAN model | |
CN115526777A (en) | Blind over-separation network establishing method, blind over-separation method and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||