CN109272450B - Image super-resolution method based on convolutional neural network - Google Patents

Publication number: CN109272450B
Application number: CN201810959380.XA
Authority: CN (China)
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109272450A
Inventors: Bo Yan (颜波), Chenxi Ma (马晨曦), Bahetiyaer Bare (巴合提亚尔·巴热)
Assignee: Fudan University
Application filed by Fudan University
Filing/priority date: 2018-08-22
Publication date (grant): 2023-01-06

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image editing, and particularly relates to an image super-resolution method based on a convolutional neural network. The convolutional neural network comprises a feature extraction network, a feature learning network and an image reconstruction network. The method comprises the following steps: extracting image features through the feature extraction network; learning high-resolution image features through the feature learning network; and reconstructing the image through the image reconstruction network. The hierarchical feature-learning network structure provided by the invention makes full use, at each stage of the network, of the feature information learned at all the different levels; it also retains important features to a great extent, reduces feature loss, maximizes feature utilization, and avoids feature repetition and redundancy. Experimental results show that the method generates high-resolution images with better subjective visual quality, restores vivid detail and texture information, realizes an efficient image super-resolution process, and has high practical value.

Description

Image super-resolution method based on convolutional neural network
Technical Field
The invention belongs to the technical field of image editing, and particularly relates to an image super-resolution method.
Background
Image resolution is an important index for measuring image quality: the higher the resolution, the finer the details, the better the quality, and the richer the information content of the image. Images with higher resolution therefore have important application value and research prospects in various computer vision tasks, such as military security, satellite monitoring, traffic supervision and criminal investigation. However, due to cost constraints, image acquisition, storage and transmission in practice are inevitably limited by external conditions or subject to interference, resulting in varying degrees of quality degradation.
Image super-resolution is a branch of image quality enhancement research. It aims to recover the information of a high-resolution image from a low-resolution image, and is a modern image processing technology with high scientific research value and a wide field of application. Image super-resolution is not a simple enlargement of the image size: it produces a new image containing more valuable information. Since super-resolution improves image resolution by signal-processing methods at low cost, research on efficient, high-quality image super-resolution techniques is all the more important.
Image super-resolution can be achieved by interpolation-based algorithms, example-based methods, and neural-network-based methods. Early super-resolution methods were based on interpolation, such as bicubic interpolation and Lanczos resampling [1]. Since super-resolution is an ill-posed problem, the mapping from each pixel of a low-resolution image to the high-resolution image admits many solutions. Because methods of this type use only the information in the low-resolution image, it is difficult for them to reproduce the visual complexity of real images; for images with complex textures and smooth shading, interpolation is likely to produce unrealistic effects, and high-resolution images are not well reconstructed.
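As a concrete illustration of the interpolation-based baseline discussed above, the following sketch upscales a grayscale image by separable linear interpolation (bilinear is shown for brevity; bicubic follows the same pattern with a cubic kernel). This is an illustrative example only, not part of the invention:

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a 2-D grayscale image by `scale` using separable linear interpolation."""
    h, w = img.shape
    H, W = h * scale, w * scale
    # Map each output coordinate back to a (fractional) input coordinate.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    # Interpolate along rows first, then along columns.
    tmp = np.empty((h, W))
    for i in range(h):
        tmp[i] = np.interp(xs, np.arange(w), img[i])
    out = np.empty((H, W))
    for j in range(W):
        out[:, j] = np.interp(ys, np.arange(h), tmp[:, j])
    return out

lr = np.array([[0.0, 1.0], [1.0, 0.0]])
sr = bilinear_upscale(lr, 2)
print(sr.shape)  # (4, 4)
```

As the text notes, such an upscaler can only blend existing pixel values; it cannot recover the high-frequency content that a learned method tries to predict.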
Super-resolution therefore requires strong priors to constrain the solution space, and most of the better recent approaches adopt an example-based strategy to learn such prior knowledge. These methods establish correspondences between low-resolution and high-resolution patches: for each low-resolution patch, they find the patches most similar to it in the low-resolution image, compute the weight parameters that minimize the reconstruction cost, and finally generate the high-resolution patch from these low-resolution patches and weights, assembling the patches into a high-resolution image. The disadvantage of this approach is that high-frequency content in the image is lost; in addition, the presence of overlapping patches increases the amount of computation.
In recent years, with the application of convolutional neural networks (CNNs) in the field of computer vision, many CNN-based image super-resolution methods have emerged, of which SRCNN [2] and VDSR [3] are the most representative. By applying such methods to each frame of a video, image super-resolution can also be extended to the video super-resolution domain.
Dong et al. proposed a convolutional neural network-based image super-resolution method (SRCNN) in 2015, which reconstructs a high-resolution image by learning the mapping between low-resolution and high-resolution images. The mapping is represented as a CNN with the low-resolution image as input and the high-resolution image as output. The method exploits the strengths of neural networks by modeling the image super-resolution problem as a neural network structure and training it by optimizing an objective function, yielding a simple and effective model for enhancing image resolution.
Neural networks learn readily from large amounts of training data, and once the super-resolution model is trained, reconstructing a high-resolution image is a simple feed-forward pass, so the computational complexity is greatly reduced. Dong et al. later improved SRCNN and proposed FSRCNN [4], which refines the network structure to achieve faster super-resolution. In 2016, Kim et al. [3] achieved better image super-resolution by deepening the network structure, using residual learning to improve network efficiency and accelerate training. As convolutional neural networks continued to improve results in the super-resolution field, more researchers kept pushing both the subjective visual quality and the objective numerical metrics of super-resolution results by improving the network structure.
The invention concerns image super-resolution: based on the idea of deep learning, it constructs a super-resolution convolutional neural network with higher parameter efficiency. Starting from the existing low-resolution image, the constructed network structure exploits the correlation of local structure and detail information in the image to learn high-frequency details and texture content, and reconstructs a clear image with better visual quality.
Disclosure of Invention
In order to improve on the prior art and obtain a better super-resolution effect, the invention provides a method for improving the spatial resolution of an image, so as to enhance image quality and improve super-resolution efficiency.
The image super-resolution method provided by the invention is based on a convolutional neural network comprising: a feature extraction network F_FE, a feature learning network F_FL and an image reconstruction network F_IR. The specific steps are as follows:
(1) Image feature extraction
The low-resolution image I_LR is input into the convolutional neural network. First, the feature extraction network F_FE converts the input image from pixel space to feature space through a convolution operation, generating the features of the input image:

F_LR = F_FE(I_LR)
(2) Learning of image features
This step takes the extracted low-resolution image features F_LR as input and, through the multi-layer feature learning network F_FL, learns the detail features F_HR of the high-resolution image from the original image:

F_HR = F_FL(F_LR)
(3) Reconstruction of high resolution images
The features F_HR predicted in the previous step, which contain rich image detail information, pass through the image reconstruction network F_IR to restore the high-frequency detail content of the original image and reconstruct a higher-quality high-resolution image I_SR:

I_SR = F_IR(F_HR)
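The three-stage data flow I_LR → F_LR → F_HR → I_SR can be sketched as follows. This is a minimal numpy illustration in which each sub-network is reduced to a single random 3×3 convolution so the end-to-end flow can be traced; the channel width of 64 and the 24×24 input size are assumptions made for the sketch, not values specified by the invention:

```python
import numpy as np

def conv3x3(x, weights):
    """'Same'-padded 3x3 convolution: (C_in, H, W) -> (C_out, H, W)."""
    c_out, c_in, _, _ = weights.shape
    _, h, w = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(3):
                for dx in range(3):
                    out[o] += weights[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + w]
    return out

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the three sub-networks; each is one random conv layer.
F_FE = lambda img: conv3x3(img, rng.normal(size=(64, 1, 3, 3)))   # pixel -> feature space
F_FL = lambda f:   conv3x3(f,   rng.normal(size=(64, 64, 3, 3)))  # feature learning
F_IR = lambda f:   conv3x3(f,   rng.normal(size=(1, 64, 3, 3)))   # feature -> image

I_LR = rng.normal(size=(1, 24, 24))
F_LR = F_FE(I_LR)   # step (1): feature extraction
F_HR = F_FL(F_LR)   # step (2): feature learning
I_SR = F_IR(F_HR)   # step (3): image reconstruction
print(F_LR.shape, F_HR.shape, I_SR.shape)  # (64, 24, 24) (64, 24, 24) (1, 24, 24)
```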
The multilayer network structure adopted by the invention (comprising the feature extraction network F_FE, the feature learning network F_FL and the image reconstruction network F_IR) is composed of basic units U of identical structure. Each unit U comprises two convolution blocks and one connection node; the convolution blocks adopt the GEU unit proposed in [5], and the connection node is implemented by a convolution layer with a 3×3 kernel. The specific structure of each unit U is as follows:
First, let f_in denote the input of unit U. It is fed to the first GEU block in unit U (denoted GEU_1) to generate the feature f_1:

f_1 = GEU_1(f_in)

The generated feature f_1 is input to the next GEU block (denoted GEU_2) to obtain the feature f_2:

f_2 = GEU_2(f_1)

Finally, the outputs f_1 and f_2 of the two GEU blocks are input together into a convolution layer with a 3×3 kernel for feature fusion, generating the output feature f_out of the unit:

f_out = Conv(f_1, f_2).
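The processing flow of a unit U can be sketched as follows. The internals of the GEU block of reference [5] are not reproduced here: a random channel-mixing transform with ReLU stands in for it, and the 3×3 fusion convolution is reduced to 1×1 channel mixing over the concatenated features, so only the data flow f_in → f_1 → f_2 → f_out is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8  # assumed channel width for the sketch

def geu(x, w):
    """Placeholder for the GEU block of [5]: channel mixing followed by ReLU."""
    return np.maximum(np.tensordot(w, x, axes=([1], [0])), 0.0)

def unit_U(f_in):
    w1 = rng.normal(size=(C, C))
    w2 = rng.normal(size=(C, C))
    w_fuse = rng.normal(size=(C, 2 * C))       # connection node (3x3 conv in the
                                               # patent, 1x1 channel mixing here)
    f1 = geu(f_in, w1)                         # f_1 = GEU_1(f_in)
    f2 = geu(f1, w2)                           # f_2 = GEU_2(f_1)
    cat = np.concatenate([f1, f2], axis=0)     # (2C, H, W)
    return np.tensordot(w_fuse, cat, axes=([1], [0]))  # f_out = Conv(f_1, f_2)

f_in = rng.normal(size=(C, 16, 16))
f_out = unit_U(f_in)
print(f_out.shape)  # (8, 16, 16)
```

Note that the fusion takes both GEU outputs, so the unit output preserves the intermediate feature f_1 as well as the deeper feature f_2, matching the feature-retention goal stated above.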
In step (1) of the invention, the feature extraction network F_FE adopts a one-layer network structure, that is, it consists of a single basic unit U. Specifically:

The low-resolution image is input to the feature extraction network F_FE, and the feature F_LR of the low-resolution image is extracted through one unit U:

F_LR = U(I_LR).
In step (2) of the invention, the feature learning network F_FL is a multi-layer network structure. The specific feature-learning steps are as follows:

First, the F_LR output in step (1) is input to the feature learning network F_FL and passes through unit U_{1,1} to generate the feature f_{1,1}:

f_{1,1} = U_{1,1}(F_LR)

f_{1,1} is input to unit U_{2,1} of the second layer, generating the feature f_{2,1}:

f_{2,1} = U_{2,1}(f_{1,1})

The feature f_{2,1} output by the second-layer unit U_{2,1} is then input to the next unit U_{1,2} of the first layer, which outputs the feature f_{1,2}:

f_{1,2} = U_{1,2}(f_{2,1})

f_{1,2} is input to the next second-layer unit U_{2,2}, generating the feature f_{2,2}:

f_{2,2} = U_{2,2}(f_{1,2})

The feature f_{2,2} output by the second-layer unit U_{2,2} is then input to first-layer unit U_{1,3}, which outputs the feature f_{1,3}:

f_{1,3} = U_{1,3}(f_{2,2})

Finally, the output f_{1,3} of the last unit of the first layer and the output f_{2,2} of the last unit of the second layer are input together into unit U_{3,1} of the third layer to generate the feature f_{3,1}. Since the network structure of the invention adopts only a three-layer structure, the output f_{3,1} of the first unit of the third layer serves as the detail feature F_SR of the high-resolution image learned by the feature-learning part:

F_SR = f_{3,1}
Units U_{1,1}, U_{2,1}, U_{1,2}, U_{2,2}, U_{1,3} and U_{3,1} are all basic units U.
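The hierarchical wiring of these six units can be made explicit with the following sketch, in which each unit is replaced by a tagging function so that the data flow, including the fusion of f_{1,3} and f_{2,2} in U_{3,1}, can be traced without any numerical computation:

```python
# Each "unit" just records its name and its inputs, so the result is a
# nested trace of the hierarchical aggregation described in step (2).
def make_unit(name):
    return lambda *inputs: (name, inputs)

U11, U21, U12, U22, U13, U31 = map(
    make_unit, ["U11", "U21", "U12", "U22", "U13", "U31"])

def F_FL(F_LR):
    f11 = U11(F_LR)      # layer 1
    f21 = U21(f11)       # layer 2
    f12 = U12(f21)       # back to layer 1
    f22 = U22(f12)       # layer 2
    f13 = U13(f22)       # layer 1
    f31 = U31(f13, f22)  # layer 3 fuses the last outputs of layers 1 and 2
    return f31           # F_SR = f_{3,1}

F_SR = F_FL("F_LR")
print(F_SR[0])  # U31
```

The trace shows that U_{3,1} is the only unit receiving two inputs, which is exactly the hierarchical aggregation point of the three-layer structure.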
In step (3) of the invention, the image reconstruction network F_IR adopts a one-layer network structure, that is, it consists of a single basic unit U. Specifically:

First, taking the output F_SR of the previous step as input, the image resolution is enlarged through a basic unit U_1 and a deconvolution layer, generating the residual image I_res between the reconstructed image and the true high-resolution image:

I_res = Deconv(U_1(F_SR))

Finally, the residual image I_res is added to the low-resolution image upsampled by the Bicubic method, yielding the final high-resolution image I_SR output by the network:

I_SR = I_res + Bic(I_LR).
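The reconstruction step can be sketched as follows. Both the deconvolution layer and the Bicubic upsampling are approximated by nearest-neighbour enlargement, and the basic unit U_1 is reduced to a channel average, so the sketch only demonstrates the global residual skip I_SR = I_res + Bic(I_LR) and the 4× shape change, not the learned operators themselves:

```python
import numpy as np

def nearest_upscale(x, s):
    """Stand-in for both the deconvolution layer and Bicubic upsampling:
    nearest-neighbour enlargement by factor s (shape bookkeeping only)."""
    return np.kron(x, np.ones((s, s)))

def reconstruct(F_SR, I_LR, scale=4):
    # I_res = Deconv(U_1(F_SR)); U_1 reduced to a channel average here
    I_res = nearest_upscale(F_SR.mean(axis=0), scale)
    # I_SR = I_res + Bic(I_LR): residual added to the upsampled input
    return I_res + nearest_upscale(I_LR, scale)

rng = np.random.default_rng(0)
F_SR = rng.normal(size=(8, 12, 12))   # assumed feature shape for the sketch
I_LR = rng.normal(size=(12, 12))
I_SR = reconstruct(F_SR, I_LR)
print(I_SR.shape)  # (48, 48)
```

The residual formulation means the network only has to predict the high-frequency difference between the bicubic upsampling and the true image, which is the same design choice popularized by VDSR [3].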
The invention adopts a multilayer network structure to learn image features and connects the features learned at different layers in a hierarchical aggregation manner. It makes full use of the structural information of every layer of the network, recovers more accurate high-frequency image content, effectively reduces the information loss of image features during propagation through the network, mines deeper image features, and finally achieves a better reconstruction effect.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 shows the results of super-resolution reconstruction of low-resolution images using the method.
Detailed Description
For a low-resolution image, the method shown in FIG. 1 can be used to perform super-resolution processing. The specific steps are as follows:
(1) The low-resolution image is first input to the network and passes through a convolution layer, i.e. one unit U. For the detailed steps of each unit, see FIG. 1: the input of unit U is first passed to the first block GEU_1; the generated features are input into the next block GEU_2; finally, the outputs of the two GEU blocks are input simultaneously into a convolution layer with a 3×3 kernel to generate the output feature f_1 of the unit.
(2) f_1 is input to the first-layer unit U_{1,1}, generating the feature f_{1,1}; f_{1,1} is input to the second-layer unit U_{2,1}, generating the feature f_{2,1}; f_{2,1} is input to the first-layer unit U_{1,2}, which outputs the feature f_{1,2}; f_{1,2} is input to the second-layer unit U_{2,2}, generating the feature f_{2,2}; f_{2,2} is passed back to the first-layer unit U_{1,3}, which outputs the feature f_{1,3}; f_{1,3} and f_{2,2} are input to the third-layer unit U_{3,1}, generating the feature f_{3,1}.
(3) f_{3,1} is passed to a unit U to generate the feature f_2, and f_2 is passed through a deconvolution layer to generate the residual map I_res. Finally, the original low-resolution image is upsampled by bicubic interpolation to generate I_bic, and I_bic and the residual map I_res are added to generate the high-resolution image I_SR.
FIG. 2 shows an experimental example. Images (a) and (d) are the input low-resolution images, images (b) and (e) are the corresponding high-resolution images reconstructed at 4× by the method of the invention, and images (c) and (f) are the true high-resolution images. It can be seen that the method effectively recovers the texture and edge information of the original high-resolution image and brings a better visual effect.
References:
[1] C. E. Duchon. Lanczos filtering in one and two dimensions. Journal of Applied Meteorology, 18(8):1016–1022, 1979.
[2] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 38(2):295–307, 2015.
[3] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1646–1654, 2016.
[4] C. Dong, C. C. Loy, and X. Tang. Accelerating the super-resolution convolutional neural network. In European Conference on Computer Vision (ECCV), pages 391–407. Springer International Publishing, 2016.
[5] K. Li, B. Bare, B. Yan, B. Feng, and C. Yao. Face hallucination based on key parts enhancement. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.

Claims (4)

1. An image super-resolution method based on a convolutional neural network, the convolutional neural network comprising: a feature extraction network F_FE, a feature learning network F_FL and an image reconstruction network F_IR, characterized in that the specific steps are:
(1) Image feature extraction
The low-resolution image I_LR is input into the convolutional neural network. First, the feature extraction network F_FE converts the input image from pixel space to feature space via a convolution operation, generating the features of the input image:

F_LR = F_FE(I_LR)
(2) Learning of image features
Taking the extracted low-resolution image features F_LR as input, the detail features F_HR of the high-resolution image are learned from the original image via the feature learning network F_FL:

F_HR = F_FL(F_LR)
(3) Reconstruction of high resolution images
The features F_HR predicted in the previous step, containing rich image detail information, pass through the image reconstruction network F_IR to restore the high-frequency detail content of the original image and reconstruct a higher-quality high-resolution image I_SR:

I_SR = F_IR(F_HR);
Here, the feature extraction network F_FE, the feature learning network F_FL and the image reconstruction network F_IR consist of different numbers of basic units U of identical structure; each unit U comprises two convolution blocks and one connection node, the convolution blocks adopt the GEU unit, and the connection node is implemented by a convolution layer with a 3×3 kernel; the processing flow of each unit U is:
First, let f_in denote the input of unit U; it is input to the first GEU block in unit U (denoted GEU_1) to generate the feature f_1:

f_1 = GEU_1(f_in)

The generated feature f_1 is input to the next GEU block (denoted GEU_2) to obtain the feature f_2:

f_2 = GEU_2(f_1)

Finally, the outputs f_1 and f_2 of the two GEU blocks are input simultaneously into a convolution layer with a 3×3 kernel for feature fusion, generating the output feature f_out of the unit:

f_out = Conv(f_1, f_2).
2. The convolutional neural network-based image hyper-segmentation method as claimed in claim 1, wherein in step (1), the feature extraction network F FE A layer of network structure is adopted, namely the network structure consists of a basic unit U, and the specific processing flow is as follows:
first, a low-resolution image is input to the feature extraction network F FE Extracting the feature F of the low-resolution image through a unit U LR
F LR =U(I LR )。
3. The convolutional neural network-based image hyper-segmentation method as claimed in claim 1, wherein in the step (2), the feature learning network F FL The method is a multi-layer network structure, and the characteristic learning process comprises the following steps:
firstly, F output in step (1) LR Input feature learning network F FL In, pass through unit U 1,1 Generating the feature f 1,1
f 1,1 =U 1,1 (F LR )
Will f is mixed 1,1 Unit U input to the second layer 2,1 In (1), generating the feature f 2,1
f 2,1 =U 2,1 (f 1,1 )
Then, the second layer unit U is put into 2,1 Characteristic f of the output 2,1 Next unit U input to first layer 1,2 In, output characteristic f 1,2
f 1,2 =U 1,2 (f 2,1 )
Will f is mixed 1,2 Input to the next second layer unit U 2,2 In (1), generating the feature f 2,2
f 2,2 =U 2,2 (f 1,2 )
Then the unit U of the second layer is put 2,2 Output characteristic f of 2,2 Input to first layer unit U 1,3 In (1), output characteristic f 1,3
f 1,3 =U 1,3 (f 2,2 )
Finally, the output f of the last cell of the first layer is compared 1,3 And the output f of the last cell of the second layer 2,2 Units U input into the third layer together 3,1 To generate the feature f 3,1 ,f 3,1 Detail feature F of high-resolution image learned as a feature learning part SR
F SR =f 3,1
Wherein, the unit U 1,1 ,U 2,1 ,U 1,2 ,U 2,2 ,U 1,3 ,U 3,1 All are basic units U.
4. The convolutional neural network-based image hyper-segmentation method as claimed in claim 1, wherein in step (3), the image reconstruction network F IR A one-layer network structure is adopted, namely the network structure consists of a basic unit U, and the specific processing flow is as follows:
firstly, the output F of the previous step SR As input, the resolution of the image is enlarged through a basic unit U and a deconvolution layer to generate a residual image I between a reconstructed image and a real high-resolution image res The basic unit U is marked as U 1
I res =Deconv(U 1 (F SR ))
Finally, the low-resolution image and the residual image I after being up-sampled by using the Bicubic method are compared res Adding to obtain high resolution image I finally output by network SR
I SR =I res +Bic(I LR )。
Application CN201810959380.XA, filed 2018-08-22 (priority date 2018-08-22): Image super-resolution method based on convolutional neural network. Granted as CN109272450B; status: Active.

Publications (2)

CN109272450A, published 2019-01-25
CN109272450B, granted 2023-01-06
