CN113240583A - Image super-resolution method based on convolution kernel prediction - Google Patents


Info

Publication number: CN113240583A
Authority: CN (China)
Prior art keywords: convolution, pixel, output, activation module, image
Legal status: Granted
Application number: CN202110395719.XA
Other languages: Chinese (zh)
Other versions: CN113240583B
Inventors: 李奇 (Li Qi), 杨一帆 (Yang Yifan), 徐之海 (Xu Zhihai), 冯华君 (Feng Huajun), 陈跃庭 (Chen Yueting), 王静 (Wang Jing)
Current Assignee: Zhejiang University ZJU; Beijing Institute of Environmental Features
Original Assignee: Zhejiang University ZJU; Beijing Institute of Environmental Features
Application filed by Zhejiang University ZJU and Beijing Institute of Environmental Features
Priority application: CN202110395719.XA
Granted as: CN113240583B
Legal status: Active

Classifications

    • G06T 3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 3/00 Geometric image transformations in the plane of the image > G06T 3/40 Scaling of whole images or parts thereof)
    • G06N 3/045: Combinations of networks (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/04 Architecture)
    • G06N 3/08: Learning methods (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image super-resolution method based on convolution kernel prediction, comprising the following steps. Step 1: select a plurality of original images, process each original image to obtain a corresponding short-focus image and long-focus image, and take all original images together with their short-focus and long-focus images as a training set. Step 2: establish a convolution kernel prediction neural network. Step 3: input the training set into the convolution kernel prediction neural network for training to obtain a trained network. Step 4: using the long-focus lens and the short-focus lens of a bifocal camera sharing the same optical axis, shoot simultaneously to obtain the long-focus image and the short-focus image to be restored, and input them into the trained network to obtain the restored super-resolution image. Driven by the requirement for digital-image super-resolution, the method achieves high-magnification super-resolution through long-focus to short-focus mapping and convolution kernel prediction.

Description

Image super-resolution method based on convolution kernel prediction
Technical Field
The invention belongs to the technical field of digital imaging, and particularly relates to an image super-resolution method based on convolution kernel prediction.
Background
It is well known that high-resolution images provide more detail than their low-resolution counterparts. Such detail is crucial in many areas, such as remote sensing, medical diagnostics and intelligent monitoring. Because of the limitations of optical zoom, digital zoom has been widely adopted in many imaging devices. Digital zoom magnifies an image digitally without changing the focal length of the lens, which degrades image quality: the image-processing algorithms used in digital zoom systems (for example, image interpolation) introduce aliasing and blurring artifacts rather than producing high-quality pictures. To address this problem, many improved algorithms have been proposed over the past few decades, for example interpolation or super-resolution methods for increasing the spatial resolution of the input image. Interpolation-based restoration methods search for relations between neighbouring pixels and fill in missing pixels one by one using basis functions or interpolation kernels. Although such methods are fast and computationally cheap, their pixel-by-pixel operation does not guarantee estimation accuracy, especially in the presence of noise. Some documents propose fusing images of different focal lengths by optical-flow matching, but the output image quality is not high because of the expensive time cost and the fact that optical-flow matching cannot achieve complete registration.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides an image super-resolution method based on convolution kernel prediction. The method improves the imaging quality of zoomed images, is designed around images of different focal lengths, and provides a new way of using the long-focus image to map the short-focus image.
The technical scheme adopted by the invention is as follows:
the invention comprises the following steps:
Step 1: select a plurality of original images; down-sample each original image to obtain a corresponding short-focus image; crop a region of the same size as the short-focus image from the center of each original image to serve as the long-focus image; and take all original images together with their short-focus and long-focus images as a training set.
Step 2: establish a convolution kernel prediction neural network comprising a convolution kernel prediction part and an image reconstruction part; the convolution kernel prediction part is connected to the image reconstruction part, and the output of the image reconstruction part serves as the output of the convolution kernel prediction neural network.
Step 3: input the training set into the convolution kernel prediction neural network for training to obtain a trained convolution kernel prediction neural network.
Step 4: using the long-focus lens and the short-focus lens of a bifocal camera sharing the same optical axis, shoot simultaneously to obtain the long-focus image and the short-focus image to be restored; input them into the trained convolution kernel prediction neural network to obtain the restored super-resolution image.
The convolution kernel prediction part comprises fifteen convolution activation modules, a central region cropping module and a first sub-pixel convolution block. The input of the convolution kernel prediction part is fed to the first convolution activation module. The first convolution activation module is connected, through the second, third, fourth, fifth, sixth and seventh convolution activation modules in sequence, to the eighth convolution activation module. The output of the second convolution activation module is additionally passed through the ninth convolution activation module to the tenth convolution activation module, and the second convolution activation module is also connected to the eleventh convolution activation module. The outputs of the first and eleventh convolution activation modules are added pixel by pixel to give the first pixel-by-pixel addition feature. The first pixel-by-pixel addition feature is passed through the twelfth convolution activation module, and the result is added pixel by pixel to the first pixel-by-pixel addition feature to give the second pixel-by-pixel addition feature. The second pixel-by-pixel addition feature is passed through the thirteenth convolution activation module, and the result is added pixel by pixel to the first and second pixel-by-pixel addition features to give the third pixel-by-pixel addition feature. The outputs of the eighth and tenth convolution activation modules and the third pixel-by-pixel addition feature are concatenated and input to the fourteenth convolution activation module. The output of the thirteenth convolution activation module and the output of the first convolution activation module are added pixel by pixel and input to the fifteenth convolution activation module, whose output serves as the first output of the convolution kernel prediction part.
The input of the convolution kernel prediction part is also fed to the central region cropping module, which is connected to the first sub-pixel convolution block; the output of the first sub-pixel convolution block, convolved with the output of the fifteenth convolution activation module, serves as the second output of the convolution kernel prediction part.
The image reconstruction part comprises eight convolution activation modules, a second sub-pixel convolution block and a third sub-pixel convolution block. The first output of the convolution kernel prediction part serves as the first input of the image reconstruction part and is fed to the sixteenth convolution activation module. The sixteenth convolution activation module is connected, through the seventeenth and eighteenth convolution activation modules in sequence, to the nineteenth convolution activation module, which outputs the first output of the image reconstruction part. The first output of the image reconstruction part and the second output of the convolution kernel prediction part are added pixel by pixel and input to the second sub-pixel convolution block. The second sub-pixel convolution block is connected, through the twentieth, twenty-first and twenty-second convolution activation modules in sequence, to the twenty-third convolution activation module; the output of the twenty-third convolution activation module and the output of the second sub-pixel convolution block are added pixel by pixel to give the fourth pixel-by-pixel addition feature.
The second input of the image reconstruction part is fed to the third sub-pixel convolution block, and the output of the third sub-pixel convolution block, convolved with the fourth pixel-by-pixel addition feature, serves as the output of the image reconstruction part.
Step 3 specifically comprises the following steps:
3.1) calculating the error between the second output of the convolution kernel prediction part and the corresponding long-focus image as a first loss function;
3.2) calculating the error between the output of the image reconstruction part and the corresponding original image as a second loss function;
3.3) adding the first and second loss functions to obtain the total loss function of the network, and training the convolution kernel prediction neural network with the total loss function to obtain the trained network.
The resolution of the super-resolution image output by the convolution kernel prediction neural network is the same as that of the corresponding original image.
The convolution activation modules all have the same structure, each consisting of a convolution layer followed by an activation layer.
The invention has the following beneficial effects:
(1) Based on the requirement of continuous digital zoom, the method realizes continuous zooming of digital images at arbitrary magnification through convolution kernel prediction.
(2) The method is constructed around the mapping from the long-focus image to the short-focus image; by fitting this mapping it improves the super-resolution effect, and the visual quality of the resulting images is markedly better than that of the prior art.
Drawings
FIG. 1 is a schematic diagram of a convolutional kernel predictive neural network;
FIG. 2 is a schematic diagram of the structure of the convolution kernel prediction section;
FIG. 3 is a schematic structural diagram of an image reconstruction section;
FIG. 4 shows the short-focus image and the long-focus image of an embodiment, obtained by down-sampling and center-region cropping, respectively;
FIG. 5 compares the results of bicubic interpolation, VDSR, and the method of the invention for the embodiment.
Detailed Description
Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
As shown in fig. 1, the present invention comprises the steps of:
Step 1: select a plurality of original images; down-sample each original image to obtain a corresponding short-focus image; crop a region of the same size as the short-focus image from the center of each original image to serve as the long-focus image; and take all original images together with their short-focus and long-focus images as a training set. In a specific implementation, the down-sampling factor is four and the cropped region is one fourth the size of the original image.
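Step 1 can be sketched in NumPy as follows. This is a minimal illustration, not the patent's exact pipeline: the patent does not specify the down-sampling filter, so a simple box filter is assumed, and the function name `make_training_pair` is hypothetical.

```python
import numpy as np

def make_training_pair(original, scale=4):
    """Build a (short-focus, long-focus) pair from one original image:
    the short-focus image is the original down-sampled by `scale`, and the
    long-focus image is a center crop with the same pixel size."""
    h, w = original.shape[:2]
    assert h % scale == 0 and w % scale == 0
    # Box-filter down-sampling (assumed; the patent does not fix the filter).
    short_focus = original.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # Center crop of size (h/scale, w/scale).
    ch, cw = h // scale, w // scale
    top, left = (h - ch) // 2, (w - cw) // 2
    long_focus = original[top:top + ch, left:left + cw]
    return short_focus, long_focus

original = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
sf, lf = make_training_pair(original, scale=4)
print(sf.shape, lf.shape)  # (16, 16) (16, 16)
```

Both outputs have identical pixel dimensions, which is exactly what lets the network compare the long-focus crop against the down-sampled view during training.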
Step 2: establish a convolution kernel prediction neural network comprising a convolution kernel prediction part and an image reconstruction part. The first output of the convolution kernel prediction part is connected to the first input of the image reconstruction part, and the output of the image reconstruction part serves as the output of the convolution kernel prediction neural network. The short-focus images in the training set are input to the second inputs of the convolution kernel prediction part and of the image reconstruction part, respectively.
As shown in fig. 2, the convolution kernel prediction part includes fifteen convolution activation modules, a central region cropping module and a first sub-pixel convolution block. The input of the convolution kernel prediction part is fed to the first convolution activation module. The first convolution activation module is connected, through the second, third, fourth, fifth, sixth and seventh convolution activation modules in sequence, to the eighth convolution activation module. The output of the second convolution activation module is additionally passed through the ninth convolution activation module to the tenth convolution activation module, and the second convolution activation module is also connected to the eleventh convolution activation module. The outputs of the first and eleventh convolution activation modules are added pixel by pixel to give the first pixel-by-pixel addition feature. The first pixel-by-pixel addition feature is passed through the twelfth convolution activation module, and the result is added pixel by pixel to the first pixel-by-pixel addition feature to give the second pixel-by-pixel addition feature. The second pixel-by-pixel addition feature is passed through the thirteenth convolution activation module, and the result is added pixel by pixel to the first and second pixel-by-pixel addition features to give the third pixel-by-pixel addition feature. The outputs of the eighth and tenth convolution activation modules and the third pixel-by-pixel addition feature are cascaded and input to the fourteenth convolution activation module, where cascading means concatenating the three inputs along the channel dimension. The output of the thirteenth convolution activation module and the output of the first convolution activation module are added pixel by pixel and input to the fifteenth convolution activation module, whose output serves as the first output of the convolution kernel prediction part.
The input of the convolution kernel prediction part is also fed to the central region cropping module, which is connected to the first sub-pixel convolution block; the output of the first sub-pixel convolution block, convolved with the output of the fifteenth convolution activation module, serves as the second output of the convolution kernel prediction part.
As shown in fig. 3, the image reconstruction part includes eight convolution activation modules, a second sub-pixel convolution block and a third sub-pixel convolution block. The first output of the convolution kernel prediction part serves as the first input of the image reconstruction part and is fed to the sixteenth convolution activation module. The sixteenth convolution activation module is connected, through the seventeenth and eighteenth convolution activation modules in sequence, to the nineteenth convolution activation module, which outputs the first output of the image reconstruction part. The first output of the image reconstruction part and the second output of the convolution kernel prediction part are added pixel by pixel and input to the second sub-pixel convolution block. The second sub-pixel convolution block is connected, through the twentieth, twenty-first and twenty-second convolution activation modules in sequence, to the twenty-third convolution activation module; the output of the twenty-third convolution activation module and the output of the second sub-pixel convolution block are added pixel by pixel to give the fourth pixel-by-pixel addition feature. The convolution activation modules all have the same structure, each consisting of a convolution layer followed by an activation layer.
The short-focus images in the training set are input to the second input of the image reconstruction part; this second input is fed to the third sub-pixel convolution block, and the output of the third sub-pixel convolution block, convolved with the fourth pixel-by-pixel addition feature, serves as the output of the image reconstruction part.
The short-focus images in the training set are input to the convolution kernel prediction part for image feature extraction, and the first output of the convolution kernel prediction part is the mapping convolution kernel between the long-focus image and the short-focus image;
this mapping convolution kernel is input to the first input of the image reconstruction part for up-sampling and feature reconstruction, yielding the reconstructed mapping convolution kernel between the short-focus image and the real image;
the short-focus images in the training set are input to the second input of the image reconstruction part for sub-pixel convolution, yielding an up-sampled version of the short-focus image;
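The sub-pixel convolution blocks up-sample by rearranging channels into space (the pixel-shuffle operation). A minimal NumPy sketch of that rearrangement, assuming a single-image (C, H, W) layout; the function name is hypothetical:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: map a (C*r^2, H, W) feature map to
    (C, H*r, W*r), the up-sampling step inside a sub-pixel convolution
    block."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # reorder to (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.random.rand(16, 8, 8)  # one output channel, 4x up-sampling
y = pixel_shuffle(x, 4)
print(y.shape)  # (1, 32, 32)
```

Each group of r^2 channels supplies the r x r sub-pixel neighbourhood of one low-resolution pixel, so the spatial resolution grows by r in each dimension without interpolation.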
and (4) convolving the reconstructed convolution kernel with the up-sampling image of the short-focus image to obtain a super-resolution image output by the image reconstruction part.
Step 3: input the training set into the convolution kernel prediction neural network for training to obtain a trained convolution kernel prediction neural network.
the step 3 specifically comprises the following steps:
3.1) calculating the mean square error between the second output of the convolution kernel prediction part and the corresponding tele image as a first loss function;
3.2) calculating the mean square error between the output of the image reconstruction part and the corresponding original image as a second loss function;
and 3.3) adding the first loss function and the second loss function to obtain a total loss function of the network, and training the convolution kernel prediction neural network by using the total loss function to obtain the trained convolution kernel prediction neural network.
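Steps 3.1 to 3.3 can be sketched as follows; the function and argument names are hypothetical, and plain mean squared errors are summed as in the specific embodiment:

```python
import numpy as np

def total_loss(pred_tele, tele, pred_sr, original):
    """Total training loss of step 3: MSE between the kernel-prediction
    branch output and the long-focus image, plus MSE between the
    reconstruction output and the original image."""
    loss1 = np.mean((pred_tele - tele) ** 2)    # first loss function (3.1)
    loss2 = np.mean((pred_sr - original) ** 2)  # second loss function (3.2)
    return loss1 + loss2                        # total loss (3.3)

a = np.zeros((4, 4))
b = np.ones((4, 4))
print(total_loss(b, a, b, a))  # 2.0
```

Summing the two terms supervises both branches at once: the intermediate kernel-prediction output is pushed toward the long-focus image while the final output is pushed toward the full-resolution original.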
The resolution of the super-resolution image output by the convolution kernel prediction neural network is the same as that of the corresponding original image. The resolution of the super-resolved image is higher than the resolution of the corresponding short-focus image and long-focus image.
Step 4: using the long-focus lens and the short-focus lens of a bifocal camera sharing the same optical axis, shoot simultaneously to obtain the long-focus image and the short-focus image to be restored; input them into the trained convolution kernel prediction neural network to obtain the restored super-resolution image.
The ratio of the field of view of the image captured by the short-focus lens to that of the image captured by the long-focus lens equals the down-sampling factor of step 1.
The short-focus image undergoes image feature extraction in the convolution kernel prediction part; each convolution can be expressed as:

F_gi = Conv_s(I)

where Conv_s(I) denotes a convolution with stride s, F_gi denotes the feature extracted by the i-th convolution, and I denotes the input image or feature map.
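The formula F_gi = Conv_s(I) is an ordinary strided convolution. A naive single-channel NumPy sketch, using the cross-correlation convention common in deep-learning frameworks (the function name is illustrative):

```python
import numpy as np

def conv_s(image, kernel, s=1):
    """Valid 2-D convolution with stride s, i.e. F = Conv_s(I),
    written out explicitly for a single channel."""
    k = kernel.shape[0]
    h = (image.shape[0] - k) // s + 1
    w = (image.shape[1] - k) // s + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Multiply the k x k window at stride offsets by the kernel.
            out[i, j] = np.sum(image[i * s:i * s + k, j * s:j * s + k] * kernel)
    return out

I = np.ones((6, 6))
K = np.full((3, 3), 1.0 / 9.0)  # averaging kernel
F = conv_s(I, K, s=2)
print(F.shape)  # (2, 2)
```

With stride s = 1, as stated for all convolutions in the embodiment, the output keeps the same sampling grid as the valid region of the input.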
the specific embodiment of the invention is as follows:
the method comprises two parts of convolution kernel prediction and image reconstruction. In the convolution kernel prediction part and the image reconstruction part, all convolution kernels have a step size of 1 and a size of 3 × 64.
According to the resolution requirements of different zoom factors, an interpolation operation of the corresponding magnification can be applied to the input image; that is, the original image is down-sampled in step 1 to obtain the corresponding short-focus image, and a region of the same size as the short-focus image is cropped from the center of each original image to serve as the long-focus image, so that super-resolution images of different magnifications are obtained.
The invention uses the network structure shown in fig. 1 to perform 4x super-resolution on the long-focus and short-focus images shown in fig. 4, and compares the method with bicubic interpolation and the VDSR algorithm to illustrate its beneficial effects.
As shown in fig. 5, comparing the super-resolution image generated by the method of the present invention with bicubic interpolation and VDSR, it can be found that the texture of the image generated by the method of the present invention is richer and the details are more obvious.
Imaging quality is evaluated with the peak signal-to-noise ratio (PSNR), which reflects how close the evaluated image is to the reference image; the larger the value, the better the imaging quality. The results for bicubic interpolation, VDSR and the present invention are shown in Table 1. As can be seen from Table 1, for 4x super-resolution the imaging results of the invention are superior to those of bicubic interpolation and VDSR.
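PSNR can be computed as follows; an 8-bit peak value of 255 is assumed, since the patent does not state the bit depth:

```python
import numpy as np

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger values indicate that the
    evaluated image is closer to the reference image."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
img = np.full((4, 4), 255.0)
print(psnr(img, ref))  # 0.0
```

Because PSNR is a monotone function of the mean squared error, ranking methods by PSNR is equivalent to ranking them by MSE against the reference image.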
TABLE 1 bicubic interpolation, comparison of VDSR imaging results with imaging results of the invention

Claims (6)

1. An image super-resolution method based on convolution kernel prediction, characterized by comprising the following steps:
step 1: selecting a plurality of original images, down-sampling each original image to obtain a corresponding short-focus image, cropping a region of the same size as the short-focus image from the center of each original image to serve as the long-focus image, and taking all original images together with their short-focus and long-focus images as a training set;
step 2: establishing a convolution kernel prediction neural network comprising a convolution kernel prediction part and an image reconstruction part, wherein the convolution kernel prediction part is connected to the image reconstruction part and the output of the image reconstruction part serves as the output of the convolution kernel prediction neural network;
step 3: inputting the training set into the convolution kernel prediction neural network for training to obtain a trained convolution kernel prediction neural network;
step 4: using the long-focus lens and the short-focus lens of a bifocal camera sharing the same optical axis, shooting simultaneously to obtain a long-focus image and a short-focus image to be restored, and inputting them into the trained convolution kernel prediction neural network to obtain a restored super-resolution image.
2. The image super-resolution method based on convolution kernel prediction as claimed in claim 1, characterized in that: the convolution kernel prediction part comprises fifteen convolution activation modules, a central region cropping module and a first sub-pixel convolution block; the input of the convolution kernel prediction part is fed to the first convolution activation module; the first convolution activation module is connected, through the second, third, fourth, fifth, sixth and seventh convolution activation modules in sequence, to the eighth convolution activation module; the output of the second convolution activation module is additionally passed through the ninth convolution activation module to the tenth convolution activation module, and the second convolution activation module is also connected to the eleventh convolution activation module; the outputs of the first and eleventh convolution activation modules are added pixel by pixel to output a first pixel-by-pixel addition feature; the first pixel-by-pixel addition feature is passed through the twelfth convolution activation module and the result is added pixel by pixel to the first pixel-by-pixel addition feature to output a second pixel-by-pixel addition feature; the second pixel-by-pixel addition feature is passed through the thirteenth convolution activation module and the result is added pixel by pixel to the first and second pixel-by-pixel addition features to output a third pixel-by-pixel addition feature; the outputs of the eighth and tenth convolution activation modules and the third pixel-by-pixel addition feature are concatenated and input to the fourteenth convolution activation module; the output of the thirteenth convolution activation module and the output of the first convolution activation module are added pixel by pixel and input to the fifteenth convolution activation module, whose output serves as the first output of the convolution kernel prediction part;
the input of the convolution kernel prediction part is also fed to the central region cropping module, which is connected to the first sub-pixel convolution block; the output of the first sub-pixel convolution block, convolved with the output of the fifteenth convolution activation module, serves as the second output of the convolution kernel prediction part.
3. The image super-resolution method based on convolution kernel prediction as claimed in claim 1, characterized in that: the image reconstruction part comprises eight convolution activation modules, a second sub-pixel convolution block and a third sub-pixel convolution block; the first output of the convolution kernel prediction part serves as the first input of the image reconstruction part and is fed to the sixteenth convolution activation module; the sixteenth convolution activation module is connected, through the seventeenth and eighteenth convolution activation modules in sequence, to the nineteenth convolution activation module, which outputs the first output of the image reconstruction part; the first output of the image reconstruction part and the second output of the convolution kernel prediction part are added pixel by pixel and input to the second sub-pixel convolution block; the second sub-pixel convolution block is connected, through the twentieth, twenty-first and twenty-second convolution activation modules in sequence, to the twenty-third convolution activation module; the output of the twenty-third convolution activation module and the output of the second sub-pixel convolution block are added pixel by pixel to output a fourth pixel-by-pixel addition feature;
the second input of the image reconstruction part is fed to the third sub-pixel convolution block, and the output of the third sub-pixel convolution block, convolved with the fourth pixel-by-pixel addition feature, serves as the output of the image reconstruction part.
4. The image super-resolution method based on convolution kernel prediction according to claim 1, wherein step 3 specifically comprises:
3.1) calculating the error between the second output of the convolution kernel prediction part and the corresponding tele image as a first loss function;
3.2) calculating the error between the output of the image reconstruction part and the corresponding original image as a second loss function;
3.3) adding the first loss function and the second loss function to obtain the total loss function of the network, and training the convolution kernel prediction neural network with the total loss function to obtain the trained convolution kernel prediction neural network.
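Steps 3.1–3.3 compute one error per branch and sum them into a single training objective. The claim does not fix the error metric, so the sketch below assumes mean absolute (L1) error for both terms; the function and argument names are illustrative.

```python
import numpy as np

def total_loss(kernel_branch_out, tele_img, recon_out, orig_img):
    """Total loss per steps 3.1-3.3: first loss (kernel-prediction branch
    vs. tele image) plus second loss (reconstruction vs. original image).
    L1 error is assumed here; the patent does not name the metric."""
    loss1 = np.mean(np.abs(kernel_branch_out - tele_img))  # step 3.1
    loss2 = np.mean(np.abs(recon_out - orig_img))          # step 3.2
    return loss1 + loss2                                   # step 3.3
```

Summing the two terms trains both branches jointly: the kernel-prediction branch is supervised directly by the tele image while the reconstruction branch is supervised by the original image.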
5. The image super-resolution method based on convolution kernel prediction as claimed in claim 1, wherein the resolution of the super-resolution image output by the convolution kernel prediction neural network is the same as the resolution of the corresponding original image.
6. The image super-resolution method based on convolution kernel prediction according to claim 2 or 3, characterized in that the convolution activation modules all have the same structure, each mainly composed of a convolution layer and an activation layer connected in sequence.
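Claim 6 defines each convolution activation module as a convolution layer followed by an activation layer. A minimal single-channel sketch of that pairing is shown below; the "valid" convolution and ReLU activation are assumptions for illustration, since the claim specifies neither padding nor the activation function.

```python
import numpy as np

def conv_activation(x, kernel):
    """One convolution activation module: a 'valid' 2-D convolution
    (single channel, no padding, for brevity) followed by an
    activation layer (ReLU assumed)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # convolution layer: sliding dot product with the kernel
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)   # activation layer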
CN202110395719.XA 2021-04-13 2021-04-13 Image super-resolution method based on convolution kernel prediction Active CN113240583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110395719.XA CN113240583B (en) 2021-04-13 2021-04-13 Image super-resolution method based on convolution kernel prediction

Publications (2)

Publication Number Publication Date
CN113240583A true CN113240583A (en) 2021-08-10
CN113240583B CN113240583B (en) 2022-09-16

Family

ID=77128062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110395719.XA Active CN113240583B (en) 2021-04-13 2021-04-13 Image super-resolution method based on convolution kernel prediction

Country Status (1)

Country Link
CN (1) CN113240583B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976318A (en) * 2016-04-28 2016-09-28 北京工业大学 Image super-resolution reconstruction method
US20190012768A1 (en) * 2015-12-14 2019-01-10 Motion Metrics International Corp. Method and apparatus for identifying fragmented material portions within an image
WO2019192316A1 (en) * 2018-04-02 2019-10-10 Tencent Technology (Shenzhen) Co., Ltd. Image related processing method and apparatus, device and storage medium
CN110378850A (en) * 2019-07-09 2019-10-25 浙江大学 Zoom image generation method combining block matching and neural network
CN111654621A (en) * 2020-05-26 2020-09-11 浙江大学 Dual-focus camera continuous digital zooming method based on convolutional neural network model
CN111652804A (en) * 2020-05-28 2020-09-11 西安电子科技大学 Super-resolution reconstruction method based on expansion convolution pyramid and bottleneck network
CN111932452A (en) * 2020-07-07 2020-11-13 浙江大学 Infrared image convolution neural network super-resolution method based on visible image enhancement
CN112102167A (en) * 2020-08-31 2020-12-18 西安工程大学 Image super-resolution method based on visual perception
CN112508956A (en) * 2020-11-05 2021-03-16 浙江科技学院 Road scene semantic segmentation method based on convolutional neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DEFU QIU et al.: "An Image Super-resolution Reconstruction Method by Using of Deep Learning", 2019 IEEE 4th International Conference on Image, Vision and Computing *
J. LYN: "Image super-resolution reconstruction based on attention mechanism and feature fusion", Computer Science *
SHUAI LEI et al.: "Content-aware Upsampling for Single Image Super-resolution", 2020 Asia-Pacific Conference on Image Processing, Electronics and Computers *
HE, Guiran et al.: "Continuous digital zoom of dual-focus camera based on CNN feature extraction", Journal of Zhejiang University (Engineering Science) *
JIN, Zheyan et al.: "Research on zoom algorithm of dual-resolution camera based on focus sharpness", Infrared and Laser Engineering *

Also Published As

Publication number Publication date
CN113240583B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN109102462B (en) Video super-resolution reconstruction method based on deep learning
Cai et al. Toward real-world single image super-resolution: A new benchmark and a new model
CN110033410B (en) Image reconstruction model training method, image super-resolution reconstruction method and device
JP6957197B2 (en) Image processing device and image processing method
TWI399975B (en) Fusing of images captured by a multi-aperture imaging system
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
CN111598778A (en) Insulator image super-resolution reconstruction method
CN111447359B (en) Digital zoom method, system, electronic device, medium, and digital imaging device
CN116071243B (en) Infrared image super-resolution reconstruction method based on edge enhancement
CN110378850B (en) Zoom image generation method combining block matching and neural network
Georgis et al. Reduced complexity superresolution for low-bitrate video compression
CN111402139A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
KR20190009588A (en) System and method for generating continuous digital zooming image using super-resolution
CN115841420A (en) Polarization image super-resolution reconstruction method based on deep learning
Deshpande et al. Survey of super resolution techniques
CN108401104B (en) Dual-focus camera digital zooming method based on frequency band repair and super-resolution
Chang et al. Beyond camera motion blur removing: How to handle outliers in deblurring
EP2948920A1 (en) Method and apparatus for performing single-image super-resolution
Zhang et al. Toward real-world panoramic image enhancement
CN113240583B (en) Image super-resolution method based on convolution kernel prediction
Liu et al. A densely connected face super-resolution network based on attention mechanism
Suda et al. Deep snapshot hdr imaging using multi-exposure color filter array
CN106709873B (en) Super-resolution method based on cubic spline interpolation and iterative updating
Saito et al. Super-resolution interpolation with a quasi blur-hypothesis
Wei et al. RSAN: Residual subtraction and attention network for single image super-resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant