CN112200720B - Super-resolution image reconstruction method and system based on filter fusion - Google Patents


Info

Publication number
CN112200720B
Authority
CN
China
Legal status
Active
Application number
CN202011070223.7A
Other languages
Chinese (zh)
Other versions
CN112200720A (en)
Inventor
冷聪
李成华
于浩东
周波
程健
Current Assignee
Zhongke Nanjing Artificial Intelligence Innovation Research Institute
Zhongke Fangcun Zhiwei Nanjing Technology Co ltd
Original Assignee
Zhongke Nanjing Artificial Intelligence Innovation Research Institute
Zhongke Fangcun Zhiwei Nanjing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Nanjing Artificial Intelligence Innovation Research Institute, Zhongke Fangcun Zhiwei Nanjing Technology Co ltd filed Critical Zhongke Nanjing Artificial Intelligence Innovation Research Institute
Priority to CN202011070223.7A
Publication of CN112200720A
Application granted
Publication of CN112200720B


Classifications

    • G06T3/4053: Super resolution, i.e. output image resolution higher than sensor resolution (G06T3/40 Scaling the whole image or part thereof; G06T Image data processing or generation)
    • G06N3/045: Combinations of networks (G06N3/04 Architecture; G06N3/02 Neural networks)
    • G06N3/08: Learning methods (G06N3/02 Neural networks)
    • Y02T10/40: Engine management systems (Y02T Climate change mitigation technologies related to transportation)

Abstract

The invention provides a super-resolution image reconstruction method and system based on filter fusion, comprising the following steps: first, a high-resolution image is downsampled to obtain low-resolution images, and the low-resolution images are overlap-sampled to obtain overlapping low-resolution image blocks; second, the same overlapping sampling is applied to the corresponding high-resolution images to obtain overlapping high-resolution label images; third, the same paired overlapping sampling is applied to other high-resolution images, and the result is used as the test set for model training; next, the constructed training set is fed into the constructed training network, which learns the mapping from low-resolution to high-resolution images; finally, the filters in the learned model are fused to obtain a new deployment-stage model, so that a low-resolution image can be reconstructed into a high-resolution image using the resulting model.

Description

Super-resolution image reconstruction method and system based on filter fusion
Technical Field
The invention relates to a super-resolution image reconstruction method and system based on filter fusion, belongs to general image data processing and machine-learning image reconstruction technology, and in particular to the field of super-resolution image reconstruction, processing and analysis.
Background
A high-resolution image has higher pixel density, clearer image quality and richer detail. Restoring a low-resolution image to a high-resolution image rich in detail both improves the viewer's visual experience and facilitates subsequent image processing tasks.
In the prior art, constructing a convolutional network faces the problems of large parameter counts, high memory consumption and slow training and testing. With the spread of intelligent edge devices such as smartphones and wearables, there is strong demand for efficient super-resolution algorithms. Super-resolution algorithms built on complex neural networks achieve excellent reconstruction performance but place extreme demands on computing resources: if such a network is deployed directly on a low-compute device such as a mobile phone, it cannot be guaranteed to run at all, and once inference starts it is slow and resource-hungry.
Disclosure of Invention
The invention aims to: one objective is to propose a super-resolution image reconstruction method based on filter fusion that solves the above problems in the prior art. A further objective is to propose a system implementing the above method.
The technical scheme is as follows: a super-resolution image reconstruction method based on filter fusion comprises the following steps:
step 1, constructing a training sample set for the later filter fusion;
step 2, generating a training convolutional network;
step 3, fusing the preset filters into a single filter to obtain a new deployment-stage model, where the fused filter retains all the feature-extraction capability learned in the training stage;
step 4, performing super-resolution reconstruction on the low-resolution image to be reconstructed using the new deployment-stage model to obtain a reconstructed high-resolution image.
In a further embodiment, the step 1 is further: a low-resolution image set for training and a high-resolution image label set for comparison with the reconstruction results are constructed.
The low-resolution training set is constructed as follows: first, the existing high-resolution images are downsampled by a factor of N, i.e. bicubic interpolation with a factor of N is performed, to obtain a set of low-resolution images, where N is a natural number. The low-resolution images are then augmented by rotations of 90, 180 and 270 degrees to obtain low-resolution images at different angles, and each of these images is overlap-sampled into N×N-sized low-resolution image blocks to generate the final training set; this addresses the practical problem that the number of low-resolution samples is otherwise insufficient. N is preferably 4.
The high-resolution image label set is constructed as follows: the same overlapping sampling is applied to the high-resolution images that were downsampled, yielding N×N-sized high-resolution image blocks. In supervised machine learning, the image blocks sampled from the high-resolution images serve as the reference set against which the results of low-resolution image reconstruction are compared; in training they are therefore referred to as label images.
In a further embodiment, the step 2 is further: the image super-resolution reconstruction problem is solved by designing an extremely lightweight convolutional network based on filter fusion. A low-resolution image LR is taken as input; shallow features are extracted by a convolutional layer; deep features of the image are learned by stacked CACB modules; finally, the extracted shallow and deep features are fused and upsampled by sub-pixel convolution to obtain the high-resolution image. A CACB module consists of four fusion convolutional layers, and one quarter of the features of each fusion convolutional layer is retained for the final feature fusion; the structure of the fusion convolutional layers differs between the training stage and the deployment stage.
In a further embodiment, the step 3 is further: in the training stage, the lightweight design reduces the number of backbone modules as far as possible, and the feature-extraction capability of a single k×k square filter is limited, so a multi-branch asymmetric filter is designed for stronger feature learning. The left side of the multi-branch structure is an example of the training-stage fusion convolutional layer, which is split into three asymmetric filters of different sizes, of the forms k×k, 1×k and k×1; the parameters of these filters are obtained after training, where k is a positive integer.
The deployment stage carries out the fusion of the filters for deployment: the three training-stage filters are fused by weighted summation into a single k×k filter. The deployment-stage fusion convolutional layer is this fused filter, which retains all the feature-extraction capability learned by the three training-stage filters.
A super-resolution image reconstruction system based on filter fusion comprises:
A first module for constructing the neural-network training sample set. This module downsamples the high-resolution images by a factor of N to obtain low-resolution images, augments the resulting low-resolution images by rotations of 90, 180 and 270 degrees to obtain low-resolution images at different angles, and then overlap-samples each low-resolution image to obtain a set of overlapping low-resolution image blocks used as the low-resolution training set. The same overlapping sampling is applied to the corresponding high-resolution images, and the resulting image set serves as the labels against which the results of low-resolution image reconstruction are compared in supervised machine learning.
A second module for establishing the training convolutional network. This module establishes a lightweight convolutional network that learns the mapping from low-resolution to high-resolution images: a low-resolution image LR is taken as input, shallow features are extracted by a convolutional layer, deep features of the image are learned by stacked CACB modules, the extracted shallow and deep features are fused, and the result is upsampled by sub-pixel convolution to obtain the high-resolution image. A CACB module consists of four fusion convolutional layers, whose structure differs between the training stage and the deployment stage; one quarter of the features of each fusion convolutional layer is retained for the final feature fusion.
A third module for fusing the filters into the deployment-stage model. In the training stage, the lightweight design reduces the number of backbone modules as far as possible, and the feature-extraction capability of a single k×k square filter is limited, so a multi-branch asymmetric filter is designed for stronger feature learning. The left side of the multi-branch structure is an example of the training-stage fusion convolutional layer, which is split into three asymmetric filters of different sizes, of the forms k×k, 1×k and k×1; the parameters of these filters are obtained after training, where k is a positive integer.
The deployment stage carries out the fusion of the filters for deployment: the three training-stage filters are fused by weighted summation into a single k×k filter. The deployment-stage fusion convolutional layer is this fused filter, which retains all the feature-extraction capability learned by the three training-stage filters.
A fourth module for reconstructing the low-resolution image. This module feeds the low-resolution image to be reconstructed into the trained lightweight convolutional network and reconstructs the high-resolution image using the new deployment-stage model obtained by filter fusion.
The beneficial effects are that: the invention provides a super-resolution image reconstruction method and system based on filter fusion that constructs a training convolutional network and, through supervised machine learning, trains a filter-fusion deployment model to reconstruct low-resolution images into high-resolution images.
Drawings
Fig. 1 is a schematic flow chart of super-resolution image reconstruction according to the present invention.
Fig. 2 is a diagram of the network architecture of the present invention.
Fig. 3 is a detailed view of the CACB module in the network of the present invention.
Fig. 4 is a detailed view of the training phase to deployment phase of the present invention.
Fig. 5 is a network structure diagram of a conventional IMDN algorithm.
Fig. 6 is a detailed view of a conventional IMDB module.
Detailed Description
The applicant believes that, for methods that reconstruct a low-resolution image into a high-resolution image, existing efficient super-resolution techniques achieve good reconstruction performance but stack too many modules; this burdens devices such as mobile phones and leads to slow runtime, heavy computation and excessive parameter counts.
In order to solve the problems in the prior art and deploy a super-resolution algorithm into equipment such as a mobile phone and the like, the invention provides a super-resolution image reconstruction method based on filter fusion and a system for realizing the method.
The present invention will be described in more detail with reference to the following examples and the accompanying drawings.
In this application, we propose a super-resolution image reconstruction method based on filter fusion. As shown in fig. 1, a low-resolution training set and a high-resolution label set are established, a convolutional network is trained to learn the mapping from low-resolution to high-resolution images, and the reconstruction of a low-resolution image into a high-resolution image is thereby realized. The method comprises the following steps:
Step 1, constructing the training sample set used in the later fusion of multiple filters. The training sample set comprises a low-resolution image training set and a high-resolution image set; both are obtained by overlap-sampling the images into image blocks at the corresponding resolutions.
The training sample set is divided into a low-resolution training set and high-resolution image blocks. The low-resolution training set is obtained as follows: first, the high-resolution images are downsampled by a factor of N to obtain low-resolution images; the resulting low-resolution images are then augmented; finally, each low-resolution image is overlap-sampled to obtain a set of overlapping low-resolution image blocks, which are used as the low-resolution training set.
For example, given a high-resolution image of 2K size, 4-times bicubic-interpolation downsampling yields the corresponding low-resolution image; rotations of 90, 180 and 270 degrees then yield four images at different orientations, which are overlap-sampled into a set of 64x64 low-resolution image blocks forming the training samples of the invention.
The high-resolution image blocks are obtained as follows: the high-resolution images corresponding to the 4-times downsampling operation are overlap-sampled, and the resulting set of overlapping high-resolution image blocks is used as the high-resolution label images.
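The rotation augmentation and overlapping sampling described above can be sketched as follows. This is an illustrative sketch, not the patent's actual code: the function name, the toy 12x12 image, the 8x8 patch size and the stride value are assumptions (the embodiment specifies 64x64 blocks but does not state the sampling stride).

```python
import numpy as np

def augment_and_sample(lr_img, patch, stride):
    """Hypothetical sketch of the training-set construction: rotate a
    low-resolution image by 0/90/180/270 degrees, then overlap-sample
    each rotation into patch x patch blocks."""
    blocks = []
    for k in range(4):                      # 0, 90, 180, 270 degree rotations
        rot = np.rot90(lr_img, k)
        h, w = rot.shape[:2]
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                blocks.append(rot[y:y + patch, x:x + patch])
    return blocks

# Toy usage: a 12x12 "low-resolution image", 8x8 patches, stride 4;
# because stride < patch size, neighbouring blocks overlap.
lr = np.arange(144, dtype=np.float32).reshape(12, 12)
patches = augment_and_sample(lr, patch=8, stride=4)
```

With stride 4 and patch 8, adjacent patches share half their columns, which is the "overlapping sampling" the text refers to.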
100 images from DIV2K are taken as test images and downsampled by a factor of 4 to obtain a set of low-resolution images; the corresponding high-resolution images serve as labels, forming the validation set for the machine-learning stage.
Step 2, generating the training convolutional network, from which multiple filter parameters are obtained after training; the training convolutional network learns the mapping from low-resolution images to high-resolution images.
The training convolutional network structure is shown in fig. 2 and is constructed as follows: a low-resolution image LR is taken as input; shallow features are extracted by a convolutional layer; deep features of the image are then learned by stacked CACB modules; finally, the extracted shallow and deep features are fused and upsampled by sub-pixel convolution to obtain the high-resolution image.
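The sub-pixel convolution mentioned above ends with a depth-to-space rearrangement (often called pixel shuffle). A minimal sketch, assuming the channel ordering commonly used by standard deep-learning libraries; the patent does not spell out its ordering:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space rearrangement used by sub-pixel convolution:
    (C*r*r, H, W) feature maps -> (C, H*r, W*r) image."""
    C2, H, W = x.shape
    C = C2 // (r * r)
    x = x.reshape(C, r, r, H, W)      # split channels into an r x r sub-pixel grid
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)

# 4 channels with r=2 -> one 4x4 output assembled from four 2x2 maps
feat = np.arange(16, dtype=np.float32).reshape(4, 2, 2)
up = pixel_shuffle(feat, 2)
```

Each output 2x2 cell takes one pixel from each input channel, which is why a convolution producing C*r*r channels suffices to upscale by r without a transposed convolution.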
The structure of the CACB module is shown in fig. 3: it consists of four fusion convolutional layers, and one quarter of the features of each fusion convolutional layer is retained for the final feature fusion; the structure of the fusion convolutional layers differs between the training stage and the deployment stage.
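One possible reading of the CACB feature-retention scheme is sketched below. This is a hypothetical sketch: the patent text does not give the exact wiring, and the conv layers here are stand-in callables rather than real fusion convolutions.

```python
import numpy as np

def cacb_forward(x, convs):
    """Sketch of the CACB retention scheme: run four (fusion) conv
    layers in sequence, set aside one quarter of each layer's output
    channels, and concatenate the four retained quarters as the
    module's fused features. 'convs' are stand-in callables mapping
    (C, H, W) -> (C, H, W)."""
    retained = []
    for conv in convs:                         # four fusion convolution layers
        x = conv(x)
        retained.append(x[: x.shape[0] // 4])  # keep one quarter of the channels
    return np.concatenate(retained, axis=0)

# Toy usage with identity "convolutions": 8 channels in, four retained
# quarters of 2 channels each concatenated at the end.
feats = np.ones((8, 4, 4), dtype=np.float32)
fused = cacb_forward(feats, [lambda t: t] * 4)
```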
The data are fed into the convolutional network for training. In the verification of this example, each image block is 64x64, each batch contains 64 images, momentum is set to 0.9 and weight decay to 0.0001, and the initial learning rate is set to 0.0001. The maximum number of iterations in this example is 400000; optimization uses gradient descent, and iteration stops when the maximum is reached.
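The hyper-parameters quoted above correspond to a standard SGD-with-momentum update. The patent does not state the exact optimizer formulation, so the following is the common form, shown only for illustration:

```python
import numpy as np

def sgd_step(param, grad, velocity, lr=1e-4, momentum=0.9, weight_decay=1e-4):
    """One SGD-with-momentum update using the hyper-parameters quoted
    in the text (lr 0.0001, momentum 0.9, weight decay 0.0001); the
    exact rule of the original training code is an assumption."""
    g = grad + weight_decay * param          # L2 weight decay folded into the gradient
    velocity[:] = momentum * velocity + g    # momentum accumulation
    param[:] = param - lr * velocity
    return param, velocity

# Toy usage: one update on a 3-element parameter vector.
w = np.ones(3)
v = np.zeros(3)
g = np.ones(3)
w, v = sgd_step(w, g, v)
```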
Step 3, fusing the multiple filters into a single filter to obtain the new deployment-stage model; the fused filter retains all the feature-extraction capability learned in the training stage.
In the training stage, the lightweight design reduces the number of backbone modules as far as possible, and the feature-extraction capability of a single k×k square filter is limited, so a multi-branch asymmetric filter is designed for stronger feature learning. As shown in fig. 4, the left side of the multi-branch structure is an example of the training-stage fusion convolutional layer, which is split into three asymmetric filters of different sizes, of the forms k×k, 1×k and k×1; the parameters of these filters are obtained after training, where k is a positive integer.
The deployment stage carries out the fusion of the filters for deployment: the three training-stage filters are fused by weighted summation into a single k×k filter. The deployment-stage fusion convolutional layer is this fused filter, which retains all the feature-extraction capability learned by the three training-stage filters.
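The weighted fusion above relies on the linearity of convolution: each asymmetric kernel can be zero-padded into the centre row or column of a k×k kernel, and the weighted kernels then summed into one filter. A minimal numerical sketch follows; the function names and the equal-weight choice are assumptions, and implementations that attach per-branch normalization would also need to fold those parameters in, which is omitted here.

```python
import numpy as np

def conv2d_valid(img, ker):
    """Plain 'valid' 2D correlation, just enough to illustrate fusion."""
    kh, kw = ker.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * ker)
    return out

def fuse_filters(k_sq, k_row, k_col, w=(1.0, 1.0, 1.0)):
    """Embed the 1xk and kx1 kernels into the centre row/column of the
    kxk kernel and take the weighted sum: one fused kxk filter."""
    k = k_sq.shape[0]
    fused = w[0] * k_sq.astype(float).copy()
    fused[k // 2, :] += w[1] * k_row.reshape(-1)   # 1xk branch -> centre row
    fused[:, k // 2] += w[2] * k_col.reshape(-1)   # kx1 branch -> centre column
    return fused

# Toy check that the fused filter reproduces the three-branch sum (k = 3).
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
k_sq = rng.standard_normal((3, 3))
k_row = rng.standard_normal((1, 3))   # 1xk branch
k_col = rng.standard_normal((3, 1))   # kx1 branch

# Each asymmetric kernel zero-padded to 3x3 so all branches produce
# equally sized outputs (as 'same' padding would in a real network).
pad_row = np.zeros((3, 3)); pad_row[1, :] = k_row.reshape(-1)
pad_col = np.zeros((3, 3)); pad_col[:, 1] = k_col.reshape(-1)
branch_sum = (conv2d_valid(img, k_sq) + conv2d_valid(img, pad_row)
              + conv2d_valid(img, pad_col))
fused = fuse_filters(k_sq, k_row, k_col)
```

Because convolution is linear in the kernel, `conv2d_valid(img, fused)` equals the sum of the three branch outputs, so the deployment model pays the cost of only one k×k convolution.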
Step 4, performing super-resolution reconstruction on the low-resolution image to be reconstructed using the new deployment-stage model to obtain the reconstructed high-resolution image; the new deployment-stage model is the deployment-stage model obtained by fusing the multiple filters after machine learning.
Table 1 below gives a quantitative comparison of the method of the invention and the IMDN method on the DIV2K validation set in terms of peak signal-to-noise ratio and lightweight metrics. To test network performance, 100 low-resolution images from the constructed test data were reconstructed; the experiments show that the reconstruction quality of the proposed method is almost identical to that of IMDN, while its lightweight metrics are far smaller than those of IMDN.
TABLE 1
Method      Parameters   Run time   Computation   PSNR
Proposed    687056       0.030 s    67 G          29.00
IMDN        893936       0.040 s    75 G          29.01
Based on the above method, a system implementing it may be constructed, comprising: a first module for constructing the neural-network training sample set. This module downsamples the high-resolution images by a factor of 4 to obtain low-resolution images, augments the resulting low-resolution images by rotations of 90, 180 and 270 degrees to obtain low-resolution images at different angles, and then overlap-samples each low-resolution image to obtain a set of overlapping low-resolution image blocks used as the low-resolution training set. The same overlapping sampling is applied to the corresponding high-resolution images, and the resulting image set serves as the labels against which the results of low-resolution image reconstruction are compared in supervised machine learning.
A second module for establishing the training convolutional network. This module establishes a lightweight convolutional network that learns the mapping from low-resolution to high-resolution images: a low-resolution image LR is taken as input, shallow features are extracted by a convolutional layer, deep features of the image are learned by stacked CACB modules, the extracted shallow and deep features are fused, and the result is upsampled by sub-pixel convolution to obtain the high-resolution image. A CACB module consists of four fusion convolutional layers, whose structure differs between the training stage and the deployment stage; one quarter of the features of each fusion convolutional layer is retained for the final feature fusion.
A third module for fusing the filters into the deployment-stage model. In the training stage, the lightweight design reduces the number of backbone modules as far as possible, and the feature-extraction capability of a single k×k square filter is limited, so a multi-branch asymmetric filter is designed for stronger feature learning. The left side of the multi-branch structure is an example of the training-stage fusion convolutional layer, which is split into three asymmetric filters of different sizes, of the forms k×k, 1×k and k×1; the parameters of these filters may be obtained after training.
The deployment stage carries out the fusion of the filters for deployment: the three training-stage filters are fused by weighted summation into a single k×k filter. The deployment-stage fusion convolutional layer is this fused filter, which retains all the feature-extraction capability learned by the three training-stage filters.
A fourth module for reconstructing the low-resolution image. This module feeds the low-resolution image to be reconstructed into the trained lightweight convolutional network and reconstructs the high-resolution image using the new deployment-stage model obtained by filter fusion.
As described above, although the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limiting the invention itself. Various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A super-resolution image reconstruction method based on filter fusion is characterized by comprising the following steps:
step 1, constructing a training sample set for the later filter fusion;
the training sample set comprises a low-resolution image training set and a high-resolution image set, and the training set is obtained by overlapping and sampling images and obtaining corresponding resolution image blocks;
step 2, generating a training convolutional network;
wherein the training convolutional network is used to achieve the ability to learn a low resolution image to high resolution image mapping;
step 3, fusing the preset filters into a single filter to obtain a new deployment-stage model, the fused filter retaining all the feature-extraction capability learned in the training stage; in the training stage, a multi-branch asymmetric filter form is designed within the lightweight design for feature learning; the left side of the multi-branch structure is an example of the training-stage fusion convolutional layer, which is split into three asymmetric filters of different sizes, of the forms k×k, 1×k and k×1; the parameters of the filters can be obtained after training;
the deployment stage carries out the fusion of the filters for deployment: the three training-stage filters are fused by weighted summation into a single k×k filter, and the deployment-stage fusion convolutional layer is the fused filter, retaining all the feature-extraction capability learned by the three training-stage filters; wherein k is a positive integer;
step 4, performing super-resolution reconstruction on the low-resolution image to be reconstructed by using a new deployment stage model to obtain a reconstructed high-resolution image;
the new deployment phase model is a deployment phase model which fuses a plurality of filters after machine learning.
2. The method of super-resolution image reconstruction based on filter fusion according to claim 1, wherein the step 1 further comprises:
the training sample set is divided into a low-resolution training set and a high-resolution image block, wherein the low-resolution training set is obtained by the following steps: firstly, performing N times downsampling on a high-resolution image to obtain different low-resolution images; then, expanding the obtained low resolution image; finally, carrying out overlapping sampling on each obtained low-resolution image to obtain a group of overlapped low-resolution image blocks, and taking the overlapped low-resolution image blocks as a low-resolution training set;
the high-resolution image block acquisition mode is as follows: firstly, performing overlapped sampling on high-resolution images corresponding to N times of downsampling operation, and then taking an obtained group of corresponding overlapped high-resolution image blocks as high-resolution tag images; n is a positive integer;
the expansion mode of expanding the obtained low-resolution image is to perform rotation transformation of 90 degrees, 180 degrees and 270 degrees so as to obtain low-resolution images with different angles.
3. The method of super-resolution image reconstruction based on filter fusion according to claim 1, wherein the step 2 further comprises:
the process of training the convolutional network is as follows: firstly, taking a low-resolution image LR as input, extracting shallow features through a convolution layer, then learning deep features of the image through a plurality of stacked CACB modules, finally fusing the extracted shallow features and deep features, and up-sampling in a sub-pixel convolution mode to obtain a high-resolution image;
the CACB module consists of four fusion convolution layers, and one-fourth of the characteristics of each fusion convolution layer are reserved for final characteristic fusion; the structural details of the fusion convolutional layers involved in the module are divided into a training phase and a deployment phase.
4. A system for super-resolution image reconstruction based on filter fusion, for implementing the method of any one of the preceding claims 1 to 3, characterized by comprising the following modules:
a first module for constructing a neural network training learning sample set;
a second module for establishing a training convolutional network;
a third module for fusing the filter deployment phase model;
a fourth module for implementing low resolution image reconstruction.
5. The system for reconstructing super-resolution images based on filter fusion according to claim 4, wherein said first module further performs N-times downsampling on the high-resolution images to obtain different low-resolution images, and expands the generated low-resolution images in such a manner that the obtained low-resolution images are subjected to rotation transformation of 90 degrees, 180 degrees and 270 degrees to obtain low-resolution images of different angles, and then performs overlap sampling on each of the low-resolution images to obtain a set of overlapped low-resolution image blocks as a low-resolution training set;
and carrying out the same overlapping sampling on the corresponding high-resolution images, and taking the obtained picture set as an image tag for comparing the low-resolution image reconstruction results in the supervised machine learning.
6. The system for reconstructing super-resolution image based on filter fusion according to claim 4, wherein said second module further establishes a lightweight convolutional network to perform a learning of mapping from low resolution image to high resolution image, wherein the low resolution image LR is taken as input, shallow features are extracted through a convolutional layer, deep features of the image are learned through a stacked CACB module, and finally the extracted shallow and deep features are fused, and up-sampled by means of sub-pixel convolution to obtain the high resolution image; the CACB module consists of four fusion convolution layers, the structural details of the fusion convolution layers are divided into a training stage and a deployment stage, and one-fourth of the features of each fusion convolution layer are reserved for final feature fusion.
7. The system for super-resolution image reconstruction based on filter fusion as recited in claim 4, wherein said third module further performs feature learning by designing a multi-branch asymmetric filter form within the lightweight design during the training stage; the left side of the multi-branch structure is an example of the training-stage fusion convolutional layer, which is split into three asymmetric filters of different sizes, of the forms k×k, 1×k and k×1; the parameters of the filters can be obtained after training;
the deployment stage is used for carrying out the fusion deployment process of the filters, and the three filters in the training stage are fused into a K-by-K filter by weighting, and the fusion convolution layer in the deployment stage is the fused filter and has all the feature extraction capability learned by the three filters in the training stage, wherein K is a positive integer.
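The deployment-stage fusion of claim 7 rests on the linearity of convolution: summing the outputs of a k×k, a 1×k, and a k×1 branch equals a single convolution with a k×k kernel obtained by zero-padding the asymmetric kernels onto the centre row and column and adding them. A numpy sketch under stated assumptions (unit branch weights, zero-padded 'same' convolution; the patent's weighting scheme is not specified):

```python
import numpy as np

def conv2d_same(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Zero-padded 'same' cross-correlation, the arithmetic a conv layer performs."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def fuse(k_sq, k_row, k_col, w=(1.0, 1.0, 1.0)):
    """Merge the three training-stage kernels into one deployment-stage kxk kernel:
    the 1xk and kx1 branches land on the centre row/column of the square kernel."""
    k = k_sq.shape[0]
    fused = w[0] * k_sq.copy()
    fused[k // 2, :] += w[1] * np.asarray(k_row).ravel()
    fused[:, k // 2] += w[2] * np.asarray(k_col).ravel()
    return fused

rng = np.random.default_rng(0)
k_sq = rng.normal(size=(3, 3))
k_row = rng.normal(size=(1, 3))   # 1 x k branch
k_col = rng.normal(size=(3, 1))   # k x 1 branch
img = rng.normal(size=(8, 8))

# Three-branch training-stage output...
three_branch = (conv2d_same(img, k_sq) + conv2d_same(img, k_row)
                + conv2d_same(img, k_col))
# ...matches the single fused deployment-stage filter.
single_fused = conv2d_same(img, fuse(k_sq, k_row, k_col))
print(np.allclose(three_branch, single_fused))  # True
```

This is why the deployed network pays the cost of one k×k convolution per layer while retaining what all three branches learned.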
CN202011070223.7A 2020-09-29 2020-09-29 Super-resolution image reconstruction method and system based on filter fusion Active CN112200720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011070223.7A CN112200720B (en) 2020-09-29 2020-09-29 Super-resolution image reconstruction method and system based on filter fusion


Publications (2)

Publication Number Publication Date
CN112200720A CN112200720A (en) 2021-01-08
CN112200720B true CN112200720B (en) 2023-08-08

Family

ID=74013082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011070223.7A Active CN112200720B (en) 2020-09-29 2020-09-29 Super-resolution image reconstruction method and system based on filter fusion

Country Status (1)

Country Link
CN (1) CN112200720B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139899A (en) * 2021-03-31 2021-07-20 桂林电子科技大学 Design method of high-quality light-weight super-resolution reconstruction network model
CN113313691A (en) * 2021-06-03 2021-08-27 上海市第一人民医院 Thyroid color Doppler ultrasound processing method based on deep learning
CN114418863B (en) * 2022-03-31 2022-06-07 北京小蝇科技有限责任公司 Cell image restoration method, cell image restoration device, computer storage medium and electronic equipment
CN114972043B (en) * 2022-08-03 2022-10-25 江西财经大学 Image super-resolution reconstruction method and system based on combined trilateral feature filtering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105072373A (en) * 2015-08-28 2015-11-18 中国科学院自动化研究所 Bilateral-circulation convolution network-based video super-resolution method and system
NL2016285A (en) * 2016-02-19 2017-08-24 Scyfer B V Device and method for generating a group equivariant convolutional neural network.
CN108447020A (en) * 2018-03-12 2018-08-24 南京信息工程大学 A kind of face super-resolution reconstruction method based on profound convolutional neural networks
CN110858391A (en) * 2018-08-23 2020-03-03 通用电气公司 Patient-specific deep learning image denoising method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Survey of Human Action Recognition Methods Based on Deep Learning; Cai Qiang et al.; Computer Science; Vol. 47, No. 4; pp. 85-93 *

Also Published As

Publication number Publication date
CN112200720A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
CN112200720B (en) Super-resolution image reconstruction method and system based on filter fusion
CN110163801B (en) Image super-resolution and coloring method, system and electronic equipment
CN110119780A (en) Based on the hyperspectral image super-resolution reconstruction method for generating confrontation network
CN111861961A (en) Multi-scale residual error fusion model for single image super-resolution and restoration method thereof
CN109146788A (en) Super-resolution image reconstruction method and device based on deep learning
CN111105352A (en) Super-resolution image reconstruction method, system, computer device and storage medium
CN106204449A (en) A kind of single image super resolution ratio reconstruction method based on symmetrical degree of depth network
Sun et al. Lightweight image super-resolution via weighted multi-scale residual network
CN111695457B (en) Human body posture estimation method based on weak supervision mechanism
CN110033417A (en) A kind of image enchancing method based on deep learning
CN115358932B (en) Multi-scale feature fusion face super-resolution reconstruction method and system
CN112365403B (en) Video super-resolution recovery method based on deep learning and adjacent frames
CN111784623A (en) Image processing method, image processing device, computer equipment and storage medium
CN111768340A (en) Super-resolution image reconstruction method and system based on dense multi-path network
CN112288630A (en) Super-resolution image reconstruction method and system based on improved wide-depth neural network
CN115100039B (en) Lightweight image super-resolution reconstruction method based on deep learning
CN111414988B (en) Remote sensing image super-resolution method based on multi-scale feature self-adaptive fusion network
CN111028302B (en) Compressed object imaging method and system based on deep learning
CN112017116A (en) Image super-resolution reconstruction network based on asymmetric convolution and construction method thereof
Ai et al. Single image super-resolution via residual neuron attention networks
CN114359039A (en) Knowledge distillation-based image super-resolution method
CN110458057A (en) A kind of convolutional neural networks hyperspectral image classification method kept based on edge
Sun et al. ESinGAN: Enhanced single-image GAN using pixel attention mechanism for image super-resolution
CN109272450A (en) A kind of image oversubscription method based on convolutional neural networks
CN116029905A (en) Face super-resolution reconstruction method and system based on progressive difference complementation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 203b, building 3, artificial intelligence Industrial Park, 266 Chuangyan Road, Qilin science and Technology Innovation Park, Jiangning District, Nanjing City, Jiangsu Province, 211000

Applicant after: Zhongke Fangcun Zhiwei (Nanjing) Technology Co.,Ltd.

Applicant after: Zhongke Nanjing artificial intelligence Innovation Research Institute

Address before: Room 203b, building 3, artificial intelligence Industrial Park, 266 Chuangyan Road, Qilin science and Technology Innovation Park, Jiangning District, Nanjing City, Jiangsu Province, 211000

Applicant before: Zhongke Fangcun Zhiwei (Nanjing) Technology Co.,Ltd.

Applicant before: NANJING ARTIFICIAL INTELLIGENCE CHIP INNOVATION INSTITUTE, INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES

GR01 Patent grant