CN112767252B - Image super-resolution reconstruction method based on convolutional neural network


Info

Publication number: CN112767252B (application CN202110105967.6A)
Authority: CN (China)
Prior art keywords: image, convolution, layer, convolutional, neural network
Other languages: Chinese (zh)
Other versions: CN112767252A
Inventors: 李劼, 任春辉, 王斌, 付毓生
Current Assignee: University of Electronic Science and Technology of China; CETC 54 Research Institute
Original Assignee: University of Electronic Science and Technology of China; CETC 54 Research Institute
Priority date: 2021-01-26
Filing date: 2021-01-26
Publication date: 2022-08-02
Application filed by University of Electronic Science and Technology of China and CETC 54 Research Institute; priority to CN202110105967.6A
Publication of CN112767252A: 2021-05-07
Application granted; publication of CN112767252B: 2022-08-02
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G06T3/4046 Scaling the whole image or part thereof using neural networks
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Abstract

The invention belongs to the technical field of image processing, and particularly relates to an image super-resolution reconstruction method based on a convolutional neural network. Based on the principle that differently shaped convolution kernels have different receptive fields, the method improves the convolutional neural network with irregularly shaped (asymmetric) convolution kernels, so that the network can extract rich image features with fewer parameters. Meanwhile, by using the tanh function as the activation function and by using skip connections, the feature-map information of multiple layers is fully exploited, improving the quality of the reconstructed image. In addition, the method applies lightweight (grouped-convolution) optimization to the convolutional neural network, so that the parameters and computation of the whole network are reduced while the quality of the reconstructed image degrades only slightly. Compared with conventional networks, the method improves reconstruction quality without a large increase in parameters, and the reconstructed images are noticeably better in visual quality than those reconstructed by conventional methods.

Description

Image super-resolution reconstruction method based on convolutional neural network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image super-resolution reconstruction method based on a convolutional neural network.
Background
Image super-resolution reconstruction is an image processing technique that reconstructs a low-resolution image into a high-resolution image. As the display capability of devices improves, users need higher-resolution images for a better visual experience, and higher-resolution images also make scientific research easier for practitioners in many industries; super-resolution reconstruction of images is therefore needed.
Image super-resolution reconstruction methods include interpolation-based, fitting-based, and learning-based approaches. Learning-based reconstruction is currently the most popular: it offers good reconstruction quality, rich detail in the reconstructed image, and a good visual impression, and it is widely applied in the medical, satellite, and civilian fields, so improving its reconstruction quality is of great significance. Learning-based methods further divide into those based on dictionary learning and those based on convolutional neural networks; the latter have stronger feature-extraction and nonlinear-mapping capabilities, so the quality of their reconstructed images is better than that of other methods.
At present, image super-resolution reconstruction methods based on convolutional neural networks mainly fall into the following categories:
1. Convolutional neural networks
The convolutional neural network method applies a convolutional neural network to the image super-resolution reconstruction problem, mainly using three convolutional layers to reconstruct the image. The method effectively improves the quality of the reconstructed image, but because the network has few layers and makes low use of image features, the image information is not fully exploited and the reconstruction effect can be further improved.
2. Recurrent neural networks
The recurrent neural network method performs super-resolution reconstruction with a recurrent network: after the feature-extraction layer, the feature maps are convolved repeatedly using the recurrent-convolution principle, and the result of each pass is forwarded to the final convolutional layer, thereby extracting the image features. Although this does not change the overall super-resolution framework much, the recurrent convolution can fully extract certain features of the image, alleviating the problem that some convolutional networks have too few layers and extract features insufficiently. However, because the recurrent network reuses the same convolutional layer for each pass, it cannot fully extract different kinds of features: the reconstructed image shows a good effect on certain repeated features but only an average effect on the others.
3. Adversarial neural networks
This method is based on the adversarial neural network: by training a generator network and a discriminator network simultaneously, images with rich texture can be reconstructed. However, during training the network optimizes the discriminator's impression of the image, so the content of the reconstructed image can differ considerably from that of the real image; moreover, training the two networks simultaneously takes longer and consumes more resources, placing very high demands on equipment, so the practicality is somewhat limited.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide an image super-resolution reconstruction method based on an improved convolutional neural network, namely Horizontal and Vertical Super-Resolution (HVSR), which offers good reconstructed-image performance with fewer network parameters.
The technical scheme of the invention is an image super-resolution reconstruction method based on a convolutional neural network which, as shown in FIG. 1, comprises the following steps:
S1, selecting n images as a training set {P_H1, P_H2, ..., P_Hn}, where the subscript H indicates that the image is a high-resolution image;
S2, preprocessing the training set: randomly extracting a 100 × 100 pixel region from each image of the training set (if an image is smaller than 100 pixels in a dimension, the insufficient region is filled with 0), and then down-sampling the pixel regions to obtain the corresponding low-resolution image set {P_L1, P_L2, ..., P_Ln};
S3, constructing a convolutional neural network, with the direction from the input to the output of the network defined as the vertical direction; then:
the network sequentially comprises, along the vertical direction, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, a deconvolution layer and a seventh convolutional layer; wherein:
the input of the first convolutional layer is the low-resolution image set, and the feature maps output by the first convolutional layer are divided into two equal groups;
the second convolutional layer has two convolution kernels, a × 1 and 1 × a, which take the two output groups of the first convolutional layer as their respective inputs; the output of each kernel is divided into two equal groups, i.e. the second convolutional layer outputs four groups of feature maps;
the third convolutional layer has four convolution kernels, alternating between a × 1 and 1 × a along the horizontal direction; the first pair of a × 1 and 1 × a kernels takes as input the two groups of feature maps output by the a × 1 kernel of the second convolutional layer, and the second pair takes as input the two groups output by the 1 × a kernel of the second convolutional layer; the output of each kernel is divided into two equal groups, i.e. the third convolutional layer outputs eight groups of feature maps;
the fourth convolutional layer has eight convolution kernels, alternating between a × 1 and 1 × a along the horizontal direction; the inputs of the kernels are assigned in the same manner as in the third convolutional layer, and each kernel outputs one group of feature maps;
the fifth convolutional layer is a 1 × 1 convolution kernel whose input is all the feature maps output by the second, third and fourth convolutional layers; the number of feature maps it outputs is half that of the first layer;
after each convolution in the network, adopting a tanh function as an activation function;
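For concreteness, layers one to five described above can be sketched in PyTorch as follows. This is a minimal sketch under assumptions: grayscale input, a channel width of 32, zero padding, and a = 5 as in the preferred embodiment given below; the sixth convolutional layer, the deconvolution layer and the seventh layer are omitted, since the patent does not fix their widths.

    # Minimal sketch of layers 1-5 of the described network (assumed width=32,
    # grayscale input, a=5); the trailing 1x1 conv, deconvolution and 3x3 conv
    # stages are omitted here.
    import torch
    import torch.nn as nn

    class HVSRSketch(nn.Module):
        def __init__(self, a=5, width=32):
            super().__init__()
            pv, ph = (a // 2, 0), (0, a // 2)   # padding for ax1 and 1xa kernels
            self.conv1 = nn.Conv2d(1, width, 3, padding=1)          # layer 1: 3x3
            self.conv2 = nn.ModuleList([                            # layer 2: ax1, 1xa
                nn.Conv2d(width // 2, width // 2, (a, 1), padding=pv),
                nn.Conv2d(width // 2, width // 2, (1, a), padding=ph),
            ])
            self.conv3 = nn.ModuleList([                            # layer 3: 4 kernels
                nn.Conv2d(width // 4, width // 4,
                          (a, 1) if i % 2 == 0 else (1, a),
                          padding=pv if i % 2 == 0 else ph)
                for i in range(4)
            ])
            self.conv4 = nn.ModuleList([                            # layer 4: 8 kernels
                nn.Conv2d(width // 8, width // 8,
                          (a, 1) if i % 2 == 0 else (1, a),
                          padding=pv if i % 2 == 0 else ph)
                for i in range(8)
            ])
            self.conv5 = nn.Conv2d(3 * width, width // 2, 1)        # layer 5: 1x1 mix
            self.act = nn.Tanh()                                    # tanh after each conv

        def forward(self, x):
            f1 = self.act(self.conv1(x))
            g1 = torch.chunk(f1, 2, dim=1)                          # two equal groups
            f2 = [self.act(c(g)) for c, g in zip(self.conv2, g1)]
            g2 = [h for f in f2 for h in torch.chunk(f, 2, dim=1)]  # four groups
            f3 = [self.act(c(g)) for c, g in zip(self.conv3, g2)]
            g3 = [h for f in f3 for h in torch.chunk(f, 2, dim=1)]  # eight groups
            f4 = [self.act(c(g)) for c, g in zip(self.conv4, g3)]
            skip = torch.cat(f2 + f3 + f4, dim=1)                   # jump connection
            return self.act(self.conv5(skip))                       # half of layer-1 width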
S4, training the constructed convolutional neural network, specifically: the target image is set to be a residual image; the low-resolution image set is enlarged by bilinear interpolation to the size of the corresponding images in the training set, giving the enlarged low-resolution image set {P_IL1, P_IL2, ..., P_ILn}; the pixel values of the enlarged low-resolution images are subtracted from those at corresponding positions of the extracted pixel regions, giving the residual image set {P_R1, P_R2, ..., P_Rn}; the output image P_Oi obtained by feeding a low-resolution image P_Li into the network is compared with the corresponding residual image P_Ri to obtain the mean square error
MSE = (1/n) Σ_{i=1}^{n} ‖P_Oi − P_Ri‖²
Updating network parameters by taking the mean square error as a loss function of the network so as to obtain a trained convolutional neural network;
S5, performing super-resolution reconstruction of an image with the trained convolutional neural network, as shown in FIG. 2, specifically:
S51, the image P_L requiring super-resolution reconstruction is taken as the input image and fed into the convolutional neural network, giving the output image P_O;
S52, the image P_L is enlarged by bicubic interpolation to the size of the image P_O, giving the enlarged image P_IL;
S53, the pixel values of corresponding pixels of P_IL and P_O are added, giving the super-resolution image P_S.
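Steps S2, S4 and S5 can be sketched as follows, assuming grayscale tensors, bilinear down-sampling in S2 (the text does not specify the down-sampling method) and a network net whose output already has the size of the target:

    # Sketch of the residual training target (S2/S4) and reconstruction (S5).
    import torch
    import torch.nn.functional as F

    def make_training_pair(patch_hr, scale=4):
        """patch_hr: a 1x1x100x100 high-resolution pixel region from S2."""
        lr = F.interpolate(patch_hr, scale_factor=1 / scale, mode='bilinear',
                           align_corners=False)                      # down-sampling (S2)
        lr_up = F.interpolate(lr, size=patch_hr.shape[-2:],
                              mode='bilinear', align_corners=False)  # enlargement (S4)
        return lr, patch_hr - lr_up                                  # (P_L, residual P_R)

    def mse_loss(net, lr, residual):
        return torch.mean((net(lr) - residual) ** 2)                 # loss of S4

    def reconstruct(net, p_l):
        p_o = net(p_l)                                               # S51: network output
        p_il = F.interpolate(p_l, size=p_o.shape[-2:],
                             mode='bicubic', align_corners=False)    # S52: bicubic enlargement
        return p_il + p_o                                            # S53: pixelwise sum P_S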
Further, the first convolutional layer is a 3 × 3 convolution kernel, the second convolutional layer has 1 × 5 and 5 × 1 convolution kernels, the sixth convolutional layer is a 1 × 1 convolution kernel, and the seventh convolutional layer is a 3 × 3 convolution kernel.
The invention has the beneficial effects that:
(1) By adopting 1 × 5 and 5 × 1 convolution kernels instead of the traditional n × n kernels, the receptive field is richer; meanwhile, each convolution kernel has fewer parameters than the commonly used 3 × 3 kernel while ultimately yielding the same receptive field.
(2) Using the 1 × 5 and 5 × 1 convolution kernels together within one convolutional layer ensures that features are extracted in both the horizontal and the vertical direction; combining different kernels across multiple convolutional layers produces receptive fields of different shapes, enriching the network's ability to extract differently shaped features from an image.
(3) The invention adopts a lightweight design based on grouped convolution, reducing the computation of the network without greatly changing its reconstruction performance and thereby widening the range of application of the convolutional neural network.
Drawings
FIG. 1 is a flow chart of the training procedure of the method of the present invention.
FIG. 2 is a flow chart of image super-resolution reconstruction by the method of the present invention.
FIG. 3 is a diagram of a convolutional neural network architecture of the present invention.
FIG. 4 is a graph of the performance of different models on a test set after different training cycles.
FIG. 5 is a super-resolution reconstruction effect diagram of different models on the same image.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings:
the convolutional neural network is defined as:
Y=a(w*X+b)
y represents the output, a (-) represents the activation function, w represents the convolution kernel, and b represents the bias term.
Whereas conventional convolutional neural networks for super-resolution reconstruction use n × n convolution kernels, the method of the invention defines irregular convolution kernels: two kernel shapes, 1 × 5 and 5 × 1, replace the original kernels. These kernels achieve the same receptive field as conventional kernels with fewer parameters. For example, a 5 × 5 kernel gives a 5 × 5 receptive field with 25 parameters. With 3 × 3 kernels, two stacked 3 × 3 kernels are required to reach a 5 × 5 receptive field, using 18 parameters. With the 1 × 5 and 5 × 1 kernels, one kernel of each shape is required to reach a 5 × 5 receptive field, using only 10 parameters, far fewer than the conventional kernels.
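The counts quoted above can be verified directly (per input-output channel pair, biases ignored):

    # Parameters needed to cover a 5x5 receptive field, per channel pair:
    params_5x5  = 5 * 5           # one 5x5 kernel -> 25
    params_3x3  = 2 * (3 * 3)     # two stacked 3x3 kernels -> 18
    params_asym = 1 * 5 + 5 * 1   # one 1x5 plus one 5x1 kernel -> 10
    print(params_5x5, params_3x3, params_asym)  # 25 18 10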
When a conventional convolutional neural network performs super-resolution reconstruction, each convolutional layer uses only one kernel shape; after multilayer convolution, the receptive field of a point in the feature map has a single shape, and the region of the original image from which features are extracted is relatively fixed. The method of the invention instead places the 1 × 5 and 5 × 1 kernels in the same convolutional layer, which guarantees that the image information of the previous layer is extracted in both the horizontal and the vertical direction and avoids the problem that extracting in a single direction misses the feature information of the other direction. At the same time, the combination of multiple kernels enriches the receptive-field shapes of the whole network, which is more conducive to extracting image features of certain shapes. Over multiple convolutional layers, the receptive fields combine with one another to yield receptive fields of different shapes; these have different extraction capabilities for differently shaped image features and can capture the various features of an image better than a receptive field of a single shape.
The network of the invention adopts grouped convolution: the feature maps extracted at each layer are divided into groups, and each group is convolved by its own kernel of the next layer. For example, the maps extracted by the second-layer kernels are divided into two groups, and each group is convolved by 1 × 5 and 5 × 1 kernels respectively. The number of kernels per layer is unchanged, but through grouping the computation required per layer is greatly reduced: after the feature maps extracted at the second layer are divided into two groups, the third convolutional layer needs only 50% of the computation of the second; when the maps extracted at the third layer are divided into four groups, the computation drops to 25%, and so on. Before the subsequent convolution, all feature maps are collected by skip connections and "mixed" with a 1 × 1 convolution kernel, changing the number of feature maps and reducing the computation of the later convolutional layers.
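In PyTorch this splitting can be expressed either with explicit per-group kernel banks, as in the sketch of the network above, or with the groups argument of nn.Conv2d; a minimal illustration with assumed channel counts:

    import torch.nn as nn

    # 32 feature maps convolved as one block vs. split into two groups:
    full    = nn.Conv2d(32, 32, kernel_size=(1, 5), padding=(0, 2))
    grouped = nn.Conv2d(32, 32, kernel_size=(1, 5), padding=(0, 2), groups=2)
    print(sum(p.numel() for p in full.parameters()))     # 5152 weights and biases
    print(sum(p.numel() for p in grouped.parameters()))  # 2592 -- roughly half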
The tanh function is adopted as the activation function after each convolution because, in the image super-resolution reconstruction problem, the required output is image information: objects in an image are smooth and continuous, and the value range of the processed target image is [-1, 1]. The expression of the tanh function is:
tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))
The output value of the tanh function also lies in [-1, 1]. Unlike the ReLU and PReLU functions, which are not differentiable at 0, the tanh function does not set negative values directly to 0, so image features are not left unactivated and no image information is lost during training. Near 0 the tanh output changes rapidly, so small variations are captured well; far from 0 the rate of change is small and the output range is bounded, which suppresses the growth of large convolution results and alleviates the problems of gradient vanishing and gradient explosion. The feature maps activated by this function are therefore closer to the target image, which benefits the subsequent processing of the super-resolution reconstruction.
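A small numeric illustration of this difference (input values chosen arbitrarily):

    import torch

    x = torch.tensor([-2.0, -0.1, 0.1, 2.0])
    print(torch.relu(x))  # tensor([0.0000, 0.0000, 0.1000, 2.0000]) -- negatives zeroed out
    print(torch.tanh(x))  # tensor([-0.9640, -0.0997, 0.0997, 0.9640]) -- sign kept, bounded in (-1, 1)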
A dropout method is used during training: after a certain number of training periods, part of the convolution kernels are randomly shielded and training continues; after a further period of training, a new set of kernels is selected for shielding; finally all kernels are combined. This prevents overfitting of the whole convolutional neural network and improves the robustness of the network.
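The exact shielding mechanism is not specified; one plausible reading, in which a random subset of a layer's kernels is masked for a stretch of epochs (unlike standard dropout, which re-samples the mask at every forward pass), might look like this:

    # Hypothetical kernel-shielding step; conv is an nn.Conv2d layer.
    import torch
    import torch.nn.functional as F

    def new_kernel_mask(conv, keep_prob=0.8):
        """Choose which output kernels stay active for the next training phase."""
        return (torch.rand(conv.out_channels) < keep_prob).float()

    def shielded_forward(conv, x, mask):
        w = conv.weight * mask.view(-1, 1, 1, 1)              # zero out shielded kernels
        b = conv.bias * mask if conv.bias is not None else None
        return F.conv2d(x, w, b, padding=conv.padding)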
The lightweight processing of the convolutional neural network of the invention is embodied as follows:
When a conventional convolutional neural network performs image super-resolution reconstruction, each convolution kernel of the next layer convolves all the feature maps of the previous layer, so every convolutional layer of the traditional network has a large computation load and needs many parameters. The computation of the standard convolution is:
Q=HWNKLM
where H × W denotes the spatial size of the input image (H the height of the image, W the width), N denotes the number of channels of the input image, K × L denotes the size of the convolution kernel (K the length of the kernel, L the width), and M denotes the number of convolution kernels (the number of channels of the output image).
The invention adopts grouped convolution to divide the feature maps of the previous layer into groups; each convolution kernel of the next layer convolves only one of the groups, and the computation of the convolution becomes:
Q_G = HWNKLM / G
g is the number of groups of the feature images, and it can be seen that the more the groups are, the smaller the computation amount of the convolution operation is, but at the same time, the fewer the number of feature images of the previous layer extracted from each feature image in the feature image of the next layer is, and when G takes the maximum value N, the whole network becomes a single-channel convolution neural network.
As shown in FIG. 3, the method of the invention groups the extracted feature maps at layers 2 to 4 of the convolutional neural network. Layer 2 is divided into two groups, the feature maps extracted by the 1 × 5 kernel and those extracted by the 5 × 1 kernel, and in the next layer each group is convolved by kernels of both shapes, 1 × 5 and 5 × 1; this ensures the richness of the receptive field so that the network can extract rich image features. Layer 3 divides the feature maps into 4 groups according to its 4 kernels, and so on; layer 4 divides them into 8 groups. With grouped convolution, the parameter count of the network is 24% lower than without it, greatly saving computation.
Meanwhile, after the grouped convolutions, the feature maps of the previous layers are combined by skip connections and mixed with a 1 × 1 convolution kernel, which reduces the number of feature maps in the next layer and thus the computation and parameter count of the subsequent convolution.
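In isolation, this mixing step might look like the following (channel counts assumed, matching the sketch of the network above):

    import torch
    import torch.nn as nn

    # Collect the feature maps of layers 2-4 by skip connection and mix them with
    # a 1x1 kernel, halving the channel count seen by the subsequent layers.
    f2, f3, f4 = (torch.randn(1, 32, 100, 100) for _ in range(3))
    mix = nn.Conv2d(96, 16, kernel_size=1)          # 96 = 3 x 32 collected maps
    mixed = torch.tanh(mix(torch.cat([f2, f3, f4], dim=1)))
    print(mixed.shape)  # torch.Size([1, 16, 100, 100])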
The effectiveness of the invention is demonstrated below by a simulation example, with reference to FIG. 4 and FIG. 5:
In the training process, 10000 pictures from the ImageNet2012 data set are used as the training set; from each picture a 100 × 100 pixel block is randomly selected as the training region, and the image magnification factor is set to 4. The learning rate is set to 0.0001.
FIG. 4 compares the reconstruction performance of different network models on the test set over different training periods. The abscissa is the number of epochs the model has been trained; the ordinate is the peak signal-to-noise ratio (PSNR), an objective index of image reconstruction quality. As can be seen from the figure, the network of the invention attains good image reconstruction quality after a little more than ten training epochs, outperforming the other networks. This example demonstrates that the invention has good feature-extraction and image super-resolution reconstruction capability and is easy to train.
FIG. 5 shows the reconstruction results of the different network models. As can be seen from the figure, the image reconstructed by the network of the invention improves on the other networks in human visual impression, with less blurring and, unlike the other networks, no jagged artifacts in the reconstruction. This example demonstrates that the images reconstructed by the invention have a very good visual appearance.

Claims (2)

1. An image super-resolution reconstruction method based on a convolutional neural network is characterized by comprising the following steps:
S1, selecting n images as a training set {P_H1, P_H2, ..., P_Hn}, where the subscript H indicates that the image is a high-resolution image;
S2, preprocessing the training set: randomly extracting a 100 × 100 pixel region from each image of the training set (if an image is smaller than 100 pixels in a dimension, the insufficient region is filled with 0), and then down-sampling the pixel regions to obtain the corresponding low-resolution image set {P_L1, P_L2, ..., P_Ln};
S3, constructing a convolutional neural network, with the direction from the input to the output of the network defined as the vertical direction; then:
the network sequentially comprises, along the vertical direction, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, a seventh convolutional layer and a deconvolution layer; wherein:
the input of the first convolutional layer is the low-resolution image set, and the feature maps output by the first convolutional layer are divided into two equal groups;
the second convolutional layer has two convolution kernels, a × 1 and 1 × a, which take the two output groups of the first convolutional layer as their respective inputs; the output of each kernel is divided into two equal groups, i.e. the second convolutional layer outputs four groups of feature maps;
the third convolutional layer has four convolution kernels, alternating between a × 1 and 1 × a along the horizontal direction; the first pair of a × 1 and 1 × a kernels takes as input the two groups of feature maps output by the a × 1 kernel of the second convolutional layer, and the second pair takes as input the two groups output by the 1 × a kernel of the second convolutional layer; the output of each kernel is divided into two equal groups, i.e. the third convolutional layer outputs eight groups of feature maps;
the fourth convolutional layer has eight convolution kernels, alternating between a × 1 and 1 × a along the horizontal direction; the inputs of the kernels are assigned in the same manner as in the third convolutional layer, and each kernel outputs one group of feature maps;
the fifth convolutional layer is a 1 × 1 convolution kernel whose input is all the feature maps output by the second, third and fourth convolutional layers;
after each convolution in the network, adopting a tanh function as an activation function;
S4, training the constructed convolutional neural network, specifically: the target image is set to be a residual image; the low-resolution image set is enlarged by bilinear interpolation to the size of the corresponding images in the training set, giving the enlarged low-resolution image set {P_IL1, P_IL2, ..., P_ILn}; the pixel values of the enlarged low-resolution images are subtracted from those at corresponding positions of the extracted pixel regions, giving the residual image set {P_R1, P_R2, ..., P_Rn}; the output image P_Oi obtained by feeding a low-resolution image P_Li into the network is compared with the corresponding residual image P_Ri to obtain the mean square error
MSE = (1/n) Σ_{i=1}^{n} ‖P_Oi − P_Ri‖²
Updating the network parameters by taking the mean square error as the loss function of the network; after training for a preset number of training periods, the trained convolutional neural network is obtained;
S5, performing super-resolution reconstruction on an image with the trained convolutional neural network, specifically:
S51, the image P_L requiring super-resolution reconstruction is taken as the input image and fed into the convolutional neural network, giving the reconstructed output image P_O;
S52, the image P_L is enlarged by bicubic interpolation to the size of the image P_O, giving the enlarged reconstructed image P_IL;
S53, the pixel values of corresponding pixels of P_IL and P_O are added, giving the super-resolution image P_S.
2. The method for image super-resolution reconstruction based on a convolutional neural network of claim 1, wherein the first convolutional layer is a 3 × 3 convolution kernel, the second convolutional layer has 1 × 5 and 5 × 1 convolution kernels, the sixth convolutional layer is a 1 × 1 convolution kernel, and the seventh convolutional layer is a 3 × 3 convolution kernel.
CN202110105967.6A 2021-01-26 2021-01-26 Image super-resolution reconstruction method based on convolutional neural network Expired - Fee Related CN112767252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110105967.6A CN112767252B (en) 2021-01-26 2021-01-26 Image super-resolution reconstruction method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110105967.6A CN112767252B (en) 2021-01-26 2021-01-26 Image super-resolution reconstruction method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN112767252A CN112767252A (en) 2021-05-07
CN112767252B (en) 2022-08-02

Family

ID=75705858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110105967.6A Expired - Fee Related CN112767252B (en) 2021-01-26 2021-01-26 Image super-resolution reconstruction method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN112767252B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284112B (en) * 2021-05-27 2023-11-10 中国科学院国家空间科学中心 Method and system for extracting molten drop image contour based on deep neural network
CN113409195A (en) * 2021-07-06 2021-09-17 中国标准化研究院 Image super-resolution reconstruction method based on improved deep convolutional neural network
CN113610706A (en) * 2021-07-19 2021-11-05 河南大学 Fuzzy monitoring image super-resolution reconstruction method based on convolutional neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019032304A1 (en) * 2017-08-07 2019-02-14 Standard Cognition Corp. Subject identification and tracking using image recognition
CN110136067A (en) * 2019-05-27 2019-08-16 商丘师范学院 A kind of real-time imaging generation method for super-resolution B ultrasound image
CN110427922A (en) * 2019-09-03 2019-11-08 陈�峰 One kind is based on machine vision and convolutional neural networks pest and disease damage identifying system and method

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5696848A (en) * 1995-03-09 1997-12-09 Eastman Kodak Company System for creating a high resolution image from a sequence of lower resolution motion images
US9392166B2 (en) * 2013-10-30 2016-07-12 Samsung Electronics Co., Ltd. Super-resolution in processing images such as from multi-layer sensors
CN106548449A (en) * 2016-09-18 2017-03-29 北京市商汤科技开发有限公司 Generate method, the apparatus and system of super-resolution depth map
TWI624804B (en) * 2016-11-07 2018-05-21 盾心科技股份有限公司 A method and system for providing high resolution image through super-resolution reconstrucion
CN106600538A (en) * 2016-12-15 2017-04-26 武汉工程大学 Human face super-resolution algorithm based on regional depth convolution neural network
EP3596660A1 (en) * 2017-03-24 2020-01-22 Huawei Technologies Co., Ltd. Neural network data processing apparatus and method
CN107194893A (en) * 2017-05-22 2017-09-22 西安电子科技大学 Depth image ultra-resolution method based on convolutional neural networks
CN107730474B (en) * 2017-11-09 2022-02-22 京东方科技集团股份有限公司 Image processing method, processing device and processing equipment
CN108492249B (en) * 2018-02-08 2020-05-12 浙江大学 Single-frame super-resolution reconstruction method based on small convolution recurrent neural network
FR3089332B1 (en) * 2018-11-29 2021-05-21 Commissariat Energie Atomique Super-resolution device and method
CN109544457A (en) * 2018-12-04 2019-03-29 电子科技大学 Image super-resolution method, storage medium and terminal based on fine and close link neural network
KR102272409B1 (en) * 2019-06-04 2021-07-02 국방과학연구소 Learning method and inference method based on convolutional neural network for tonal frequency analysis
CN111402131B (en) * 2020-03-10 2022-04-01 北京师范大学 Method for acquiring super-resolution land cover classification map based on deep learning
CN111402138A (en) * 2020-03-24 2020-07-10 天津城建大学 Image super-resolution reconstruction method of supervised convolutional neural network based on multi-scale feature extraction fusion
CN111612799A (en) * 2020-05-15 2020-09-01 中南大学 Face data pair-oriented incomplete reticulate pattern face repairing method and system and storage medium
CN111754400B (en) * 2020-06-01 2023-12-26 杭州电子科技大学 Efficient picture super-resolution reconstruction method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019032304A1 (en) * 2017-08-07 2019-02-14 Standard Cognition Corp. Subject identification and tracking using image recognition
CN110136067A (en) * 2019-05-27 2019-08-16 商丘师范学院 A kind of real-time imaging generation method for super-resolution B ultrasound image
CN110427922A (en) * 2019-09-03 2019-11-08 陈�峰 One kind is based on machine vision and convolutional neural networks pest and disease damage identifying system and method

Also Published As

Publication number Publication date
CN112767252A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN112767252B (en) Image super-resolution reconstruction method based on convolutional neural network
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
Hui et al. Fast and accurate single image super-resolution via information distillation network
CN110310227B (en) Image super-resolution reconstruction method based on high-low frequency information decomposition
CN109671023A (en) A kind of secondary method for reconstructing of face image super-resolution
CN109389556A (en) The multiple dimensioned empty convolutional neural networks ultra-resolution ratio reconstructing method of one kind and device
CN109064396A (en) A kind of single image super resolution ratio reconstruction method based on depth ingredient learning network
CN108259994B (en) Method for improving video spatial resolution
CN108921786A (en) Image super-resolution reconstructing method based on residual error convolutional neural networks
CN108537733A (en) Super resolution ratio reconstruction method based on multipath depth convolutional neural networks
CN107730451A (en) A kind of compressed sensing method for reconstructing and system based on depth residual error network
CN107464216A (en) A kind of medical image ultra-resolution ratio reconstructing method based on multilayer convolutional neural networks
CN109685716A (en) A kind of image super-resolution rebuilding method of the generation confrontation network based on Gauss encoder feedback
CN109035146A (en) A kind of low-quality image oversubscription method based on deep learning
CN111242846A (en) Fine-grained scale image super-resolution method based on non-local enhancement network
CN112837224A (en) Super-resolution image reconstruction method based on convolutional neural network
CN111507462A (en) End-to-end three-dimensional medical image super-resolution reconstruction method and system
CN110490804A (en) A method of based on the generation super resolution image for generating confrontation network
CN113837946A (en) Lightweight image super-resolution reconstruction method based on progressive distillation network
CN110136067A (en) A kind of real-time imaging generation method for super-resolution B ultrasound image
Ai et al. Single image super-resolution via residual neuron attention networks
CN112017116A (en) Image super-resolution reconstruction network based on asymmetric convolution and construction method thereof
Sui et al. Gcrdn: Global context-driven residual dense network for remote sensing image super-resolution
CN112907446B (en) Image super-resolution reconstruction method based on packet connection network
Zafeirouli et al. Efficient, lightweight, coordinate-based network for image super resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20220802