CN115063297A - Image super-resolution reconstruction method and system based on parameter reconstruction - Google Patents


Info

Publication number
CN115063297A
Authority
CN
China
Prior art keywords
reconstruction, image, resolution, super, features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210760864.8A
Other languages
Chinese (zh)
Inventor
屈丹
柳聪
杨绪魁
牛铜
郝朝龙
李�真
贺晓年
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Information Engineering University of PLA Strategic Support Force
Zhengzhou Xinda Institute of Advanced Technology
Original Assignee
Information Engineering University of PLA Strategic Support Force
Zhengzhou Xinda Institute of Advanced Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force and Zhengzhou Xinda Institute of Advanced Technology
Priority to CN202210760864.8A
Publication of CN115063297A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4046 Scaling the whole image or part thereof using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention belongs to the technical field of super-resolution image reconstruction, and in particular relates to an image super-resolution reconstruction method and system based on parameter reconstruction. A super-resolution reconstruction network is constructed that extracts features from the low-resolution image in the input image data and reconstructs the image from the extracted features: the network first extracts shallow features from the input image data using standard convolution, then extracts deep features from the shallow features using depthwise separable convolution with parameter reconstruction, and performs image reconstruction using the deep features. Network training is performed with collected sample data, and for the image data to be reconstructed, the trained super-resolution reconstruction network extracts features and reconstructs the image. By performing feature extraction through parameter reconstruction, the method achieves deeper feature extraction while reducing the network's parameter count and computation, improves the quality of reconstructed images, and facilitates application in practical scenarios.

Description

Image super-resolution reconstruction method and system based on parameter reconstruction
Technical Field
The invention belongs to the technical field of super-resolution image reconstruction, and particularly relates to an image super-resolution reconstruction method and system based on parameter reconstruction.
Background
Super-resolution (SR) is an image processing technique that uses a computer to restore a high-resolution (HR) image from a low-resolution (LR) image or image sequence. A high-resolution image has a high pixel density and provides more detail, which often plays a critical role in applications. Existing image super-resolution reconstruction methods suffer from severe problems such as huge parameter counts and heavy computation; moreover, most existing methods are large network models with many layers, which makes reconstruction slow and degrades the practical image-restoration effect.
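A quick parameter count illustrates the cost gap that motivates the depthwise separable design used below. This is a hedged, generic comparison (biases omitted); the 48-channel width matches the training configuration reported later in this document, but the arithmetic holds for any widths:

```python
def standard_conv_params(c_in, c_out, k=3):
    # every output channel filters all c_in input channels with a k x k kernel
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    # one k x k kernel per input channel, then a 1x1 pointwise channel mix
    return c_in * k * k + c_in * c_out

c_in = c_out = 48  # channel width reported in the training setup below
std = standard_conv_params(c_in, c_out)
sep = depthwise_separable_params(c_in, c_out)
print(std, sep, round(std / sep, 1))  # 20736 2736 7.6
```

For this width the separable form needs roughly 7.6 times fewer parameters per layer, which is the kind of saving the method exploits.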
Disclosure of Invention
Therefore, the invention provides an image super-resolution reconstruction method and system based on parameter reconstruction. By performing feature extraction through parameter reconstruction, they achieve deeper feature extraction while reducing the number of network parameters and the amount of computation, improve the quality of reconstructed images, and facilitate application in practical scenarios.
According to the design scheme provided by the invention, the image super-resolution reconstruction method based on parameter reconstruction is provided, and comprises the following contents:
constructing a super-resolution reconstruction network that extracts features from the low-resolution image in the input image data and reconstructs the image from the extracted features, wherein the network first extracts shallow features from the input image data using standard convolution, then extracts deep features from the shallow features using depthwise separable convolution with parameter reconstruction, and performs image reconstruction using the deep features;
performing network training using the collected sample data; and, for the image data to be reconstructed, using the trained super-resolution reconstruction network to extract features and reconstruct the image.
As to the image super-resolution reconstruction method based on parameter reconstruction, further, the feature extraction part of the super-resolution reconstruction network consists of a shallow feature extraction unit, a deep feature extraction unit and a feature fusion unit: the shallow feature extraction unit extracts shallow features from the input image data using standard convolution; the deep feature extraction unit progressively extracts deep features from the shallow features using multiple depthwise separable convolutions with parameter reconstruction; and the feature fusion unit concatenates the deep features of different levels along the channel dimension and performs channel fusion through a convolution operation to obtain the fused deep features.
As to the image super-resolution reconstruction method based on parameter reconstruction, further, the sample data adopts a paired data set consisting of high-resolution images and the corresponding low-resolution images obtained by downsampling them; the low-resolution images serve as the network's input image data, and network training uses the loss between the super-resolution images output by the network and the corresponding high-resolution images in the data set.
As to the image super-resolution reconstruction method based on parameter reconstruction, further, the objective function of network training is expressed as:
θ* = argmin_θ (1/N) Σ_{i=1}^{N} ‖H_SR(LR_i; θ) - HR_i‖_1

where H_SR denotes the super-resolution reconstruction network, θ denotes the learnable network parameters, ‖·‖_1 denotes the L1 loss of the objective function, LR_i and HR_i denote a low-resolution image and the corresponding high-resolution image in the sample data, respectively, and N denotes the number of samples.
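The objective above can be sketched numerically. In this minimal NumPy example, `identity_net` is a stand-in for H_SR, used only to make the computation runnable; the quantity computed is the mean over N samples of the L1 norm between network output and ground truth:

```python
import numpy as np

def l1_objective(net, lr_batch, hr_batch):
    # mean over N samples of || net(LR_i) - HR_i ||_1
    n = len(lr_batch)
    return sum(np.abs(net(lr) - hr).sum() for lr, hr in zip(lr_batch, hr_batch)) / n

identity_net = lambda x: x                     # stand-in for H_SR
lr_batch = [np.zeros((2, 2)), np.ones((2, 2))]
hr_batch = [np.ones((2, 2)), np.ones((2, 2))]
print(l1_objective(identity_net, lr_batch, hr_batch))  # (4 + 0) / 2 = 2.0
```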
As to the image super-resolution reconstruction method based on parameter reconstruction, further, during network training, feature extraction is performed step by step using the depthwise separable convolution with parameter reconstruction: first, a first branch structure extracts features from the input data, and the features extracted by its branches are summed; the summed features are then fed into the next branch structure for further feature extraction, and the extracted features are summed to give the final output. The first branch structure comprises: a first BN branch that applies a convolution operation to the input data and then normalizes the result; a second BN branch that processes the input data as the first BN branch does and then applies a depthwise convolution operation followed by normalization; a third BN branch that applies an average pooling operation to the input data and then normalizes; and a fourth BN branch that applies a depthwise convolution operation to the input data and then normalizes the result. The next branch structure comprises: a first branch that applies a convolution operation to the input features and then normalizes; a second branch that processes the input features as the first branch does and then applies a further convolution operation and normalization; a third branch that processes the input features as the first branch does and then applies average pooling and normalization; and a fourth branch that processes the input features as the first branch does. The depthwise convolution operation uses a depthwise convolution kernel of a preset size larger than 1x1.
As to the image super-resolution reconstruction method based on parameter reconstruction, further, when extracting features in the super-resolution reconstruction network, a channel attention mechanism assigns weights to the extracted features: a global average pooling operation first computes the mean of each branch channel; the mean is then scaled and a Sigmoid activation function yields the weight coefficients of the different branch channels; finally, the channel weight coefficients are multiplied with the extracted features to obtain the feature data for fusion processing.
As to the image super-resolution reconstruction method based on parameter reconstruction, further, in the fusion of the extracted features, the features extracted by each branch channel are concatenated, and a convolution operation followed by a depthwise convolution operation is applied in sequence to the concatenated feature data to output the deep features.
As to the image super-resolution reconstruction method based on parameter reconstruction, further, in the image reconstruction of the super-resolution reconstruction network, the shallow features and deep features are first added, and an upsampling layer enlarges the summed feature data; convolution operations then further extract the enlarged feature data, and a pixel attention mechanism assigns a weight to each pixel of the enlarged feature data; finally, super-resolution reconstruction is completed by a global connection operation that fuses in the input low-resolution image.
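The upsampling step can be sketched as follows. Nearest-neighbour interpolation is the variant named in the embodiment later in this document; the scale factor here is illustrative:

```python
import numpy as np

def upsample_nearest(x, scale):
    # enlarge a (C, H, W) feature map by an integer factor, repeating pixels
    return x.repeat(scale, axis=1).repeat(scale, axis=2)

x = np.array([[[1.0, 2.0], [3.0, 4.0]]])   # one channel, 2x2
y = upsample_nearest(x, 2)                 # one channel, 4x4
print(y[0, 0])  # [1. 1. 2. 2.]
```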
As to the image super-resolution reconstruction method based on parameter reconstruction, further, the weight assignment performed on each pixel by the pixel attention mechanism is expressed as: X_1 = Sigmoid(conv_1x1(X_0)) × X_0, where X_1 denotes the output feature, Sigmoid denotes the Sigmoid activation function, conv_1x1 denotes a 1x1 convolution operation, and X_0 is the input feature.
Further, the present invention also provides an image super-resolution reconstruction system based on parameter reconstruction, comprising: a model construction module and an image reconstruction module, wherein,
the model construction module is used for constructing a super-resolution reconstruction network that extracts features from the low-resolution image in the input image data and reconstructs the image from the extracted features, wherein the network first extracts shallow features from the input image data using standard convolution, then extracts deep features from the shallow features using depthwise separable convolution with parameter reconstruction, and performs image reconstruction using the deep features;
the image reconstruction module is used for performing network training with the collected sample data and, for the image data to be reconstructed, using the trained super-resolution reconstruction network to extract features and reconstruct the image.
The invention has the beneficial effects that:
the method is designed for solving the practical problems of huge network parameters, heavy calculated amount, slow reconstruction speed and the like of the current image super-resolution reconstruction, and can effectively reduce the parameter amount and the calculated amount of the network and realize deeper feature extraction by utilizing a feature extraction structure of parameter reconstruction; a channel attention mechanism and a pixel attention mechanism are further adopted, the weight is redistributed to the characteristics, the texture details of the reconstructed image are enhanced, and the image reconstruction quality is improved; the number of network layers can be increased through widening in the training stage, huge video memory capacity and computing resources of special equipment are better utilized to obtain better network performance, a huge network structure can be converted into a single 3x3 deep convolution +1x1 convolution single branch structure through parameter reconstruction in the reasoning stage, on the premise that the network performance is kept unchanged, small parameters and calculated amount are obtained, the network reconstruction speed is effectively improved, the network is more suitable for being deployed in terminal equipment, and the network has better application prospect.
Description of the drawings:
FIG. 1 is a schematic flow of image super-resolution reconstruction based on parameter reconstruction in an embodiment;
fig. 2 is a schematic diagram of a super-resolution reconstruction network structure in the embodiment.
Detailed description of the embodiments:
in order to make the objects, technical solutions and advantages of the present invention clearer and more obvious, the present invention is further described in detail below with reference to the accompanying drawings and technical solutions.
Aiming at the problems of huge network parameters, heavy computation and slow reconstruction speed in current image super-resolution reconstruction, an embodiment of the invention, as shown in figure 1, provides an image super-resolution reconstruction method based on parameter reconstruction, comprising the following contents:
S101, constructing a super-resolution reconstruction network that extracts features from the low-resolution image in the input image data and reconstructs the image from the extracted features, wherein the network first extracts shallow features from the input image data using standard convolution, then extracts deep features from the shallow features using depthwise separable convolution with parameter reconstruction, and performs image reconstruction using the deep features;
S102, performing network training using the collected sample data; and, for the image data to be reconstructed, using the trained super-resolution reconstruction network to extract features and reconstruct the image.
By using the parameter-reconstruction feature extraction structure, the parameter count and computation of the network can be effectively reduced while deeper feature extraction is realized, which facilitates deployment in practical applications.
Further, in this embodiment, the feature extraction part of the super-resolution reconstruction network consists of a shallow feature extraction unit, a deep feature extraction unit and a feature fusion unit: the shallow feature extraction unit extracts shallow features from the input image data using standard convolution; the deep feature extraction unit progressively extracts deep features from the shallow features using multiple depthwise separable convolutions with parameter reconstruction; and the feature fusion unit concatenates the deep features of different levels along the channel dimension and performs channel fusion through a convolution operation to obtain the fused deep features.
The sample data may be a paired data set consisting of high-resolution images and the corresponding low-resolution images obtained by downsampling them; the low-resolution images serve as the network's input image data, and network training uses the loss between the super-resolution images output by the network and the corresponding high-resolution images in the data set.
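Building one such training pair can be sketched as follows. Average-pool downsampling is used here purely as a stand-in, since the text does not fix the downsampling kernel (bicubic is also common in SR data sets):

```python
import numpy as np

def downsample(hr, scale):
    # (C, H, W) -> (C, H/scale, W/scale) by averaging scale x scale blocks
    c, h, w = hr.shape
    return hr.reshape(c, h // scale, scale, w // scale, scale).mean(axis=(2, 4))

hr = np.arange(16.0).reshape(1, 4, 4)  # toy "high-resolution" image
lr = downsample(hr, 2)                 # paired low-resolution input
print(lr.shape)  # (1, 2, 2)
```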
Referring to fig. 2, a high-resolution image is obtained and downsampled to obtain the corresponding low-resolution image, forming a paired data set. In the feature extraction module, shallow feature extraction is applied to the low-resolution image to obtain shallow features, which are then extracted and fused through parameter reconstruction to obtain deep features; the extracted deep features are reconstructed by the image reconstruction module into a super-resolution image. The loss between the super-resolution image and the corresponding real high-resolution image is computed, and the network is trained repeatedly to obtain the final trained super-resolution reconstruction network. The loss function is the L1 loss, computed as:
θ* = argmin_θ (1/N) Σ_{i=1}^{N} ‖H_SR(LR_i; θ) - HR_i‖_1

where H_SR denotes the parameter-reconstruction-based image super-resolution network, θ denotes the learnable parameters, ‖·‖_1 denotes the L1 loss, LR_i and HR_i denote a low-resolution image and the corresponding high-resolution image, respectively, and N is the number of samples.
The shallow feature extraction part in this embodiment performs feature extraction on the low-resolution image to obtain shallow features. Its input is the low-resolution image feature X and its output is the shallow feature X_0. Specifically, given the input low-resolution image feature X, the shallow feature X_0 is extracted by a 3x3 standard convolution: X_0 = C_3x3(X), where C_3x3 denotes the 3x3 standard convolution operation.
Further, during network training in this embodiment, feature extraction is performed step by step using the depthwise separable convolution with parameter reconstruction: first, a first branch structure extracts features from the input data, and the features extracted by its branches are summed; the summed features are then fed into the next branch structure for further feature extraction, and the extracted features are summed to give the final output. The first branch structure comprises: a first BN branch that applies a convolution operation to the input data and then normalizes the result; a second BN branch that processes the input data as the first BN branch does and then applies a depthwise convolution operation followed by normalization; a third BN branch that applies an average pooling operation to the input data and then normalizes; and a fourth BN branch that applies a depthwise convolution operation to the input data and then normalizes the result. The next branch structure comprises: a first branch that applies a convolution operation to the input features and then normalizes; a second branch that processes the input features as the first branch does and then applies a further convolution operation and normalization; a third branch that processes the input features as the first branch does and then applies average pooling and normalization; and a fourth branch that processes the input features as the first branch does. The depthwise convolution operation uses a depthwise convolution kernel of a preset size larger than 1x1.
The parameter-reconstruction feature extraction part extracts the features of each level [X_1, …, X_n, …, X_K] step by step through K = 6 parameter-reconstruction feature extraction modules. Each module consists of four depthwise separable convolutions based on parameter reconstruction and a channel attention mechanism: the four depthwise separable convolutions extract deep features, and the channel attention mechanism assigns different channel weights within the deep features, further improving the module's feature extraction capability. The depthwise separable convolution based on parameter reconstruction comprises a training stage and an inference stage; the specific process is as follows:
First, in the training stage, a structure similar to the Inception network is designed; the flow is as follows:
input feature X i Feature extraction is carried out simultaneously through four branches to obtain feature X i1 、X i2 、X i3 、X i4 Then adding the extracted features to obtain an output feature X i5 . The calculation process is as follows:
X_i1 = BN(conv_1x1(X_i))
X_i2 = BN(Dconv_3x3(BN(conv_1x1(X_i))))
X_i3 = BN(P_avg(X_i))
X_i4 = BN(Dconv_3x3(X_i))
X_i5 = X_i1 + X_i2 + X_i3 + X_i4

where BN denotes a Batch Normalization layer, conv_1x1 a 1x1 convolution operation, Dconv_3x3 a 3x3 depthwise convolution operation, and P_avg an average pooling operation.
Next, the extracted feature X_i5 is fed into the next branch structure to obtain features X_i6, X_i7, X_i8 and X_i9, which are summed to obtain the final output feature X_out. The computation is:
X_i6 = BN(conv_1x1(X_i5))
X_i7 = BN(conv_1x1(BN(conv_1x1(X_i5))))
X_i8 = BN(P_avg(BN(conv_1x1(X_i5))))
X_i9 = BN(conv_1x1(X_i5))
X_out = X_i6 + X_i7 + X_i8 + X_i9

where BN denotes a Batch Normalization layer, conv_1x1 a 1x1 convolution operation, and P_avg an average pooling operation.
In the inference stage, the parameters learned in the training stage are equivalently converted, through parameter reconstruction, into the parameters of a single-branch 3x3 depthwise convolution plus 1x1 convolution, which produces the same output feature X_out as the training stage:

X_out = conv_1x1(Dconv_3x3(X_i))

where conv_1x1 denotes a 1x1 convolution operation and Dconv_3x3 a 3x3 depthwise convolution operation.
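The equivalence that parameter reconstruction relies on can be demonstrated in a reduced setting. The NumPy sketch below (illustrative shapes and random parameters) merges two parallel 1x1-convolution + BatchNorm branches into one 1x1 convolution at inference time; the same linear algebra extends to the depthwise 3x3 branches, which are not shown here:

```python
import numpy as np

def conv1x1(x, w, b):
    # a 1x1 convolution is a channel-mixing matrix: x (C_in, H, W), w (C_out, C_in)
    return np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]

def bn(y, gamma, beta, mean, var, eps=1e-5):
    # inference-time BatchNorm: a fixed per-channel affine map
    return gamma[:, None, None] * (y - mean[:, None, None]) / \
        np.sqrt(var[:, None, None] + eps) + beta[:, None, None]

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    # fold BN into the preceding conv: BN(Wx + b) = (sW)x + s(b - mean) + beta
    s = gamma / np.sqrt(var + eps)
    return w * s[:, None], (b - mean) * s + beta

rng = np.random.default_rng(0)
C, H, W = 4, 5, 5
x = rng.standard_normal((C, H, W))

branches = []
for _ in range(2):  # two parallel conv1x1 + BN branches (training structure)
    branches.append((rng.standard_normal((C, C)), rng.standard_normal(C),
                     rng.standard_normal(C), rng.standard_normal(C),
                     rng.standard_normal(C), rng.random(C) + 0.1))

# multi-branch output (training-time computation)
y_multi = sum(bn(conv1x1(x, w, b), g, bt, m, v) for w, b, g, bt, m, v in branches)

# merged single-branch parameters (inference-time computation)
w_merged, b_merged = np.zeros((C, C)), np.zeros(C)
for w, b, g, bt, m, v in branches:
    wf, bf = fuse_conv_bn(w, b, g, bt, m, v)
    w_merged += wf
    b_merged += bf
y_single = conv1x1(x, w_merged, b_merged)

print(np.allclose(y_multi, y_single))  # True: the branches collapse exactly
```

This is why the inference network keeps the training-time accuracy while carrying only single-branch parameters.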
When features are extracted in the super-resolution reconstruction network, a channel attention mechanism may assign weights to the extracted features: a global average pooling operation first computes the mean of each branch channel; the mean is then scaled and a Sigmoid activation function yields the weight coefficients of the different branch channels; finally, the channel weight coefficients are multiplied with the extracted features to obtain the feature data for fusion processing.
The attention mechanism assigns weights to the extracted feature X_out using a channel attention method, giving more weight to important channels and further improving feature extraction capability. Suppose X_out has C channels, each of size H × W. The implementation process is as follows:
first, the average value of each channel is obtained through the global average pooling operation. Then the average calculation formula of the c channel is as follows:
Figure BDA0003724235730000061
wherein C represents a channel, and C is 1,2, …, C, (i, j) is a pixel point of a corresponding position,
Figure BDA0003724235730000062
and (3) representing the characteristics of the c-th channel pixel point (i, j). T is c Represents the mean of the c-th channel.
The obtained mean is then scaled, and a Sigmoid activation function is applied to obtain the weight coefficients of the different channels. The coefficient of channel c is computed as:

W_c = Sigmoid(conv_1x1(ReLU(conv_1x1(T_c))))

where conv_1x1 denotes a 1x1 convolution operation, Sigmoid and ReLU are the activation functions, and W_c denotes the weight coefficient of channel c.
Finally, the channel weight coefficients are multiplied with the input feature X_out to obtain the re-weighted output feature X'_out, computed channel-wise as:

X'_out^c = W_c × X_out^c
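The channel-attention steps above can be sketched as follows. The two channel-mixing matrices `w1` and `w2` stand in for the two 1x1 convolutions (their identity weights here are purely illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # x: (C, H, W); w1, w2: (C, C) stand-ins for the two 1x1 convolutions
    t = x.mean(axis=(1, 2))                       # T_c: global average pooling
    w = sigmoid(w2 @ np.maximum(w1 @ t, 0.0))     # W_c: scale + ReLU + sigmoid
    return w[:, None, None] * x                   # re-weight each channel

x = np.stack([np.zeros((2, 2)), np.ones((2, 2))])  # C = 2 toy feature map
out = channel_attention(x, np.eye(2), np.eye(2))
print(out.shape)  # (2, 2, 2)
```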
and splicing the extracted features of each branch channel, and sequentially performing convolution operation and deep convolution operation on the spliced feature data to output deep features. The method comprises the following specific steps: in the feature fusion module, the features extracted by the 6 feature extraction modules based on parameter reconstruction are spliced and carried out by adopting 1 multiplied by 1 convolution operation; finally, extracting deep layer characteristic X through 3 multiplied by 3 standard convolution operation D
Further, in this embodiment, in the image reconstruction of the super-resolution reconstruction network, the shallow features and deep features are first added, and an upsampling layer enlarges the summed feature data; convolution operations then further extract the enlarged feature data, and a pixel attention mechanism assigns a weight to each pixel of the enlarged feature data; finally, super-resolution reconstruction is performed by a global connection operation that fuses in the input low-resolution image.
In the image reconstruction part, the shallow feature X_0 and the deep feature X_D are first added and fed into a nearest-neighbour upsampling layer, which enlarges the image features by a fixed factor. Two 3×3 standard convolutions then further extract features, and a pixel attention mechanism reassigns a weight to each pixel of the enlarged features, learning more useful information and improving the quality of the reconstructed image. Finally, a global connection operation fuses in the low-resolution image features, further enhancing the quality of the reconstructed image and completing the image reconstruction process. Assuming an input feature X_0, the pixel attention mechanism is expressed as:

X_1 = Sigmoid(conv_1x1(X_0)) × X_0

where X_1 denotes the output feature, Sigmoid denotes the Sigmoid activation function, and conv_1x1 denotes a 1x1 convolution operation.
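The pixel-attention formula can be sketched directly. The channel-mixing matrix `w` stands in for the 1x1 convolution (its zero weights here are purely illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pixel_attention(x, w):
    # x: (C, H, W); w: (C, C) stand-in for conv_1x1
    attn = sigmoid(np.tensordot(w, x, axes=([1], [0])))  # per-pixel weights
    return attn * x                                      # element-wise rescale

x = np.ones((1, 2, 2))
out = pixel_attention(x, np.zeros((1, 1)))
print(out)  # every entry is 1.0 * sigmoid(0) = 0.5
```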
Further, based on the above method, an embodiment of the present invention further provides an image super-resolution reconstruction system based on parameter reconstruction, including: a model construction module and an image reconstruction module, wherein,
the model construction module is used for constructing a super-resolution reconstruction network that extracts features from the low-resolution image in the input image data and reconstructs the image from the extracted features, wherein the network first extracts shallow features from the input image data using standard convolution, then extracts deep features from the shallow features using depthwise separable convolution with parameter reconstruction, and performs image reconstruction using the deep features;
the image reconstruction module is used for performing network training with the collected sample data and, for the image data to be reconstructed, using the trained super-resolution reconstruction network to extract features and reconstruct the image.
To verify the validity of the scheme, the test data are further discussed below:
On five benchmark test sets (Set5, Set14, BSD100, Urban100 and Manga109), the network was comprehensively compared with lightweight image super-resolution networks such as VDSR, LapSRN, MemNet and IDN; the parameter-reconstruction-based network achieved the highest reconstruction indices while reducing parameters and computation severalfold. The baseline system was built with the open-source BasicSR super-resolution reconstruction library and the PyTorch deep learning library. Data set: DF2K was used as the training set, cropped into patches of size 128×128, 225×225 and 256×256 for the scaling factors ×2, ×3 and ×4, with corresponding low-resolution patches of size 64×64, 75×75 and 64×64. Data processing: random rotation and flipping were applied to the images for data augmentation.
Network training: the Adam optimizer is used, with parameters β1 = 0.9 and β2 = 0.999 and an initial learning rate of 5×10⁻⁴; the total number of channels is 48, and the number of channels in the reconstruction module is halved.
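For reference, a single parameter update of the Adam optimizer with the hyper-parameters reported above (β1 = 0.9, β2 = 0.999, learning rate 5×10⁻⁴) can be sketched as follows. This is a generic illustration of Adam's update rule, not the patent's training code:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=5e-4, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update for parameter theta given gradient grad at step t >= 1.
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for warm-up steps
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

In practice one would simply pass `betas=(0.9, 0.999)` and `lr=5e-4` to a framework optimizer rather than implement the rule by hand.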
The results are shown in tables 1 and 2.
Table 1 PSNR value comparison of different networks
(Table 1 is reproduced as an image in the original publication; its values are not recoverable from this text.)
Table 2 Comparison of the comprehensive performance of different networks at scale ×4
(Table 2 is reproduced as an image in the original publication; its values are not recoverable from this text.)
The data further show that the scheme achieves the highest reconstruction indices while keeping the smallest number of network parameters and multiply-add operations, realizes excellent network lightweighting, and is well suited for deployment on embedded devices with limited memory.
Unless specifically stated otherwise, the relative arrangement of the components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the present invention.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The elements of each example, and method steps, described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and the components and steps of each example have been described in a functional generic sense in the foregoing description for the purpose of illustrating the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Those skilled in the art will appreciate that all or part of the steps of the above methods may be implemented by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium, such as: read-only memory, magnetic or optical disk, and the like. Alternatively, all or part of the steps of the foregoing embodiments may also be implemented by using one or more integrated circuits, and accordingly, each module/unit in the foregoing embodiments may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered thereby. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image super-resolution reconstruction method based on parameter reconstruction is characterized by comprising the following contents:
constructing a super-resolution reconstruction network for extracting features of a low-resolution image in input image data and reconstructing the image according to the extracted features, wherein in the super-resolution reconstruction network, firstly, a shallow layer feature of the input image data is extracted by utilizing standard convolution, then, a deep layer feature in the shallow layer feature is extracted by utilizing depth separable convolution of parameter reconstruction, and the image reconstruction is carried out by utilizing the deep layer feature;
performing network training by using the collected sample data; and aiming at the image data to be reconstructed, the trained super-resolution reconstruction network is utilized to extract the characteristics and reconstruct the image.
2. The image super-resolution reconstruction method based on parameter reconstruction according to claim 1, wherein the feature extraction part in the super-resolution reconstruction network is composed of a shallow feature extraction unit, a deep feature extraction unit and a feature fusion unit, wherein the shallow feature extraction unit extracts shallow features of the input image data by using standard convolution; the deep feature extraction unit gradually extracts the deep features in the shallow features by using the depth separable convolution reconstructed by the plurality of parameters; the feature fusion unit splices the deep features of different levels according to the channel dimension and performs channel fusion through convolution operation to obtain the deep features.
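The splice-then-fuse step of claim 2 — concatenating deep features of different levels along the channel dimension and fusing the channels with a convolution operation — amounts to the following numpy sketch. The use of a 1×1 fusion convolution and the weight matrix `w` are illustrative assumptions:

```python
import numpy as np

def fuse_features(feature_list, w):
    # feature_list: deep features of different levels, each (C_i, H, W)
    # w: (O, sum(C_i)) weights of a 1x1 fusion convolution (assumed)
    stacked = np.concatenate(feature_list, axis=0)   # splice along channels
    return np.einsum('oc,chw->ohw', w, stacked)      # channel fusion
```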
3. The image super-resolution reconstruction method based on parameter reconstruction according to claim 1, wherein the sample data is a paired data set consisting of high-resolution images and the corresponding low-resolution images obtained by down-sampling the high-resolution images; the low-resolution images serve as the image data input to the network, and network training uses the loss between the super-resolution images output by the network and the corresponding high-resolution images in the data set.
4. The image super-resolution reconstruction method based on parameter reconstruction as claimed in claim 1 or 3, wherein the objective function of network training is represented as:
\hat{\theta} = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \left\| H_{SR}(LR_i; \theta) - HR_i \right\|_1
wherein H_SR denotes the super-resolution reconstruction network, θ denotes the network learning parameters, ‖·‖_1 denotes the L1 objective-function loss, LR_i and HR_i denote a low-resolution image in the sample data and the corresponding high-resolution image, respectively, and N denotes the number of samples.
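The objective of claim 4 is the mean, over the N training pairs, of the per-image L1 distance between the network output and the ground truth. A sketch, with the network outputs assumed precomputed into `sr_batch`:

```python
import numpy as np

def l1_objective(sr_batch, hr_batch):
    # Mean over N samples of the per-image L1 distance ||SR_i - HR_i||_1
    n = len(sr_batch)
    return sum(np.abs(sr - hr).sum() for sr, hr in zip(sr_batch, hr_batch)) / n
```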
5. The image super-resolution reconstruction method based on parameter reconstruction according to claim 1, wherein, in the network training process, feature extraction is performed step by step using the parameter-reconstructed depth separable convolution: first, features are extracted from the input data by a first branch structure, and the features extracted by its branches are summed; then, the summed features are input into the next branch structure for feature extraction again, and the extracted features are summed to obtain the final output; wherein the first branch structure comprises: a first BN branch that applies a convolution operation to the input data and then normalizes the result; a second BN branch that processes the input data in the same way as the first BN branch and then applies a depthwise convolution operation and normalization; a third BN branch that applies an average pooling operation to the input data and then normalizes twice; and a fourth BN branch that applies a depthwise convolution operation to the input data and then normalizes the result; the next branch structure comprises: a first branch that applies a convolution operation to the input features and then normalizes the result; a second branch that processes the input features in the same way as the first branch and then sequentially applies a convolution operation and normalization; a third branch that processes the input features in the same way as the first branch and then sequentially applies an average pooling operation and normalization; and a fourth branch that processes the input features in the same way as the first branch; the depthwise convolution operation performs convolution with a depthwise convolution kernel of a preset size, the size of the depthwise convolution kernel being larger than 1×1.
6. The image super-resolution reconstruction method based on parameter reconstruction according to claim 1 or 5, wherein, when extracting features in the super-resolution reconstruction network, a channel attention mechanism performs weight distribution over the extracted features: first, a global average pooling operation obtains the mean value of each branch channel; then the mean values are scaled and a Sigmoid activation function is applied to obtain the weight coefficients of the different branch channels; and the weight coefficient of each branch channel is multiplied by the extracted features to obtain the feature data used for fusion processing.
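The channel attention of claim 6 — global average pooling per channel, scaling the means, a Sigmoid to obtain per-channel coefficients, then reweighting the features — can be sketched as follows. Modeling the scaling step as a single learned matrix `w` is an assumption; real implementations typically use two 1×1 convolutions with a reduction ratio:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w):
    # x: (C, H, W) feature map; w: (C, C) hypothetical scaling weights
    means = x.mean(axis=(1, 2))          # global average pooling -> (C,)
    coeff = sigmoid(w @ means)           # per-channel weight coefficients
    return x * coeff[:, None, None]      # reweight each channel
```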
7. The image super-resolution reconstruction method based on parameter reconstruction of claim 6, wherein in the extraction feature fusion process, the extraction features of each branch channel are spliced, and the spliced feature data is sequentially subjected to convolution operation and deep convolution operation to output deep features.
8. The image super-resolution reconstruction method based on parameter reconstruction according to claim 1, wherein, in the image reconstruction of the super-resolution reconstruction network, first, the shallow features and the deep features are summed, and the summed feature data is upscaled by an up-sampling layer; secondly, the upscaled feature data is further processed by a convolution operation, and a pixel attention mechanism assigns a weight to each pixel in the upscaled feature data; and finally, super-resolution reconstruction is performed by a fully-connected operation fused with the input low-resolution image.
9. The image super-resolution reconstruction method based on parameter reconstruction according to claim 8, wherein the process of assigning a weight to each pixel by the pixel attention mechanism is represented as: X1 = Sigmoid(conv1×1(X0)) × X0, wherein X1 denotes the output features, Sigmoid denotes the Sigmoid activation function, conv1×1 denotes a 1×1 convolution operation, and X0 denotes the input features.
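Claim 9's pixel attention multiplies each pixel by a Sigmoid-gated value. Since a 1×1 convolution is just a per-pixel linear mix of channels, the formula X1 = Sigmoid(conv1×1(X0)) × X0 can be sketched in numpy (the weight matrix `w` is an illustrative assumption):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(x0, w):
    # x0: (C, H, W) input features; w: (O, C) weights of a 1x1 convolution,
    # which acts as a per-pixel linear mix of channels.
    conv = np.einsum('oc,chw->ohw', w, x0)   # conv1x1(X0)
    return sigmoid(conv) * x0                # Sigmoid(conv1x1(X0)) * X0
```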
10. An image super-resolution reconstruction system based on parameter reconstruction is characterized by comprising: a model construction module and an image reconstruction module, wherein,
the model construction module is used for constructing a super-resolution reconstruction network for extracting the features of the low-resolution images in the input image data and reconstructing the images according to the extracted features, wherein in the super-resolution reconstruction network, the shallow features of the input image data are extracted by utilizing standard convolution firstly, then the deep features in the shallow features are extracted by utilizing depth separable convolution of parameter reconstruction, and the images are reconstructed by utilizing the deep features;
the image reconstruction module is used for carrying out network training by utilizing the collected sample data; and aiming at the image data to be reconstructed, the trained super-resolution reconstruction network is utilized to extract the characteristics and reconstruct the image.
CN202210760864.8A 2022-06-30 2022-06-30 Image super-resolution reconstruction method and system based on parameter reconstruction Pending CN115063297A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210760864.8A CN115063297A (en) 2022-06-30 2022-06-30 Image super-resolution reconstruction method and system based on parameter reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210760864.8A CN115063297A (en) 2022-06-30 2022-06-30 Image super-resolution reconstruction method and system based on parameter reconstruction

Publications (1)

Publication Number Publication Date
CN115063297A true CN115063297A (en) 2022-09-16

Family

ID=83204392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210760864.8A Pending CN115063297A (en) 2022-06-30 2022-06-30 Image super-resolution reconstruction method and system based on parameter reconstruction

Country Status (1)

Country Link
CN (1) CN115063297A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861081A (en) * 2023-02-27 2023-03-28 耕宇牧星(北京)空间科技有限公司 Image super-resolution reconstruction method based on stepped multi-level wavelet network
CN116205284A (en) * 2023-05-05 2023-06-02 北京蔚领时代科技有限公司 Super-division network, method, device and equipment based on novel re-parameterized structure
CN116385272A (en) * 2023-05-08 2023-07-04 南京信息工程大学 Image super-resolution reconstruction method, system and equipment
CN116385272B (en) * 2023-05-08 2023-12-19 南京信息工程大学 Image super-resolution reconstruction method, system and equipment
CN116664409A (en) * 2023-08-01 2023-08-29 北京智芯微电子科技有限公司 Image super-resolution reconstruction method, device, computer equipment and storage medium
CN116664409B (en) * 2023-08-01 2023-10-31 北京智芯微电子科技有限公司 Image super-resolution reconstruction method, device, computer equipment and storage medium
CN117132472A (en) * 2023-10-08 2023-11-28 兰州理工大学 Forward-backward separable self-attention-based image super-resolution reconstruction method

Similar Documents

Publication Publication Date Title
CN115063297A (en) Image super-resolution reconstruction method and system based on parameter reconstruction
Zhao et al. Efficient image super-resolution using pixel attention
JP7417747B2 (en) Super resolution reconstruction method and related equipment
CN109118432B (en) Image super-resolution reconstruction method based on rapid cyclic convolution network
CN110415170A (en) A kind of image super-resolution method based on multiple dimensioned attention convolutional neural networks
CN111105352A (en) Super-resolution image reconstruction method, system, computer device and storage medium
CN111598778B (en) Super-resolution reconstruction method for insulator image
CN108921910B (en) JPEG coding compressed image restoration method based on scalable convolutional neural network
CN110675321A (en) Super-resolution image reconstruction method based on progressive depth residual error network
CN111028150A (en) Rapid space-time residual attention video super-resolution reconstruction method
CN111640060A (en) Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module
CN111696033A (en) Real image super-resolution model and method for learning cascaded hourglass network structure based on angular point guide
CN111932461A (en) Convolutional neural network-based self-learning image super-resolution reconstruction method and system
CN111242173A (en) RGBD salient object detection method based on twin network
CN113066034A (en) Face image restoration method and device, restoration model, medium and equipment
CN116091313A (en) Image super-resolution network model and reconstruction method
CN112907448A (en) Method, system, equipment and storage medium for super-resolution of any-ratio image
CN115393191A (en) Method, device and equipment for reconstructing super-resolution of lightweight remote sensing image
CN115984747A (en) Video saliency target detection method based on dynamic filter
CN115713462A (en) Super-resolution model training method, image recognition method, device and equipment
CN113096015B (en) Image super-resolution reconstruction method based on progressive perception and ultra-lightweight network
Li et al. High-resolution network for photorealistic style transfer
Liu et al. A deep recursive multi-scale feature fusion network for image super-resolution
CN114926336A (en) Video super-resolution reconstruction method and device, computer equipment and storage medium
CN113850721A (en) Single image super-resolution reconstruction method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination