CN110838087A - Image super-resolution reconstruction method and system

Info

Publication number
CN110838087A
Authority
CN
China
Prior art keywords
image
neural network
network model
elman neural
image block
Prior art date
Legal status
Pending
Application number
CN201911105707.8A
Other languages
Chinese (zh)
Inventor
唐英干
孙树腾
Current Assignee
Yanshan University
Original Assignee
Yanshan University
Priority date
Filing date
Publication date
Application filed by Yanshan University
Priority to CN201911105707.8A
Publication of CN110838087A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Abstract

The invention discloses an image super-resolution reconstruction method and system, relating to the technical field of image super-resolution. The method comprises the following steps: constructing an Elman neural network model, the Elman neural network model comprising an input layer, a feature extraction layer, a cascade operation layer and an output layer; acquiring an image to be reconstructed and importing it into the constructed Elman neural network model to obtain a reconstructed image. Constructing the Elman neural network model comprises: acquiring a training set and a test set; determining parameters of the Elman neural network model according to the training set; training the Elman neural network model with the training set to obtain a preliminary Elman neural network model; and adjusting the preliminary Elman neural network model with the test set to obtain the constructed Elman neural network model. By applying the Elman neural network to the field of image super-resolution, the method improves the accuracy of the reconstructed image; the constructed Elman neural network model comprises only four layers, so the structure is simple and the training speed is improved.

Description

Image super-resolution reconstruction method and system
Technical Field
The invention relates to the technical field of image super-resolution, in particular to an image super-resolution reconstruction method and system.
Background
Digital images are widely used in fields such as surveillance, traffic and medical treatment. However, owing to limitations of the imaging apparatus, the imaging environment and so on, the resolution of the acquired image is often too low for practical use. To improve image resolution and meet the requirements of practical applications, super-resolution reconstruction has gradually become a classic problem in the field of machine vision. At present, most image super-resolution methods with good reconstruction results adopt artificial neural network algorithms. As a simulation of the biological neural network, the artificial neural network is an important research field of machine learning and artificial intelligence, and is also the most commonly used class of algorithms in image processing.
Existing image super-resolution methods are numerous and can be roughly divided into three categories: interpolation-based methods, reconstruction-based methods and learning-based methods. Reconstruction-based methods rely on a degradation model and various image priors, the use of which imparts the corresponding natural image statistics to the reconstructed image. Most existing methods of this kind utilize convolutional neural networks. A shallow convolutional neural network is simple to train but its reconstruction accuracy is not high, whereas a deep convolutional neural network achieves higher reconstruction accuracy at the cost of a long training time. The existing image super-resolution methods therefore cannot reconcile reconstruction accuracy with training efficiency.
Disclosure of Invention
The invention aims to provide an image super-resolution reconstruction method and system, which apply an Elman network to the field of image super-resolution, improve the accuracy of the reconstructed image and reduce the training time.
In order to achieve the purpose, the invention provides the following scheme:
an image super-resolution reconstruction method, comprising:
constructing an Elman neural network model to obtain the constructed Elman neural network model; the Elman neural network model comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer;
acquiring an image to be reconstructed;
importing the image to be reconstructed into the constructed Elman neural network model to obtain a reconstructed image;
the specific process for constructing the Elman neural network model comprises the following steps:
acquiring a training set and a test set; the training set comprises first image blocks and second image blocks, wherein the number of first image blocks is equal to the number of second image blocks, the resolution of the first image blocks is higher than that of the second image blocks, and the content of the first image blocks is consistent with that of the second image blocks; the test set comprises third image blocks and fourth image blocks, wherein the number of third image blocks is equal to the number of fourth image blocks, the resolution of the third image blocks is higher than that of the fourth image blocks, and the content of the third image blocks is consistent with that of the fourth image blocks;
determining parameters of an Elman neural network model according to the training set;
training the Elman neural network model by using the training set to obtain a preliminary Elman neural network model;
and adjusting the preliminary Elman neural network model by using the test set to obtain the constructed Elman neural network model.
Optionally, the acquiring a training set and a test set specifically includes:
acquiring a first image;
converting the first image into a second image; the resolution of the first image is higher than the resolution of the second image;
dividing the first image into a plurality of first image blocks, and dividing the second image into a plurality of second image blocks;
acquiring a third image;
converting the third image into a fourth image; the resolution of the third image is higher than the resolution of the fourth image;
and dividing the third image into a plurality of third image blocks, and dividing the fourth image into a plurality of fourth image blocks.
Optionally, the determining parameters of the Elman neural network model according to the training set specifically includes:
determining the number of neurons of the input layer according to the dimension of the second image block;
determining the number of neurons of the output layer according to the dimension of the first image block;
initializing the number of feature extraction blocks in the feature extraction layer and the number of hidden layers in the feature extraction blocks; the feature extraction layer includes a plurality of the feature extraction blocks, and the feature extraction blocks include a feedback layer and the hidden layer.
Optionally, the training of the Elman neural network model by using the training set to obtain a preliminary Elman neural network model specifically includes:
initializing a loss function, the loss function comprising a mean square error loss function and an L2 loss function;
initializing a weight value of the feedback layer and a weight value of the hidden layer;
calculating the weight value of the output layer according to the initialized loss function, the weight value of the feedback layer and the weight value of the hidden layer to obtain an Elman neural network model with initialized weight;
and inputting the training set into the Elman neural network model initialized by the weight, and further adjusting the weight value of each layer to obtain a trained preliminary Elman neural network model.
Optionally, the adjusting the preliminary Elman neural network model by using the test set to obtain a constructed Elman neural network model specifically includes:
importing the test set into the preliminary Elman neural network model to obtain a reconstructed fourth image block;
evaluating the first reconstruction precision of the reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structure similarity and the weighted peak signal-to-noise ratio;
adjusting the number of the feature extraction blocks, the number of the hidden layers and the loss function according to the first reconstruction precision to obtain the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers and the adjusted loss function;
calculating to obtain an updated weight value of the output layer according to the adjusted loss function, the weight value of the feedback layer and the weight value of the hidden layer;
obtaining an updated preliminary Elman neural network model according to the adjusted number of the feature extraction blocks, the number of the hidden layers, the weight value of the feedback layer, the weight value of the hidden layer and the updated weight value of the output layer;
importing the test set into an updated preliminary Elman neural network model to obtain an updated reconstructed fourth image block, and evaluating second reconstruction accuracy of the updated reconstructed fourth image block by utilizing a peak signal-to-noise ratio, a multi-scale structure similarity and a weighted peak signal-to-noise ratio;
judging whether the second reconstruction precision is greater than the first reconstruction precision or not to obtain a first judgment result;
if so, assigning the value of the second reconstruction accuracy to the first reconstruction accuracy, and returning to the step of adjusting the number of the feature extraction blocks, the number of the hidden layers, and the loss function according to the first reconstruction accuracy to obtain the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers, and the adjusted loss function;
if the first judgment result is negative, judging whether the second reconstruction precision is equal to the first reconstruction precision or not to obtain a second judgment result;
if the second judgment result is positive, determining the preliminary Elman neural network model corresponding to the second reconstruction accuracy as the constructed Elman neural network model;
and if the second judgment result is negative, determining the preliminary Elman neural network model corresponding to the first reconstruction precision as the constructed Elman neural network model.
An image super-resolution reconstruction system, comprising:
the building module is used for building the Elman neural network model to obtain the built Elman neural network model; the Elman neural network model comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer;
the module for acquiring the image to be reconstructed is used for acquiring the image to be reconstructed;
the image reconstruction module is used for leading the image to be reconstructed into the constructed Elman neural network model to obtain a reconstructed image;
the building module comprises:
the acquisition unit is used for acquiring a training set and a test set; the training set comprises first image blocks and second image blocks, wherein the number of first image blocks is equal to the number of second image blocks, the resolution of the first image blocks is higher than that of the second image blocks, and the content of the first image blocks is consistent with that of the second image blocks; the test set comprises third image blocks and fourth image blocks, wherein the number of third image blocks is equal to the number of fourth image blocks, the resolution of the third image blocks is higher than that of the fourth image blocks, and the content of the third image blocks is consistent with that of the fourth image blocks;
the parameter unit is used for determining parameters of the Elman neural network model according to the training set;
the primary Elman neural network model training unit is used for training the Elman neural network model by utilizing the training set to obtain a primary Elman neural network model;
and the neural network model unit is used for adjusting the preliminary Elman neural network model by using the test set to obtain the constructed Elman neural network model.
Optionally, the obtaining unit includes:
a first image subunit for acquiring a first image;
the first conversion subunit is used for converting the first image into a second image; the resolution of the first image is higher than the resolution of the second image;
a first dividing subunit, configured to divide the first image into a plurality of first image blocks, and divide the second image into a plurality of second image blocks;
a third image subunit for acquiring a third image;
a second transformation unit for transforming the third image into a fourth image; the resolution of the third image is higher than the resolution of the fourth image;
and the second dividing subunit is configured to divide the third image into a plurality of the third image blocks and divide the fourth image into a plurality of the fourth image blocks.
Optionally, the parameter unit includes:
the input layer subunit is used for determining the number of the neurons of the input layer according to the dimension of the second image block;
the output layer subunit is used for determining the number of the neurons of the output layer according to the dimension of the first image block;
an initialization subunit, configured to initialize the number of feature extraction blocks in the feature extraction layer and the number of hidden layers in the feature extraction block; the feature extraction layer includes a plurality of the feature extraction blocks, and the feature extraction blocks include a feedback layer and the hidden layer.
Optionally, the preliminary Elman neural network model training unit includes:
a loss function subunit for initializing a loss function, the loss function comprising a mean square error loss function and an L2 loss function;
the initialization weight value subunit is used for initializing the weight value of the feedback layer and the weight value of the hidden layer;
the weight value subunit is used for calculating the weight value of the output layer according to the initialized loss function, the weight value of the feedback layer and the weight value of the hidden layer to obtain a weight initialized Elman neural network model;
and the primary subunit is used for inputting the training set into the Elman neural network model initialized by the weight, so that the weight values of all layers are further adjusted to obtain the trained primary Elman neural network model.
Optionally, the neural network model unit includes:
a fourth image block subunit, configured to import the test set into the preliminary Elman neural network model to obtain a reconstructed fourth image block;
the reconstruction precision subunit is used for evaluating the first reconstruction precision of the reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structure similarity and the weighted peak signal-to-noise ratio;
an adjusting subunit, configured to adjust the number of the feature extraction blocks, the number of the hidden layers, and the loss function according to the first reconstruction accuracy, so as to obtain the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers, and the adjusted loss function;
the weight updating subunit is configured to calculate a weight value of the updated output layer according to the adjusted loss function, the weight value of the feedback layer, and the weight value of the hidden layer;
an updating initial sub-unit, configured to obtain an updated initial Elman neural network model according to the adjusted number of the feature extraction blocks, the number of the hidden layers, the weight value of the feedback layer, the weight value of the hidden layer, and the updated weight value of the output layer;
the second reconstruction precision subunit is used for importing the test set into the updated preliminary Elman neural network model to obtain an updated reconstructed fourth image block, and evaluating the second reconstruction precision of the updated reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structure similarity and the weighted peak signal-to-noise ratio;
the judging subunit is used for judging whether the second reconstruction precision is greater than the first reconstruction precision or not to obtain a first judgment result; if the first judgment result is yes, executing a yes subunit; if the first judgment result is negative, executing a negative subunit;
the sub-unit is used for assigning the value of the second reconstruction precision to the first reconstruction precision and returning to the adjusting sub-unit;
a non-subunit, configured to determine whether the second reconstruction accuracy is equal to the first reconstruction accuracy, so as to obtain a second determination result; if the second judgment result is yes, the execution is to determine the subunit; if the second judgment result is negative, executing a negative determining subunit;
the determining subunit is used for determining the preliminary Elman neural network model corresponding to the second reconstruction accuracy as the constructed Elman neural network model;
and the non-determination subunit is used for determining the preliminary Elman neural network model corresponding to the first reconstruction accuracy as the constructed Elman neural network model.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a super-resolution reconstruction method and a super-resolution reconstruction system for an image. The method comprises the following steps: constructing an Elman neural network model to obtain the constructed Elman neural network model; the Elman neural network model includes: the device comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer; acquiring an image to be reconstructed; importing an image to be reconstructed into the constructed Elman neural network model to obtain a reconstructed image; the specific process for constructing the Elman neural network model comprises the following steps: acquiring a training set and a test set; determining parameters of the Elman neural network model according to the training set; training the Elman neural network model by using a training set to obtain a primary Elman neural network model; and adjusting the preliminary Elman neural network model by using the test set to obtain the constructed Elman neural network model. According to the method, the Elman neural network is applied to the field of image super-resolution, so that the accuracy of the reconstructed image is improved; the Elman neural network model constructed by the method only comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer, has a simple structure, does not need back propagation, and improves the model training speed and the training efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
Fig. 1 is a flowchart of an image super-resolution reconstruction method according to an embodiment of the present invention;
fig. 2 is a system diagram of an image super-resolution reconstruction system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The invention provides an image super-resolution reconstruction method, and fig. 1 is a flowchart of the image super-resolution reconstruction method provided by the embodiment of the invention. Referring to fig. 1, the image super-resolution reconstruction method includes:
step 101, constructing an Elman neural network model to obtain the constructed Elman neural network model; the Elman neural network model includes: the device comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer; the feature extraction layer includes a plurality of feature extraction blocks, each of which includes a feedback layer and a hidden layer.
Step 101 specifically includes: the specific process for constructing the Elman neural network model comprises the following steps:
step 1011, acquiring a training set and a test set; the training set comprises: the image processing device comprises a first image block and a second image block, wherein the number of the first image block is equal to that of the second image block, the resolution of the first image block is higher than that of the second image block, and the content of the first image block is consistent with that of the second image block; the test set includes: the image comprises a third image block and a fourth image block, wherein the number of the third image block is equal to that of the fourth image block, the resolution of the third image block is higher than that of the fourth image block, and the content of the third image block is consistent with that of the fourth image block.
Step 1011 specifically includes:
a first image is acquired.
Converting the first image into a second image; the resolution of the first image is higher than the resolution of the second image.
The first image is divided into a plurality of first image blocks, and the second image is divided into a plurality of second image blocks. In practical application, after the first image is obtained, it is preprocessed with bicubic interpolation; the preprocessed first image is then converted into the second image, and the preprocessed first image is divided into a plurality of first image blocks.
A third image is acquired.
Converting the third image into a fourth image; the resolution of the third image is higher than the resolution of the fourth image.
The third image is divided into a plurality of third image blocks, and the fourth image is divided into a plurality of fourth image blocks. In practical application, after the third image is obtained, it is preprocessed with bicubic interpolation; the preprocessed third image is then converted into the fourth image, and the preprocessed third image is divided into a plurality of third image blocks.
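The patch-pair preparation just described can be sketched as follows. This is a minimal illustration under assumed choices (Pillow/NumPy, grayscale conversion, a ×2 scale factor, 16×16 non-overlapping blocks); the patent itself does not fix these values.

```python
import numpy as np
from PIL import Image

def make_patch_pairs(path, scale=2, patch=16):
    """Build (second, first) image block pairs from one image via bicubic degradation."""
    hr = Image.open(path).convert("L")                       # high-resolution "first image"
    w, h = hr.size
    w, h = w - w % scale, h - h % scale
    hr = hr.crop((0, 0, w, h))                               # crop so the scale factor divides evenly
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)  # degrade to low resolution
    lr = lr.resize((w, h), Image.BICUBIC)                    # bicubic preprocessing back to HR size: "second image"
    hr = np.asarray(hr, dtype=float) / 255.0
    lr = np.asarray(lr, dtype=float) / 255.0
    first_blocks, second_blocks = [], []
    for y in range(0, h - patch + 1, patch):                 # non-overlapping blocks; a smaller stride also works
        for x in range(0, w - patch + 1, patch):
            first_blocks.append(hr[y:y + patch, x:x + patch].ravel())
            second_blocks.append(lr[y:y + patch, x:x + patch].ravel())
    return np.array(second_blocks), np.array(first_blocks)   # shapes: (n, patch*patch) each
```

The second (low-resolution) blocks feed the input layer and the first (high-resolution) blocks serve as the targets of the output layer; the test set is built from a separate image in the same way.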
And step 1012, determining parameters of the Elman neural network model according to the training set.
Step 1012 specifically includes:
and determining the number of the neurons of the input layer according to the dimension of the second image block. The method specifically comprises the following steps: determining the number of neurons of an input layer according to the dimension of the second image block, namely the size of the second image block, wherein the dimension of the second image block corresponds to the number of neurons of the input layer; the size of the second image block is h × w, and the number of input layer neurons is h × w × m, where h denotes the height of the second image block, w denotes the width of the second image block, and m denotes the number of feature extraction blocks.
And determining the number of neurons of an output layer according to the dimension of the first image block. The method specifically comprises the following steps: the number of neurons of the output layer is determined from the dimension of the first image block, i.e. the size of the first image block, the dimension of the first image block corresponding to the number of neurons of the output layer, the size of the first image block being H × W1, the number of neurons of the output layer being H × W1, where H denotes the height of the first image block and W1 denotes the width of the first image block.
Initializing the number of feature extraction blocks in the feature extraction layer and the number of hidden layers in the feature extraction block.
And 1013, training the Elman neural network model by using the training set to obtain a preliminary Elman neural network model.
Step 1013 specifically includes:
Initializing a loss function, the loss function comprising a mean square error loss function and an L2 loss function. In practical application, one of the mean square error loss function and the L2 loss function is first selected at random as the loss function, and the final loss function is then determined according to the first judgment result in step 1014; that is, the final loss function is the one corresponding to the higher reconstruction accuracy.
Initializing the weight value of the feedback layer and the weight value of the hidden layer. Specifically, the network weights, i.e., the weight value of the feedback layer and the weight value of the hidden layer, are initialized with random numbers between -1 and 1, so that the network weights are random as a whole, which is more favorable for extracting the features of the image blocks and for training the Elman neural network model.
Calculating the weight value of the output layer according to the initialized loss function, the weight value of the feedback layer and the weight value of the hidden layer to obtain the weight-initialized Elman neural network model. Specifically, the weight value of the output layer is obtained from the formula of the initialized loss function. For the mean square error loss function the formula is W = (Z^T Z)^(-1) Z^T Y, where W denotes the weight value of the output layer, Z denotes the image feature matrix composed of the features extracted by all the feature extraction blocks, Z^T denotes the transpose of the image feature matrix, and Y denotes the first image blocks. For the L2 loss function the formula is W = (Z^T Z + λI)^(-1) Z^T Y, where λ denotes the regularization parameter and I denotes the identity matrix.
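Because the feedback-layer and hidden-layer weights stay fixed after their random initialization, the output-layer weights follow directly from the formulas above as an ordinary least-squares or ridge solution, with no back-propagation. A minimal NumPy sketch, assuming Z stacks one feature row per training block and Y stacks the corresponding first (high-resolution) image blocks as rows:

```python
import numpy as np

def solve_output_weights(Z, Y, lam=0.0):
    """W = (Z^T Z)^-1 Z^T Y (MSE loss) or W = (Z^T Z + lam*I)^-1 Z^T Y (L2 loss)."""
    d = Z.shape[1]
    A = Z.T @ Z + lam * np.eye(d)        # lam = 0 gives the plain least-squares solution
    return np.linalg.solve(A, Z.T @ Y)   # solving the system is numerically safer than an explicit inverse

# Feedback- and hidden-layer weights are simply drawn from [-1, 1] and left fixed, e.g.:
rng = np.random.default_rng(0)
v_example = rng.uniform(-1.0, 1.0, size=(64, 64))  # 64 hidden units per block is an assumed value
```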
And inputting the training set into the Elman neural network model with initialized weights, and further adjusting the weight values of all layers to obtain the trained preliminary Elman neural network model.
And 1014, adjusting the preliminary Elman neural network model by using the test set to obtain the constructed Elman neural network model.
Step 1014 specifically comprises:
and importing the test set into the preliminary Elman neural network model to obtain a reconstructed fourth image block.
And evaluating the first reconstruction accuracy of the reconstructed fourth image block using the peak signal-to-noise ratio (PSNR), the multi-scale structural similarity (MS-SSIM) and the weighted peak signal-to-noise ratio (WPSNR).
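Of the three metrics, the peak signal-to-noise ratio is the simplest to reproduce; a sketch is given below for images normalized to [0, 1] (for 8-bit images the peak would be 255). MS-SSIM and WPSNR follow their own published definitions and are not reproduced here.

```python
import numpy as np

def psnr(reference, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```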
And adjusting the number of the feature extraction blocks, the number of the hidden layers and the loss function according to the first reconstruction precision to obtain the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers and the adjusted loss function. The method specifically comprises the following steps: and increasing the number of the feature extraction blocks, the number of the hidden layers and the size of the regularization parameter according to the first reconstruction precision.
And calculating to obtain the updated weight value of the output layer according to the adjusted loss function, the weight value of the feedback layer and the weight value of the hidden layer.
And obtaining an updated preliminary Elman neural network model according to the number of the adjusted feature extraction blocks, the number of the hidden layers, the weight value of the feedback layer, the weight value of the hidden layer and the updated weight value of the output layer.
And importing the test set into the updated preliminary Elman neural network model to obtain an updated reconstructed fourth image block, and evaluating the second reconstruction precision of the updated reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structure similarity and the weighted peak signal-to-noise ratio.
And judging whether the second reconstruction precision is greater than the first reconstruction precision or not to obtain a first judgment result.
If the first judgment result is positive, the value of the second reconstruction accuracy is assigned to the first reconstruction accuracy, and the process returns to the step of adjusting the number of feature extraction blocks, the number of hidden layers and the loss function according to the first reconstruction accuracy to obtain the adjusted number of feature extraction blocks, the adjusted number of hidden layers and the adjusted loss function. In practical application, the preliminary Elman neural network model corresponding to the second reconstruction accuracy and the preliminary Elman neural network model corresponding to the first reconstruction accuracy are saved.
And if the first judgment result is negative, judging whether the second reconstruction precision is equal to the first reconstruction precision or not to obtain a second judgment result.
If the second judgment result is positive, the preliminary Elman neural network model corresponding to the second reconstruction accuracy is determined as the constructed Elman neural network model.
And if the second judgment result is negative, determining the preliminary Elman neural network model corresponding to the first reconstruction precision as the constructed Elman neural network model. And determining the constructed Elman neural network model through the network hyper-parameters of the preliminary Elman neural network model corresponding to the first reconstruction precision. The network hyper-parameters comprise: the number of the feature extraction blocks, the number of the hidden layers, the weight value of the feedback layer, the weight value of the hidden layer, and the weight value of the output layer. In practical applications, if the fluctuation of the reconstruction accuracy is large, it indicates that the Elman neural network is over-fitted or under-fitted, and the adjustment can be performed by reducing or increasing the regularization parameter.
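The adjustment procedure of step 1014 amounts to a simple hill-climbing loop: enlarge the model, re-solve the output weights, and keep the change only while the test-set accuracy improves. The sketch below mirrors the two judgments described above; train_model, evaluate and grow are hypothetical helpers standing in for the routines of steps 1012 to 1014, not names taken from the patent.

```python
def tune(config, train_set, test_set, train_model, evaluate, grow):
    """Keep enlarging the model while the test-set reconstruction accuracy improves."""
    model = train_model(config, train_set)
    best_acc, best_model = evaluate(model, test_set), model     # "first reconstruction accuracy"
    while True:
        config = grow(config)            # e.g. add a feature extraction block or hidden layer
        model = train_model(config, train_set)
        acc = evaluate(model, test_set)  # "second reconstruction accuracy"
        if acc > best_acc:               # first judgment positive: accept and keep adjusting
            best_acc, best_model = acc, model
        elif acc == best_acc:            # second judgment positive: keep the newer model and stop
            return model
        else:                            # otherwise keep the earlier, better model
            return best_model
```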
The constructed Elman neural network model comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer.
The input layer is used for acquiring an image block to be reconstructed.
The feature extraction layer is used for extracting the image features of the image blocks to be reconstructed through each feature extraction block in the feature extraction layer, and the formula of the feature extraction layer is as follows:
fi(k) = Φ(vi fi(k-1) + wi x(k) + βi), k = 1, ..., n, i = 1, ..., m;
Fi = [fi(1); fi(2); ...; fi(n)], i = 1, ..., m;
where fi(k) denotes the image feature of the k-th image block to be reconstructed extracted by the i-th feature extraction block; Φ(·) denotes the activation function of the Elman neural network; vi denotes the weight value of the feedback layer in the i-th feature extraction block; fi(k-1) denotes the image feature of the (k-1)-th image block to be reconstructed extracted by the i-th feature extraction block; wi denotes the weight value of the hidden layer in the i-th feature extraction block; x(k) denotes the k-th image block to be reconstructed; βi denotes an offset value, which is introduced to fit the data better; k denotes the index of the image block to be reconstructed, n denotes the number of image blocks to be reconstructed, i denotes the index of the feature extraction block, and m denotes the number of feature extraction blocks; Fi denotes all the image features extracted by the i-th feature extraction block, i.e., the collection of fi(k).
The cascade operation layer is used for carrying out cascade operation on the image features extracted by the feature extraction blocks, namely, the image features obtained by all the feature extraction blocks are collected to form an image feature matrix, and the formula of the cascade operation layer is as follows:
Z=[F1,F2,...,Fm];
where Z denotes the image feature matrix and Fm denotes all the image features extracted by the m-th feature extraction block.
The output layer is used for obtaining expected output, namely a reconstructed image;
Y1=ZW;
where W represents the weight value of the output layer and Y1 represents the desired output.
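Putting the three formulas together, one forward pass of the four-layer model can be sketched in NumPy as follows. The tanh activation, the hidden dimension, the matrix-valued weights vi and wi, and the zero initial feedback state fi(0) are illustrative assumptions; the patent only specifies their random initialization in [-1, 1] and the closed-form solution of W.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_block(in_dim, hid_dim):
    """One feature extraction block: feedback weights v, hidden weights w, offset beta in [-1, 1]."""
    return (rng.uniform(-1, 1, (hid_dim, hid_dim)),   # vi, feedback layer
            rng.uniform(-1, 1, (hid_dim, in_dim)),    # wi, hidden layer
            rng.uniform(-1, 1, hid_dim))              # beta_i, offset

def extract_features(X, blocks, phi=np.tanh):
    """X: (n, in_dim) image blocks to be reconstructed. Returns Z = [F1, ..., Fm]."""
    n = X.shape[0]
    feats = []
    for v, w, beta in blocks:
        f = np.zeros(v.shape[0])                      # fi(0), the initial feedback state
        F = np.empty((n, v.shape[0]))
        for k in range(n):                            # fi(k) = phi(vi fi(k-1) + wi x(k) + beta_i)
            f = phi(v @ f + w @ X[k] + beta)
            F[k] = f
        feats.append(F)
    return np.concatenate(feats, axis=1)              # cascade operation layer: Z = [F1, ..., Fm]

# m = 4 feature extraction blocks on 16x16 input blocks (assumed sizes); the output is Y1 = Z W,
# with W taken from the closed-form solution of step 1013.
blocks = [init_block(in_dim=16 * 16, hid_dim=64) for _ in range(4)]
```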
And 102, acquiring an image to be reconstructed.
And 103, importing the image to be reconstructed into the constructed Elman neural network model to obtain a reconstructed image. Step 103 includes preprocessing the acquired image to be reconstructed by adopting bicubic interpolation, dividing the preprocessed image to be reconstructed into a plurality of image blocks to be reconstructed, and importing the plurality of image blocks to be reconstructed into the constructed Elman neural network model to obtain a reconstructed image.
The invention applies the Elman neural network to the field of image super-resolution and further improves it on the basis of the classical Elman neural network. The network comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer; the feature extraction layer comprises a plurality of feature extraction blocks, each of which comprises a feedback layer and a hidden layer, so that the feature extraction layer can obtain richer data features and the performance of the Elman neural network is further improved. In the training process, the weight value of the feedback layer and the weight value of the hidden layer are initialized randomly and the weight value of the output layer is solved from the loss function, which greatly shortens the training time of the Elman neural network. The invention thus realizes super-resolution reconstruction of low-resolution images with the constructed Elman neural network model, has a short training time, is simple and convenient to analyze, and has broad prospects for industrial application.
The invention provides an image super-resolution reconstruction system, and fig. 2 is a system diagram of the image super-resolution reconstruction system provided by the embodiment of the invention. Referring to fig. 2, the image super-resolution reconstruction system includes:
the building module 201 is used for building an Elman neural network model to obtain a built Elman neural network model; the Elman neural network model includes: the device comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer.
The building block 201 includes:
the acquisition unit is used for acquiring a training set and a test set; the training set comprises: the image processing device comprises a first image block and a second image block, wherein the number of the first image block is equal to that of the second image block, the resolution of the first image block is higher than that of the second image block, and the content of the first image block is consistent with that of the second image block; the test set includes: the image comprises a third image block and a fourth image block, wherein the number of the third image block is equal to that of the fourth image block, the resolution of the third image block is higher than that of the fourth image block, and the content of the third image block is consistent with that of the fourth image block.
The acquisition unit includes:
a first image subunit for acquiring a first image.
The first conversion subunit is used for converting the first image into a second image; the resolution of the first image is higher than the resolution of the second image.
The first dividing subunit is used for dividing the first image into a plurality of first image blocks and dividing the second image into a plurality of second image blocks. In practical application, after the first image is obtained, it is preprocessed with bicubic interpolation; the preprocessed first image is then converted into the second image, and the preprocessed first image is divided into a plurality of first image blocks.
And the third image subunit is used for acquiring a third image.
A second transformation unit for transforming the third image into a fourth image; the resolution of the third image is higher than the resolution of the fourth image.
The second dividing subunit is used for dividing the third image into a plurality of third image blocks and dividing the fourth image into a plurality of fourth image blocks. In practical application, after the third image is obtained, it is preprocessed with bicubic interpolation; the preprocessed third image is then converted into the fourth image, and the preprocessed third image is divided into a plurality of third image blocks.
And the parameter unit is used for determining the parameters of the Elman neural network model according to the training set.
The parameter unit includes:
and the input layer subunit is used for determining the number of the neurons of the input layer according to the dimension of the second image block. The input layer subunit specifically includes: determining the number of neurons of an input layer according to the dimension of the second image block, namely the size of the second image block, wherein the dimension of the second image block corresponds to the number of neurons of the input layer; the size of the second image block is h × w, and the number of input layer neurons is h × w × m, where h denotes the height of the second image block, w denotes the width of the second image block, and m denotes the number of feature extraction blocks.
And the output layer subunit is used for determining the number of the neurons of the output layer according to the dimension of the first image block. The output layer subunit specifically includes: the number of neurons of the output layer is determined from the dimension of the first image block, i.e. the size of the first image block, the dimension of the first image block corresponding to the number of neurons of the output layer, the size of the first image block being H × W1, the number of neurons of the output layer being H × W1, where H denotes the height of the first image block and W1 denotes the width of the first image block.
And the initialization subunit is used for initializing the number of the feature extraction blocks in the feature extraction layer and the number of the hidden layers in the feature extraction blocks.
And the primary Elman neural network model training unit is used for training the Elman neural network model by utilizing a training set to obtain the primary Elman neural network model.
The preliminary Elman neural network model training unit comprises:
A loss function subunit for initializing a loss function, the loss function comprising a mean square error loss function and an L2 loss function.
And the initialization weight value subunit is used for initializing the weight value of the feedback layer and the weight value of the hidden layer. The initializing weight value subunit specifically includes: and initializing a network weight by using a random number between-1 and 1, namely, a weight value of a feedback layer and a weight value of a hidden layer, so that the network weight has a random characteristic as a whole, and is more favorable for extracting the characteristics of the image block and training an Elman neural network model.
The weight value subunit is used for calculating the weight value of the output layer according to the initialized loss function, the weight value of the feedback layer and the weight value of the hidden layer to obtain the weight-initialized Elman neural network model. Specifically, the weight value of the output layer is obtained from the formula of the initialized loss function. For the mean square error loss function the formula is W = (Z^T Z)^(-1) Z^T Y, where W denotes the weight value of the output layer, Z denotes the image feature matrix composed of the features extracted by all the feature extraction blocks, Z^T denotes the transpose of the image feature matrix, and Y denotes the first image blocks. For the L2 loss function the formula is W = (Z^T Z + λI)^(-1) Z^T Y, where λ denotes the regularization parameter and I denotes the identity matrix.
And the primary subunit is used for inputting the training set into the Elman neural network model with initialized weights, so that the weight values of all layers are further adjusted to obtain the trained primary Elman neural network model.
And the neural network model unit is used for adjusting the preliminary Elman neural network model by using the test set to obtain the constructed Elman neural network model.
The neural network model unit comprises:
and the fourth image block subunit is used for importing the test set into the preliminary Elman neural network model to obtain a reconstructed fourth image block.
And the reconstruction precision subunit is used for evaluating the first reconstruction precision of the reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structure similarity and the weighted peak signal-to-noise ratio.
And the adjusting subunit is used for adjusting the number of the feature extraction blocks, the number of the hidden layers and the loss function according to the first reconstruction precision to obtain the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers and the adjusted loss function. The adjusting subunit specifically includes: and increasing the number of the feature extraction blocks, the number of the hidden layers and the size of the regularization parameter according to the first reconstruction precision.
And the weight updating subunit is used for calculating the weight value of the updated output layer according to the adjusted loss function, the weight value of the feedback layer and the weight value of the hidden layer.
And the updating initial sub-unit is used for obtaining an updated preliminary Elman neural network model according to the number of the adjusted feature extraction blocks, the number of the hidden layers, the weight value of the feedback layer, the weight value of the hidden layer and the weight value of the updated output layer.
And the second reconstruction precision subunit is used for importing the test set into the updated preliminary Elman neural network model to obtain an updated reconstructed fourth image block, and evaluating the second reconstruction precision of the updated reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structure similarity and the weighted peak signal-to-noise ratio.
The judging subunit is used for judging whether the second reconstruction accuracy is greater than the first reconstruction accuracy to obtain a first judgment result; if the first judgment result is positive, the yes subunit is executed; if the first judgment result is negative, the no subunit is executed.
The yes subunit is used for assigning the value of the second reconstruction accuracy to the first reconstruction accuracy and returning to the adjusting subunit. In practical application, the preliminary Elman neural network model corresponding to the second reconstruction accuracy and the preliminary Elman neural network model corresponding to the first reconstruction accuracy are saved.
The no subunit is used for judging whether the second reconstruction accuracy is equal to the first reconstruction accuracy to obtain a second judgment result; if the second judgment result is positive, the first determining subunit is executed; if the second judgment result is negative, the second determining subunit is executed.
The first determining subunit is used for determining the preliminary Elman neural network model corresponding to the second reconstruction accuracy as the constructed Elman neural network model.
The second determining subunit is used for determining the preliminary Elman neural network model corresponding to the first reconstruction accuracy as the constructed Elman neural network model. Specifically, the constructed Elman neural network model is determined through the network hyper-parameters of the preliminary Elman neural network model corresponding to the first reconstruction accuracy. The network hyper-parameters comprise: the number of feature extraction blocks, the number of hidden layers, the weight value of the feedback layer, the weight value of the hidden layer, and the weight value of the output layer. In practical applications, if the fluctuation of the reconstruction accuracy is large, it indicates that the Elman neural network is over-fitted or under-fitted, which can be adjusted by decreasing or increasing the regularization parameter.
And an image to be reconstructed obtaining module 202, configured to obtain an image to be reconstructed.
The image reconstruction module 203 is used for importing the image to be reconstructed into the constructed Elman neural network model to obtain a reconstructed image. Specifically, the acquired image to be reconstructed is preprocessed with bicubic interpolation, the preprocessed image to be reconstructed is divided into a plurality of image blocks to be reconstructed, and the plurality of image blocks to be reconstructed are imported into the constructed Elman neural network model to obtain the reconstructed image.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. An image super-resolution reconstruction method, comprising:
constructing an Elman neural network model to obtain the constructed Elman neural network model; the Elman neural network model comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer;
acquiring an image to be reconstructed;
importing the image to be reconstructed into the constructed Elman neural network model to obtain a reconstructed image;
the specific process for constructing the Elman neural network model comprises the following steps:
acquiring a training set and a test set; the training set comprises first image blocks and second image blocks, wherein the number of first image blocks is equal to the number of second image blocks, the resolution of the first image blocks is higher than that of the second image blocks, and the content of the first image blocks is consistent with that of the second image blocks; the test set comprises third image blocks and fourth image blocks, wherein the number of third image blocks is equal to the number of fourth image blocks, the resolution of the third image blocks is higher than that of the fourth image blocks, and the content of the third image blocks is consistent with that of the fourth image blocks;
determining parameters of an Elman neural network model according to the training set;
training the Elman neural network model by using the training set to obtain a preliminary Elman neural network model;
and adjusting the preliminary Elman neural network model by using the test set to obtain the constructed Elman neural network model.
2. The image super-resolution reconstruction method according to claim 1, wherein the acquiring of the training set and the test set specifically includes:
acquiring a first image;
converting the first image into a second image; the resolution of the first image is higher than the resolution of the second image;
dividing the first image into a plurality of first image blocks, and dividing the second image into a plurality of second image blocks;
acquiring a third image;
converting the third image into a fourth image; the resolution of the third image is higher than the resolution of the fourth image;
and dividing the third image into a plurality of third image blocks, and dividing the fourth image into a plurality of fourth image blocks.
3. The image super-resolution reconstruction method according to claim 1, wherein the determining parameters of the Elman neural network model according to the training set specifically comprises:
determining the number of neurons of the input layer according to the dimension of the second image block;
determining the number of neurons of the output layer according to the dimension of the first image block;
initializing the number of feature extraction blocks in the feature extraction layer and the number of hidden layers in the feature extraction blocks; the feature extraction layer includes a plurality of the feature extraction blocks, and the feature extraction blocks include a feedback layer and the hidden layer.
4. The image super-resolution reconstruction method according to claim 3, wherein the training of the Elman neural network model by the training set to obtain a preliminary Elman neural network model specifically comprises:
initializing a loss function, the loss function comprising a mean square error loss function and an L2 loss function;
initializing a weight value of the feedback layer and a weight value of the hidden layer;
calculating the weight value of the output layer according to the initialized loss function, the weight value of the feedback layer and the weight value of the hidden layer to obtain an Elman neural network model with initialized weight;
and inputting the training set into the Elman neural network model initialized by the weight, and further adjusting the weight value of each layer to obtain a trained preliminary Elman neural network model.
5. The image super-resolution reconstruction method according to claim 4, wherein the adjusting the preliminary Elman neural network model by using the test set to obtain the constructed Elman neural network model specifically comprises:
importing the test set into the preliminary Elman neural network model to obtain a reconstructed fourth image block;
evaluating the first reconstruction precision of the reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structure similarity and the weighted peak signal-to-noise ratio;
adjusting the number of the feature extraction blocks, the number of the hidden layers and the loss function according to the first reconstruction precision to obtain the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers and the adjusted loss function;
calculating to obtain an updated weight value of the output layer according to the adjusted loss function, the weight value of the feedback layer and the weight value of the hidden layer;
obtaining an updated preliminary Elman neural network model according to the adjusted number of the feature extraction blocks, the number of the hidden layers, the weight value of the feedback layer, the weight value of the hidden layer and the updated weight value of the output layer;
importing the test set into an updated preliminary Elman neural network model to obtain an updated reconstructed fourth image block, and evaluating second reconstruction accuracy of the updated reconstructed fourth image block by utilizing a peak signal-to-noise ratio, a multi-scale structure similarity and a weighted peak signal-to-noise ratio;
judging whether the second reconstruction precision is greater than the first reconstruction precision or not to obtain a first judgment result;
if so, assigning the value of the second reconstruction accuracy to the first reconstruction accuracy, and returning to the step of adjusting the number of the feature extraction blocks, the number of the hidden layers, and the loss function according to the first reconstruction accuracy to obtain the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers, and the adjusted loss function;
if the first judgment result is negative, judging whether the second reconstruction precision is equal to the first reconstruction precision or not to obtain a second judgment result;
if the second judgment result is positive, determining the preliminary Elman neural network model corresponding to the second reconstruction accuracy as the constructed Elman neural network model;
and if the second judgment result is negative, determining the preliminary Elman neural network model corresponding to the first reconstruction precision as the constructed Elman neural network model.
6. An image super-resolution reconstruction system, comprising:
the building module is used for constructing the Elman neural network model to obtain the constructed Elman neural network model; the Elman neural network model comprises an input layer, a feature extraction layer, a cascade operation layer and an output layer;
the image acquisition module is used for acquiring the image to be reconstructed;
the image reconstruction module is used for importing the image to be reconstructed into the constructed Elman neural network model to obtain a reconstructed image;
the building module comprises:
the acquisition unit is used for acquiring a training set and a test set; the training set comprises first image blocks and second image blocks, the number of the first image blocks is equal to the number of the second image blocks, the resolution of the first image blocks is higher than that of the second image blocks, and the content of the first image blocks is consistent with that of the second image blocks; the test set comprises third image blocks and fourth image blocks, the number of the third image blocks is equal to the number of the fourth image blocks, the resolution of the third image blocks is higher than that of the fourth image blocks, and the content of the third image blocks is consistent with that of the fourth image blocks;
the parameter unit is used for determining parameters of the Elman neural network model according to the training set;
the preliminary Elman neural network model training unit is used for training the Elman neural network model by utilizing the training set to obtain a preliminary Elman neural network model;
and the neural network model unit is used for adjusting the preliminary Elman neural network model by using the test set to obtain the constructed Elman neural network model.
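Purely for orientation, the sketch below wires the three claimed modules together in Python; build_model and load_image are hypothetical callables standing in for the building module and the image acquisition module, and nothing about their internals is taken from the claim.

```python
class SuperResolutionSystem:
    """Illustrative wiring of the claimed modules."""
    def __init__(self, build_model, load_image):
        self.model = build_model()        # building module: yields the constructed model
        self.load_image = load_image      # image acquisition module

    def reconstruct(self, source):
        image = self.load_image(source)   # acquire the image to be reconstructed
        return self.model(image)          # image reconstruction module
```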
7. The image super-resolution reconstruction system according to claim 6, wherein the acquisition unit comprises:
a first image subunit for acquiring a first image;
the first conversion subunit is used for converting the first image into a second image; the resolution of the first image is higher than the resolution of the second image;
a first dividing subunit, configured to divide the first image into a plurality of first image blocks, and divide the second image into a plurality of second image blocks;
a third image subunit for acquiring a third image;
the second conversion subunit is used for converting the third image into a fourth image; the resolution of the third image is higher than the resolution of the fourth image;
and the second dividing subunit is configured to divide the third image into a plurality of the third image blocks and divide the fourth image into a plurality of the fourth image blocks.
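The following Python sketch illustrates one possible way to produce the image pairs and blocks described in claim 7. The non-overlapping tiling, the averaging-based downsampling and the 64x64 / 16 / 8 sizes are all assumptions; the claim only requires a high-resolution image, a lower-resolution counterpart and their division into image blocks.

```python
import numpy as np

def to_blocks(image, block_size):
    """Split a 2-D image into non-overlapping block_size x block_size patches."""
    h, w = image.shape
    return [image[r:r + block_size, c:c + block_size]
            for r in range(0, h - block_size + 1, block_size)
            for c in range(0, w - block_size + 1, block_size)]

def downsample(image, factor=2):
    """Reduce resolution by averaging factor x factor neighbourhoods; this stands
    in for any conversion of a high-resolution image into a low-resolution one."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor
    return image[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Example: a high-resolution first image and its low-resolution second image,
# each divided into the same number of corresponding image blocks.
first_image = np.random.rand(64, 64)
second_image = downsample(first_image, factor=2)
first_blocks = to_blocks(first_image, 16)
second_blocks = to_blocks(second_image, 8)
```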
8. The image super-resolution reconstruction system according to claim 6, wherein the parameter unit comprises:
the input layer subunit is used for determining the number of the neurons of the input layer according to the dimension of the second image block;
the output layer subunit is used for determining the number of the neurons of the output layer according to the dimension of the first image block;
an initialization subunit, configured to initialize the number of feature extraction blocks in the feature extraction layer and the number of hidden layers in each feature extraction block; the feature extraction layer includes a plurality of the feature extraction blocks, and each feature extraction block includes a feedback layer and a hidden layer.
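As a point of reference only, the Python sketch below shows an Elman-style feature extraction block with a hidden layer and a feedback (context) layer, and how the input and output neuron counts can follow the image block dimensions. The tanh activation, the weight scaling and the 8x8 / 16x16 block sizes are assumptions not stated in the claim.

```python
import numpy as np

class ElmanBlock:
    """One feature extraction block: a hidden layer whose previous activation is
    fed back through a context (feedback) layer, in the spirit of an Elman unit."""
    def __init__(self, in_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.standard_normal((hidden_dim, in_dim)) * 0.01       # input -> hidden
        self.w_ctx = rng.standard_normal((hidden_dim, hidden_dim)) * 0.01  # context -> hidden
        self.context = np.zeros(hidden_dim)                                # feedback layer state

    def forward(self, x):
        h = np.tanh(self.w_in @ x + self.w_ctx @ self.context)
        self.context = h                  # stored for the next forward pass
        return h

# Input layer size from the low-resolution (second) image block, output layer
# size from the high-resolution (first) image block; the sizes are assumed.
n_input = 8 * 8       # 64 input-layer neurons
n_output = 16 * 16    # 256 output-layer neurons
block = ElmanBlock(n_input, hidden_dim=128)
features = block.forward(np.zeros(n_input))
```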
9. The image super-resolution reconstruction system according to claim 8, wherein the preliminary Elman neural network model training unit comprises:
a loss function subunit for initializing a loss function comprising a mean square error loss function and an L2 loss function;
the initialization weight value subunit is used for initializing the weight value of the feedback layer and the weight value of the hidden layer;
the weight value subunit is used for calculating the weight value of the output layer according to the initialized loss function, the weight value of the feedback layer and the weight value of the hidden layer to obtain a weight-initialized Elman neural network model;
and the preliminary subunit is used for inputting the training set into the weight-initialized Elman neural network model, so that the weight value of each layer is further adjusted to obtain the trained preliminary Elman neural network model.
10. The image super-resolution reconstruction system according to claim 9, wherein the neural network model unit comprises:
a fourth image block subunit, configured to import the test set into the preliminary Elman neural network model to obtain a reconstructed fourth image block;
the first reconstruction accuracy subunit is used for evaluating a first reconstruction accuracy of the reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structural similarity and the weighted peak signal-to-noise ratio;
an adjusting subunit, configured to adjust the number of the feature extraction blocks, the number of the hidden layers, and the loss function according to the first reconstruction accuracy, so as to obtain the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers, and the adjusted loss function;
the weight updating subunit is configured to calculate an updated weight value of the output layer according to the adjusted loss function, the weight value of the feedback layer and the weight value of the hidden layer;
the updating subunit is configured to obtain an updated preliminary Elman neural network model according to the adjusted number of the feature extraction blocks, the adjusted number of the hidden layers, the weight value of the feedback layer, the weight value of the hidden layer and the updated weight value of the output layer;
the second reconstruction accuracy subunit is used for importing the test set into the updated preliminary Elman neural network model to obtain an updated reconstructed fourth image block, and evaluating a second reconstruction accuracy of the updated reconstructed fourth image block by utilizing the peak signal-to-noise ratio, the multi-scale structural similarity and the weighted peak signal-to-noise ratio;
the judging subunit is used for judging whether the second reconstruction accuracy is greater than the first reconstruction accuracy to obtain a first judgment result; if the first judgment result is yes, the yes subunit is executed; if the first judgment result is no, the no subunit is executed;
the yes subunit is used for assigning the value of the second reconstruction accuracy to the first reconstruction accuracy and returning to the adjusting subunit;
the no subunit is used for judging whether the second reconstruction accuracy is equal to the first reconstruction accuracy to obtain a second judgment result; if the second judgment result is yes, the yes-determining subunit is executed; if the second judgment result is no, the no-determining subunit is executed;
the yes-determining subunit is used for determining the preliminary Elman neural network model corresponding to the second reconstruction accuracy as the constructed Elman neural network model;
and the no-determining subunit is used for determining the preliminary Elman neural network model corresponding to the first reconstruction accuracy as the constructed Elman neural network model.
CN201911105707.8A 2019-11-13 2019-11-13 Image super-resolution reconstruction method and system Pending CN110838087A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911105707.8A CN110838087A (en) 2019-11-13 2019-11-13 Image super-resolution reconstruction method and system


Publications (1)

Publication Number Publication Date
CN110838087A (en) 2020-02-25

Family

ID=69576378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911105707.8A Pending CN110838087A (en) 2019-11-13 2019-11-13 Image super-resolution reconstruction method and system

Country Status (1)

Country Link
CN (1) CN110838087A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3173983A1 (en) * 2015-11-26 2017-05-31 Siemens Aktiengesellschaft A method and apparatus for providing automatically recommendations concerning an industrial system
CN105844332A (en) * 2016-03-10 2016-08-10 中国石油大学(华东) Fast recursive Elman neural network modeling and learning algorithm
CN110136063A (en) * 2019-05-13 2019-08-16 南京信息工程大学 A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Ying Zilu et al.: "Single-image super-resolution reconstruction with a multi-scale dense residual network", Journal of Image and Graphics *
PanChuang AI: "TensorFlow series (7): an overview of RNN recurrence", CSDN Blog *
Chen Xiaoya et al.: "Distribution network planning based on long-term load forecasting and tie-line analysis", Guangdong Electric Power *
Han Mingyu: "Research on the application of recurrent neural networks in image super-resolution", China Master's Theses Full-text Database, Information Science and Technology *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200225)