CN112085655B - Face super-resolution method based on a dense residual attention face prior network - Google Patents

Face super-resolution method based on a dense residual attention face prior network

Info

Publication number
CN112085655B
CN112085655B (application CN202010847791.7A)
Authority
CN
China
Prior art keywords: face, attention, resolution, network, module
Prior art date
Legal status: Active
Application number
CN202010847791.7A
Other languages
Chinese (zh)
Other versions
CN112085655A (en
Inventor
路小波
张杨
Current Assignee: Southeast University
Original Assignee: Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202010847791.7A
Publication of CN112085655A
Application granted
Publication of CN112085655B
Active legal status
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046 Scaling of whole images or parts thereof using neural networks
    • G06T3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076 Scaling of whole images or parts thereof based on super-resolution, using the original low-resolution images to iteratively correct the high-resolution images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face super-resolution method based on a dense residual attention face prior network, which comprises the following steps: separately construct a skip-connected dense residual attention module, a face structure prior prediction module and an up-sampling module; connect the skip-connected dense residual attention module and the face structure prior prediction module in parallel, then cascade the up-sampling module and an image reconstruction layer to construct a dense residual attention face prior network; preprocess a public data set and divide the processed data into a training set and a test set; train the dense residual attention face prior network; input the images in the test set into the trained dense residual attention face prior network and output the reconstructed high-resolution face images. By training the dense residual attention face prior network, the invention realizes super-resolution processing of test face images, effectively recovers super-resolution face images with high-frequency facial details, and at the same time preserves identity information.

Description

Face super-resolution method based on a dense residual attention face prior network
Technical Field
The invention belongs to the technical field of image processing and face super-resolution, and particularly relates to a face super-resolution method based on a dense residual attention face prior network.
Background
In face recognition tasks, it is usually desirable to obtain high-quality images that are high-resolution, clear and noise-free. A high-quality face image not only has a good visual effect, but also contains a large amount of detail information required by subsequent processing. However, in practical acquisition and transmission systems, the captured face images often have low resolution due to limitations of the acquisition environment, imaging hardware, network bandwidth and so on, which greatly affects the accuracy of subsequent face recognition tasks. Improving the hardware of the imaging system and controlling the acquisition environment are the most direct ways to improve imaging quality, but this approach not only increases cost, it is also difficult to apply in many real application scenarios (e.g., face analysis in surveillance). By comparison, face super-resolution, as a software-level method, is simple to implement, has low cost and has very important application value.
The face super-resolution algorithm, also known as face hallucination, aims to reconstruct a clear high-resolution face image from a low-resolution image. General image super-resolution is an important research branch in the image field; however, face images have a particular geometric structure and complex texture information, and traditional general-purpose super-resolution methods cannot achieve a good face reconstruction effect. Considering these special properties of face images, face super-resolution generally exploits the inherent attributes of face images to optimize the reconstruction result, so as to recover subjectively realistic and natural face images. Existing face image super-resolution reconstruction algorithms can mainly be divided into two types: interpolation-based methods and learning-based methods.
Existing face super-resolution methods generally fail to fully exploit the association between face prior information and non-local information, so high-frequency facial information is lost and high-resolution face images with excessive artifacts are produced. Meanwhile, most face super-resolution algorithms only consider the mean square error loss in their design; this loss can achieve good objective indicators, but it cannot account for the intrinsic information of the face. Therefore, how to design a new loss function and recover accurate face structure and identity information is also an open problem for face super-resolution algorithms.
Disclosure of Invention
The invention aims to: in order to overcome the defects in the prior art, provide a face super-resolution method based on a dense residual attention face prior network, which can effectively recover high-frequency facial details and thus obtain a high-quality face super-resolution result, and provide an identity-invariant feature loss function that supervises the network to generate face images with genuine identity information.
The technical scheme is as follows: in order to achieve the above object, the present invention provides a face super-resolution method based on a dense residual attention face prior network, comprising the following steps:
S1: separately construct a skip-connected dense residual attention module, a face structure prior prediction module and an up-sampling module;
S2: connect the skip-connected dense residual attention module and the face structure prior prediction module in parallel, then cascade the up-sampling module and an image reconstruction layer to construct a dense residual attention face prior network;
S3: preprocess a public data set and divide the processed data into a training set and a test set;
S4: train the dense residual attention face prior network with data in the training set;
S5: input the images in the test set into the trained dense residual attention face prior network and output the reconstructed high-resolution face images.
Further, the construction process of the skip-connected dense residual attention module in step S1 specifically includes:
A1: construct a cascaded residual unit: the cascaded residual unit consists of convolutional layers, batch normalization layers, activation functions and a skip connection;
A2: construct a non-local attention unit;
A3: form a residual attention module from the cascaded residual unit and the non-local attention unit;
A4: construct the skip-connected dense residual attention module: the skip-connected dense residual attention module consists of four residual attention modules and a skip connection; the residual attention modules are connected end to end, and the input of the first residual attention module is then combined through the skip connection with the output of the last residual attention module as the final output of the skip-connected dense residual attention module.
Further, the construction process of the face structure prior prediction module in step S1 specifically includes:
B1: construct a spatial transformer module;
B2: construct a conventional hourglass network unit;
B3: stack one spatial transformer module and four consecutive hourglass network units, and apply intermediate supervision to each of them, to form the face structure prior prediction module.
Further, the construction process of the up-sampling module in step S1 specifically includes:
C1: introduce a sub-pixel convolution layer;
C2: set an 8-fold up-sampling factor for the introduced sub-pixel convolution layer.
Further, the construction process of the dense residual attention face prior network in step S2 specifically includes:
D1: connect the skip-connected dense residual attention module and the face structure prior prediction module in parallel, and splice their output features along the feature channel dimension;
D2: on the basis of step D1, continue to cascade the up-sampling module and the image reconstruction layer to complete the construction of the dense residual attention face prior network.
Further, the specific process of step S3 is as follows:
E1: preprocess the public data set and normalize the pixel values of each image matrix to between 0 and 1 to obtain normalized image matrices;
E2: randomly rotate the image matrices so that the image data become unaligned;
E3: perform bicubic interpolation down-sampling on the augmented image data, reducing the length and width of each image by a factor of 8 according to the magnification, to obtain paired low-resolution and high-resolution original images;
E4: randomly shuffle the low-resolution and high-resolution original image data in pairs, select 80% of the data as the training data set and the rest as the test data set.
Further, the training process of the dense residual attention face prior network in step S4 is as follows:
F1: construct a mean square error function as a loss function;
F2: construct an identity-invariant feature loss function;
F3: construct a face structure feature loss function;
F4: initialize the parameters of the dense residual attention face prior network and set the training parameters;
F5: initialize the parameters in the dense residual attention face prior network to a Gaussian distribution with mean 0 and standard deviation 0.001, and initialize the biases to 0; set the learning rate, the number of iterations and the number of samples per training batch;
F6: train the network using the low-resolution images and the corresponding high-resolution images in the training data set, and update the parameters of the dense residual attention face prior network through an optimization algorithm;
F7: train the dense residual attention face prior network until the overall loss value e < 10^-3 or the number of iterations t > 120, and save the trained network model.
Further, step A2 specifically includes:
The non-local attention unit consists of three sub-branches, which are connected to the g, h and z convolution layers respectively. The outputs of the two sub-branches connected to the g and h convolution layers are reshaped into matrices and multiplied together; the result of this matrix multiplication is fed into a classifier. The classifier output is then multiplied, again as a matrix product, with the output of the sub-branch connected to the z convolution layer, and the result is passed through a further u convolution layer. Finally, the output of the u convolution layer is added element-wise to the reshaped original input of the non-local module.
Further, step B3 specifically includes: the face structure prior prediction module consists of one spatial transformer and four conventional hourglass networks; the spatial transformer aligns the input unaligned low-resolution face image; the four conventional hourglass networks are connected end to end so as to produce successively more accurate face prior predictions; the output of each hourglass network is supervised by the ground truth of the face prior.
Further, the identity-invariant feature loss function in step F2 is:
L_{id} = \left\| \phi(f) - \phi(h) \right\|_2^2
where φ(·) denotes the feature vector extracted by the average pooling layer of a trained ResNet network, and f and h denote the generated super-resolution face and the original high-resolution face picture, respectively.
Further, the face structure loss function in step F3 is:
L_{fs} = \sum_{k=1}^{P} \left\| H_k(f_i) - H_k(h_i) \right\|_2^2
where H_k(f_i) denotes the heatmap of the k-th face landmark point predicted on the intermediately generated features by the face structure prior prediction module, H_k(h_i) denotes the ground-truth heatmap of the k-th face landmark point computed on the original high-resolution face picture by a trained Face Alignment Network (FAN), and P denotes the total number of selected face landmark points.
Beneficial effects: compared with the prior art, the method introduces residual attention modules that consider non-local features into the neural network model, and builds the model by stacking these residual attention modules and designing a face structure prior prediction module, thereby extracting non-local attention features and face structure prior information; the extracted non-local attention features and face structure prior features are then spliced along the channel dimension and sent to the up-sampling stage. This design gives the neural network model a strong ability to recover high-frequency information, yielding high-quality face super-resolution results; moreover, the model is only of moderate size, so it runs very fast.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a detailed structural schematic diagram of a dense residual attention face prior network;
FIG. 3 is a detailed schematic diagram of the skip-connected dense residual attention module;
FIG. 4 is a detailed schematic diagram of the residual unit;
FIG. 5 is a detailed schematic of a non-local attention unit;
FIG. 6 is a schematic diagram showing the comparison of test results with those of other similar methods.
Detailed Description
The invention is further elucidated below in connection with the drawings and the specific embodiments.
The invention provides a face super-resolution method based on a dense residual attention face prior network; with reference to fig. 1, the method comprises the following steps:
Step 1: the method for constructing the concentrated residual attention module of the jumper connection specifically comprises the following steps 11-14:
Step 11: constructing a residual error unit: the residual unit consists of an inner winding lamination layer, a batch processing layer, an activation function and jumper connection, and is particularly shown in fig. 4;
Step 12: building a non-local attention unit: the non-local attention unit is composed of three sub-branches, wherein each sub-branch is respectively connected with g, h and z convolution layers, an image matrix obtained after matrix deformation is carried out on output results of the two sub-branches after the g and h convolution layers are connected is subjected to matrix multiplication operation, a result obtained by the matrix multiplication operation is input into a classifier, the result obtained after the classifier and the output result of the sub-branch connected with the z convolution layers are subjected to matrix multiplication operation again, then the result obtained by the matrix multiplication operation is connected with a u convolution layer again, and the result obtained after the u convolution layer is added with corresponding elements of original input of a non-local module after the matrix deformation, wherein the specific steps are shown in fig. 5;
Step 13: cascading the residual error unit and the non-local attention unit to form a residual error attention module;
Step 14: and (3) constructing a concentrated residual attention module of jumper connection: the jumper-connected dense residual attention module is composed of four residual attention modules and jumper connection, wherein the residual attention modules are connected together end to end, then the input of the first residual attention module is combined with the output of the last residual attention module through the jumper connection to serve as the final output of the jumper-connected dense residual attention module, and the method is particularly shown in fig. 3.
Step 2: constructing a facial structure prior prediction module, which comprises the steps of 21-23:
Step 21: constructing a space transformer module;
Step 22: constructing a traditional hourglass network unit;
Step 23: stacking 1 space transformer module and four continuous hourglass network units, and respectively arranging intermediate supervision to form a face structure priori prediction module; the method comprises the following steps: the face structure priori prediction module consists of 1 space transformer and 4 traditional hourglass networks; the space transformer can realize the alignment treatment of the input unaligned low-resolution face image; four traditional hourglass networks are connected together end to end, so that more accurate priori prediction of the face of the person is sequentially realized; the output of each hourglass network is supervised by a truth value of a face priori (68 thermodynamic diagram truth values of key feature points of a face), and a supervision function can refer to the step (64);
Step 3: construct the up-sampling module, which includes steps 31 and 32:
Step 31: introduce a sub-pixel convolution layer;
Step 32: set an 8-fold up-sampling factor for the introduced sub-pixel convolution layer.
Step 4: the dense residual attention module and the face structure prior prediction module connected in parallel by the jumper are connected in series, and then the up-sampling module and the image reconstruction layer are cascaded to construct a dense residual attention face prior network, which comprises the steps 41 and 42:
step 41: the intensive residual attention module connected by the jumper and the facial structure prior prediction module are connected in parallel (the characteristic channel dimension splicing of the output characteristics is realized);
Step 42: the up-sampling module and the image reconstruction layer continue to be cascaded on the basis of step 41, as shown in particular in fig. 2.
Step 5: preprocessing the published data set and dividing the processed data into a training set and a test set, which comprises the steps 51-54:
Step 51: preprocessing the published CelebFaces Attributes (CelebA) dataset, normalizing the pixel value of each image matrix element to be between [0,1] to obtain a normalized image matrix;
step 52: randomly rotating the image matrix to realize the misalignment processing of the image data;
Step 53: performing bicubic interpolation downsampling on the enhanced image data, and proportionally reducing the length and width of each image by 8 times according to the magnification factor to obtain a low-resolution original image and a high-resolution original image;
Step 54: the low-resolution original image and the high-resolution original image data are subjected to random scrambling in pairs, 80% of the data are selected as training data sets, and the rest are selected as test data sets.
Step 6: training a dense residual attention face a priori network using training data, comprising steps 61-67:
Step 61: constructing a mean square error function as a loss function;
Step 62: building an identity-invariant feature loss function:
the method for calculating the identity-invariant feature loss function comprises the following steps:
wherein phi (-) represents the feature vector extracted by the average pooling layer of the trained Resnet network, Representing the generated super-resolution face/>And the original high-resolution face picture (h);
Step 63: construct the face structure feature loss function:
the face structure loss function is calculated as
L_{fs} = \sum_{k=1}^{P} \left\| H_k(f_i) - H_k(h_i) \right\|_2^2
where H_k(f_i) denotes the heatmap of the k-th face landmark point predicted on the intermediately generated features by the face structure prior prediction module, and H_k(h_i) denotes the ground-truth heatmap of the k-th face landmark point computed on the original high-resolution face picture by a trained Face Alignment Network (FAN). P denotes the total number of selected face landmark points and is set to 68 in this embodiment (a code sketch of both loss terms follows step 67);
Step 64: initialize the parameters of the dense residual attention face prior network and set the training parameters;
Step 65: initialize the parameters in the dense residual attention face prior network to a Gaussian distribution with mean 0 and standard deviation 0.001, and initialize the biases to 0; set the learning rate, the number of iterations and the number of samples per training batch;
Step 66: train the network using the low-resolution images and the corresponding high-resolution images in the training data set, and update the parameters of the dense residual attention face prior network through an optimization algorithm;
Step 67: training dense residual errors to pay attention to a face prior network until the integral loss value e <10 -3 or the iteration number t >120, and storing a trained network model.
Step 7: and inputting the images in the test data set into a trained dense residual error attention face prior network, and outputting the reconstructed high-resolution face image.
Through the steps, the dense residual error attention face prior network is trained, and then super-resolution processing of the tested face image is realized.
In this embodiment, the face images finally output by the method are compared with face images obtained by other methods; the comparison results are shown in fig. 6, in which c is the ground-truth image, a, b, d, e, f and g are images obtained by the LR input, interpolation, TDAE, CBN, SRGAN and VDSR methods respectively, and j is the image obtained by the proposed method. It can be seen that the face image obtained by the method has high resolution, effectively recovers high-frequency facial details and preserves identity information.

Claims (8)

1. A face super-resolution method based on a dense residual attention face prior network, characterized in that the method comprises the following steps:
S1: separately constructing a skip-connected dense residual attention module, a face structure prior prediction module and an up-sampling module;
S2: connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel, then cascading the up-sampling module and an image reconstruction layer to construct a dense residual attention face prior network;
S3: preprocessing a public data set and dividing the processed data into a training set and a test set;
S4: training the dense residual attention face prior network with data in the training set;
S5: inputting the images in the test set into the trained dense residual attention face prior network and outputting reconstructed high-resolution face images;
the training process of the dense residual attention face prior network in step S4 is as follows:
F1: constructing a mean square error function as a loss function;
F2: constructing an identity-invariant feature loss function;
F3: constructing a face structure feature loss function;
F4: initializing the parameters of the dense residual attention face prior network and setting the training parameters;
F5: initializing the parameters in the dense residual attention face prior network to a Gaussian distribution with mean 0 and standard deviation 0.001, and initializing the biases to 0; setting the learning rate, the number of iterations and the number of samples per training batch;
F6: training the network using the low-resolution images and the corresponding high-resolution images in the training data set, and updating the parameters of the dense residual attention face prior network through an optimization algorithm;
F7: training the dense residual attention face prior network until the overall loss value e < 10^-3 or the number of iterations t > 120, and saving the trained network model;
the identity-invariant feature loss function in step F2 is:
L_{id} = \left\| \phi(f) - \phi(h) \right\|_2^2
wherein φ(·) denotes the feature vector extracted by the average pooling layer of a trained ResNet network, and f and h denote the generated super-resolution face and the original high-resolution face picture, respectively;
the face structure loss function in step F3 is:
L_{fs} = \sum_{k=1}^{P} \left\| H_k(f_i) - H_k(h_i) \right\|_2^2
wherein H_k(f_i) denotes the heatmap of the k-th face landmark point predicted on the intermediately generated features by the face structure prior prediction module, H_k(h_i) denotes the ground-truth heatmap of the k-th face landmark point computed on the original high-resolution face picture by a trained Face Alignment Network, and P denotes the total number of selected face landmark points.
2. The face super-resolution method based on a dense residual attention face prior network according to claim 1, characterized in that the construction process of the skip-connected dense residual attention module in step S1 specifically includes:
A1: constructing a cascaded residual unit: the cascaded residual unit consists of convolutional layers, batch normalization layers, activation functions and a skip connection;
A2: constructing a non-local attention unit;
A3: forming a residual attention module from the cascaded residual unit and the non-local attention unit;
A4: constructing the skip-connected dense residual attention module: the skip-connected dense residual attention module consists of four residual attention modules and a skip connection; the residual attention modules are connected end to end, and the input of the first residual attention module is then combined through the skip connection with the output of the last residual attention module as the final output of the skip-connected dense residual attention module.
3. The face super-resolution method based on a dense residual attention face prior network according to claim 1, characterized in that the construction process of the face structure prior prediction module in step S1 specifically includes:
B1: constructing a spatial transformer module;
B2: constructing a conventional hourglass network unit;
B3: stacking one spatial transformer module and four consecutive hourglass network units, and applying intermediate supervision to each of them, to form the face structure prior prediction module.
4. The face super-resolution method based on a dense residual attention face prior network according to claim 1, characterized in that the construction process of the up-sampling module in step S1 specifically includes:
C1: introducing a sub-pixel convolution layer;
C2: setting an 8-fold up-sampling factor for the introduced sub-pixel convolution layer.
5. The face super-resolution method based on a dense residual attention face prior network according to claim 1, characterized in that the construction process of the dense residual attention face prior network in step S2 specifically includes:
D1: connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel, and splicing their output features along the feature channel dimension;
D2: on the basis of step D1, continuing to cascade the up-sampling module and the image reconstruction layer to complete the construction of the dense residual attention face prior network.
6. The face super-resolution method based on a dense residual attention face prior network according to claim 1, characterized in that the specific process of step S3 is as follows:
E1: preprocessing the public data set and normalizing the pixel values of each image matrix to between 0 and 1 to obtain normalized image matrices;
E2: randomly rotating the image matrices so that the image data become unaligned;
E3: performing bicubic interpolation down-sampling on the augmented image data, reducing the length and width of each image by a factor of 8 according to the magnification, to obtain paired low-resolution and high-resolution original images;
E4: randomly shuffling the low-resolution and high-resolution original image data in pairs, selecting 80% of the data as the training data set and the rest as the test data set.
7. The face super-resolution method based on a dense residual attention face prior network according to claim 2, characterized in that step A2 specifically includes:
the non-local attention unit consists of three sub-branches, which are connected to the g, h and z convolution layers respectively; the outputs of the two sub-branches connected to the g and h convolution layers are reshaped into matrices and multiplied together; the result of this matrix multiplication is fed into a classifier; the classifier output is then multiplied, again as a matrix product, with the output of the sub-branch connected to the z convolution layer, and the result is passed through a further u convolution layer; finally, the output of the u convolution layer is added element-wise to the reshaped original input of the non-local module.
8. The face super-resolution method based on a dense residual attention face prior network according to claim 3, characterized in that step B3 specifically includes: the face structure prior prediction module consists of one spatial transformer and four conventional hourglass networks; the spatial transformer aligns the input unaligned low-resolution face image; the four conventional hourglass networks are connected end to end so as to produce successively more accurate face prior predictions; the output of each hourglass network is supervised by the ground truth of the face prior.
CN202010847791.7A 2020-08-21 2020-08-21 Face super-resolution method based on dense residual error attention face priori network Active CN112085655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010847791.7A CN112085655B (en) 2020-08-21 2020-08-21 Face super-resolution method based on dense residual error attention face priori network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010847791.7A CN112085655B (en) 2020-08-21 2020-08-21 Face super-resolution method based on dense residual error attention face priori network

Publications (2)

Publication Number Publication Date
CN112085655A CN112085655A (en) 2020-12-15
CN112085655B true CN112085655B (en) 2024-04-26

Family

ID=73728482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010847791.7A Active CN112085655B (en) 2020-08-21 2020-08-21 Face super-resolution method based on dense residual error attention face priori network

Country Status (1)

Country Link
CN (1) CN112085655B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034408B (en) * 2021-04-30 2022-08-12 广东工业大学 Infrared thermal imaging deep learning image denoising method and device
CN113034370A (en) * 2021-05-26 2021-06-25 之江实验室 Face super-resolution method combined with 3D face structure prior
CN113344783B (en) * 2021-06-08 2022-10-21 哈尔滨工业大学 Pyramid face super-resolution network for thermodynamic diagram perception

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080513A (en) * 2019-10-24 2020-04-28 天津中科智能识别产业技术研究院有限公司 Human face image super-resolution method based on attention mechanism

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080513A (en) * 2019-10-24 2020-04-28 天津中科智能识别产业技术研究院有限公司 Human face image super-resolution method based on attention mechanism

Also Published As

Publication number Publication date
CN112085655A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN109903228B (en) Image super-resolution reconstruction method based on convolutional neural network
CN112085655B (en) Face super-resolution method based on dense residual error attention face priori network
CN110136063B (en) Single image super-resolution reconstruction method based on condition generation countermeasure network
CN109087273B (en) Image restoration method, storage medium and system based on enhanced neural network
Jia et al. Ddunet: Dense dense u-net with applications in image denoising
CN111932461B (en) Self-learning image super-resolution reconstruction method and system based on convolutional neural network
CN109636721B (en) Video super-resolution method based on countermeasure learning and attention mechanism
CN114283158A (en) Retinal blood vessel image segmentation method and device and computer equipment
CN110349087B (en) RGB-D image high-quality grid generation method based on adaptive convolution
CN113298716B (en) Image super-resolution reconstruction method based on convolutional neural network
CN105488759B (en) A kind of image super-resolution rebuilding method based on local regression model
CN114494022B (en) Model training method, super-resolution reconstruction method, device, equipment and medium
CN115082306A (en) Image super-resolution method based on blueprint separable residual error network
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
Li Image super-resolution using attention based densenet with residual deconvolution
CN118134779A (en) Infrared and visible light image fusion method based on multi-scale reconstruction transducer and multi-dimensional attention
CN117830900A (en) Unsupervised video object segmentation method
CN116977387A (en) Deformable medical image registration method based on deformation field fusion
CN116228576A (en) Image defogging method based on attention mechanism and feature enhancement
CN109447900A (en) A kind of image super-resolution rebuilding method and device
CN113191947B (en) Image super-resolution method and system
Wang et al. Information purification network for remote sensing image super-resolution
Bera et al. A lightweight convolutional neural network for image denoising with fine details preservation capability
Xu et al. Single Image Super-Resolution Based on Capsule Network
CN115631115B (en) Dynamic image restoration method based on recursion transform

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant