CN112085655A - Face super-resolution method based on dense residual attention face prior network - Google Patents
- Publication number: CN112085655A (application CN202010847791.7A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4046 — Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks
- G06T3/4053 — Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076 — Super-resolution using the original low-resolution images to iteratively correct the high-resolution images
Abstract
The invention discloses a face super-resolution method based on a dense residual attention face prior network, comprising the following steps: separately constructing a skip-connected dense residual attention module, a face structure prior prediction module, and an upsampling module; connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel, then cascading the upsampling module and an image reconstruction layer to construct the dense residual attention face prior network; preprocessing a public data set and dividing the processed data into a training set and a test set; training the dense residual attention face prior network; and inputting the images of the test set into the trained dense residual attention face prior network to output reconstructed high-resolution face images. By training the dense residual attention face prior network, the method performs super-resolution processing on test face images, effectively recovering high-frequency facial details while producing super-resolution face images that preserve identity information.
Description
Technical Field
The invention belongs to the technical field of image processing and face super-resolution, and particularly relates to a face super-resolution method based on a dense residual attention face prior network.
Background
In face recognition tasks, it is often desirable to obtain high-resolution, sharp, noise-free, high-quality images. A high-quality face image not only has a good visual effect but also contains the rich detail information required by subsequent processing. In practical acquisition and transmission systems, however, limitations of the acquisition environment, imaging hardware, network bandwidth, and the like mean that the acquired face images often have a relatively low resolution, which greatly degrades the accuracy of subsequent face recognition. Improving the hardware of the imaging system and controlling the acquisition environment are the most direct ways to improve imaging quality, but this approach not only increases cost, it is also impractical in many application scenarios (such as face analysis in surveillance footage). By contrast, face super-resolution, as a software-level method, is simple to implement, low in cost, and therefore of great practical value.
Face super-resolution, also known as face hallucination, aims to reconstruct a sharp high-resolution face image from a low-resolution image. Generic image super-resolution is an important research branch in the image field; for face super-resolution, however, face images have similar geometric structures and complex texture information, and traditional super-resolution methods cannot achieve good face reconstruction results. Taking the particular nature of face images into account, face super-resolution generally exploits the inherent properties of the face image to optimize the reconstruction result and thereby restore a subjectively realistic and natural face image. Current face super-resolution reconstruction algorithms fall mainly into two categories: interpolation-based and learning-based methods.
Existing face super-resolution methods usually fail to fully exploit face prior information and the correlation of non-local information, which causes the loss of high-frequency facial information and produces high-resolution face images with excessive artifacts. Moreover, most face super-resolution algorithms consider only a mean square error loss; although this loss yields good objective metrics, it cannot account for the inherent information of the face. How to design a novel loss function that recovers an accurate face structure and preserves identity information therefore remains an open problem for face super-resolution algorithms.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the defects of the prior art, a face super-resolution method based on a dense residual attention face prior network is provided that can effectively recover high-frequency facial details and thereby obtain a high-quality face super-resolution result; an identity-invariant feature loss function is also proposed to supervise the network in generating face images with authentic identity information.
The technical scheme is as follows: in order to achieve the above object, the present invention provides a face super-resolution method based on a dense residual attention face prior network, comprising the following steps:
s1: separately constructing a skip-connected dense residual attention module, a face structure prior prediction module, and an upsampling module;
s2: connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel, then cascading an upsampling module and an image reconstruction layer to construct a dense residual attention face prior network;
s3: preprocessing a public data set and dividing the processed data into a training set and a test set;
s4: training the dense residual attention face prior network with the data in the training set;
s5: inputting the images of the test set into the trained dense residual attention face prior network and outputting reconstructed high-resolution face images.
Further, the construction process of the skip-connected dense residual attention module in step S1 is specifically as follows:
a1: constructing a cascaded residual unit: the cascaded residual unit consists of convolution layers, batch normalization layers, activation functions, and a skip connection;
a2: constructing a non-local attention unit;
a3: forming a residual attention module from the cascaded residual unit and the non-local attention unit;
a4: constructing the skip-connected dense residual attention module: it consists of four residual attention modules and a skip connection; the residual attention modules are connected end to end, and the input of the first residual attention module is combined via the skip connection with the output of the last residual attention module to form the final output of the skip-connected dense residual attention module.
Further, the construction process of the face structure prior prediction module in step S1 is specifically as follows:
b1: constructing a spatial transformer module;
b2: constructing a conventional hourglass network unit;
b3: stacking one spatial transformer module and four consecutive hourglass network units, with intermediate supervision applied to each, to form the face structure prior prediction module.
Further, the construction process of the upsampling module in step S1 is specifically as follows:
c1: introducing a sub-pixel convolution layer;
c2: setting an 8x upsampling factor for the introduced sub-pixel convolution layer.
Further, the construction process of the dense residual attention face prior network in step S2 is specifically as follows:
d1: connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel, and concatenating their output features along the feature channel dimension;
d2: on the basis of D1, cascading the upsampling module and the image reconstruction layer to complete the construction of the dense residual attention face prior network.
Further, the specific process of step S3 is as follows:
e1: preprocessing the public data set and normalizing each image matrix element to a pixel value in [0,1] to obtain normalized image matrices;
e2: randomly rotating the image matrices to produce non-aligned image data;
e3: performing bicubic interpolation downsampling on the augmented image data, reducing the length and width of each image by a factor of 8 according to the magnification factor, to obtain paired low-resolution and high-resolution images;
e4: randomly shuffling the low-resolution and high-resolution images in pairs, selecting 80% of the data as the training data set and the remainder as the test data set.
Further, the training process of the dense residual attention face prior network in step S4 is as follows:
f1: constructing a mean square error function as a loss function;
f2: constructing an identity-invariant feature loss function;
f3: constructing a face structure loss function;
f4: initializing the parameters of the dense residual attention face prior network and setting the training parameters;
f5: initializing the parameters of the dense residual attention face prior network from a Gaussian distribution with mean 0 and standard deviation 0.001, and initializing the biases to 0; setting the learning rate, the number of iterations, and the batch size;
f6: training the network with the low-resolution images in the training data set and the corresponding high-resolution images, and updating the parameters of the dense residual attention face prior network through an optimization algorithm;
f7: training the dense residual attention face prior network until the global loss value e < 10^-3 or the number of iterations t > 120, then saving the trained network model.
Further, the step A2 is specifically as follows:
The non-local attention unit consists of three sub-branches, connected to the g, h, and z convolution layers respectively. The outputs of the two sub-branches connected to the g and h convolution layers are reshaped into matrices and multiplied together, and the product is fed into a classifier. The classifier output is then matrix-multiplied with the reshaped output of the sub-branch connected to the z convolution layer. The result is passed through the u convolution layer, reshaped back, and added element-wise to the original input of the non-local unit.
Further, the step B3 is specifically as follows: the face structure prior prediction module consists of one spatial transformer and four conventional hourglass networks; the spatial transformer aligns the input non-aligned low-resolution face image; the four hourglass networks are connected end to end, successively producing more accurate face prior predictions; the output of each hourglass network is supervised by the ground truth of the face prior.
Further, the identity-invariant feature loss function in step F2 is:

L_id = || φ(f) - φ(h) ||_2^2

where φ(·) represents the feature vector extracted by the average pooling layer of a trained ResNet-50 network, f represents the generated super-resolution face, and h represents the original high-resolution face picture.
Further, the face structure loss function in step F3 is:
wherein Hk(fi) Thermodynamic diagrams, H, representing k-th personal face landmark points predicted on intermediate generation features by a facial structure a priori prediction modulek(hi) Represents the true value of the kth personal Face landmark point thermodynamic diagram computed on the original high resolution Face picture by a trained Face Alignment Network (FAN), and P represents the total number of selected Face landmark points.
Beneficial effects: compared with the prior art, the invention introduces a residual attention module that considers non-local features into the neural network model, stacks these residual attention modules, and designs a face structure prior prediction module, so that both non-local attention features and face structure prior information are extracted; the extracted non-local attention features and face structure prior features are then concatenated along the channel dimension and fed into the upsampling module. This design gives the neural network model a strong ability to recover high-frequency information, yielding high-quality face super-resolution results; moreover, the model is only of moderate size and executes very quickly.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a detailed structural diagram of a dense residual attention face prior network;
FIG. 3 is a detailed structural diagram of the skip-connected dense residual attention module;
FIG. 4 is a detailed structural diagram of a residual unit;
FIG. 5 is a detailed structural diagram of a non-local attention unit;
FIG. 6 is a graph showing the comparison between the test results and those of other similar methods.
Detailed Description
The invention is further elucidated with reference to the drawings and the embodiments.
The invention provides a face super-resolution method based on a dense residual attention face prior network; with reference to FIG. 1, the method comprises the following steps:
Step 1: constructing a skip-connected dense residual attention module, specifically comprising steps 11-14:
Step 11: constructing a residual unit: the residual unit consists of convolution layers, batch normalization layers, activation functions, and a skip connection, as shown in fig. 4;
Step 12: constructing a non-local attention unit: the non-local attention unit consists of three sub-branches, connected to the g, h, and z convolution layers respectively. The outputs of the two sub-branches connected to the g and h convolution layers are reshaped into matrices and multiplied together, and the product is fed into a classifier. The classifier output is then matrix-multiplied with the reshaped output of the sub-branch connected to the z convolution layer. The result is passed through the u convolution layer, reshaped back, and added element-wise to the original input of the non-local unit, as shown in fig. 5;
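The matrix operations of step 12 can be sketched in NumPy. This is a minimal sketch under stated assumptions: the 1x1 convolutions of the g, h, z, and u branches reduce to matrix products on a flattened feature map, and the "classifier" is taken to be a softmax; all weight names are illustrative, not from the patent.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_attention(x, Wg, Wh, Wz, Wu):
    """Non-local attention unit: three sub-branches (g, h, z), an affinity
    matrix built by matrix multiplication, a softmax classifier, a second
    matrix multiplication, a final u convolution, and an element-wise
    residual add with the original input.

    x: feature map of shape (C, H, W); Wg, Wh, Wz: (Cr, C); Wu: (C, Cr)."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)            # "matrix deformation" (reshape)
    g = Wg @ flat                         # g branch, shape (Cr, HW)
    h = Wh @ flat                         # h branch, shape (Cr, HW)
    z = Wz @ flat                         # z branch, shape (Cr, HW)
    affinity = softmax(g.T @ h, axis=-1)  # (HW, HW) attention weights
    out = z @ affinity.T                  # aggregate non-local context
    out = Wu @ out                        # u convolution restores C channels
    return out.reshape(C, H, W) + x       # residual add with original input
```

The residual add at the end means that zeroed branch weights reduce the unit to an identity mapping, which is the usual safe initialization for attention blocks.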
Step 13: cascading the residual unit and the non-local attention unit to form a residual attention module;
Step 14: constructing the skip-connected dense residual attention module: it consists of four residual attention modules and a skip connection; the residual attention modules are connected end to end, and the input of the first residual attention module is combined via the skip connection with the output of the last residual attention module to form the final output of the skip-connected dense residual attention module, as shown in fig. 3.
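The end-to-end chaining plus global skip connection of step 14 can be expressed compactly; in this sketch the four residual attention modules are passed in as callables rather than implemented in full.

```python
import numpy as np

def dense_residual_attention_module(x, blocks):
    """Skip-connected dense residual attention module: the residual
    attention blocks are applied end to end, then the module's input is
    combined with the last block's output via the skip connection."""
    out = x
    for block in blocks:   # four residual attention modules in series
        out = block(out)
    return out + x         # skip connection forms the final output
```

With four blocks that each halve their input, the output is x/16 + x, which illustrates how the skip path preserves the low-frequency content regardless of what the stacked blocks do.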
Step 2: constructing a face structure prior prediction module, comprising steps 21-23:
Step 21: constructing a spatial transformer module;
Step 22: constructing a conventional hourglass network unit;
Step 23: stacking one spatial transformer module and four consecutive hourglass network units, with intermediate supervision applied to each, to form the face structure prior prediction module. Specifically: the face structure prior prediction module consists of one spatial transformer and four conventional hourglass networks; the spatial transformer aligns the input non-aligned low-resolution face image; the four hourglass networks are connected end to end, successively producing more accurate face prior predictions; the output of each hourglass network is supervised by the ground truth of the face prior (the ground-truth heatmaps of the 68 facial landmark points); for the supervision function, refer to step 64;
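The structure of step 23 can be sketched as follows. The spatial transformer and hourglass units are passed in as callables, since this is a structural sketch only; each hourglass output is collected so that it can be supervised against the ground-truth landmark heatmaps.

```python
def face_prior_prediction(x, spatial_transformer, hourglasses):
    """One spatial transformer followed by hourglass units chained end to
    end; every intermediate output is kept for intermediate supervision."""
    out = spatial_transformer(x)   # align the non-aligned input
    intermediate = []
    for hg in hourglasses:
        out = hg(out)              # successively refined prior prediction
        intermediate.append(out)   # hooked for intermediate supervision
    return out, intermediate
```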
and step 3: constructing an upsampling module comprising steps 31 and 32:
step 31: introducing a sub-pixel convolution layer;
step 32: an 8-fold upsampling parameter is set for the incoming sub-pixel convolution layer.
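The rearrangement step of a sub-pixel convolution layer (the part after the convolution itself) can be sketched in NumPy as follows; with r = 8 it realizes the 8x upsampling set in step 32.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r): each
    block of r*r channels supplies the r x r sub-pixel grid of one output
    channel, which is how a sub-pixel convolution layer upsamples."""
    Cr2, H, W = x.shape
    C = Cr2 // (r * r)
    assert C * r * r == Cr2, "channel count must be divisible by r*r"
    x = x.reshape(C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)
```

Because the layer trades channels for spatial resolution, an 8x upsampler needs the preceding convolution to emit 64 channels per output channel.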
Step 4: connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel, then cascading an upsampling module and an image reconstruction layer to construct the dense residual attention face prior network, comprising steps 41 and 42:
Step 41: connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel (concatenating their output features along the feature channel dimension);
Step 42: on the basis of step 41, cascading the upsampling module and the image reconstruction layer, as shown in fig. 2.
Step 5: preprocessing the public data set and dividing the processed data into a training set and a test set, comprising steps 51-54:
Step 51: preprocessing the public CelebFaces Attributes (CelebA) data set and normalizing each image matrix element to a pixel value in [0,1] to obtain normalized image matrices;
Step 52: randomly rotating the image matrices to produce non-aligned image data;
Step 53: performing bicubic interpolation downsampling on the augmented image data, reducing the length and width of each image by a factor of 8 according to the magnification factor, to obtain paired low-resolution and high-resolution images;
Step 54: randomly shuffling the low-resolution and high-resolution images in pairs, selecting 80% of the data as the training data set and the remainder as the test data set.
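Steps 51-54 can be sketched as a small pipeline. This is a simplified sketch: plain striding stands in for the bicubic interpolation used in the patent, the random-rotation augmentation of step 52 is omitted for brevity, and the seed parameter is an illustrative addition.

```python
import numpy as np

def preprocess_and_split(images, scale=8, train_frac=0.8, seed=0):
    """Normalize pixel values to [0, 1], downsample each image by `scale`
    in height and width, shuffle the LR/HR pairs together, and split
    80/20 into training and test sets."""
    hr = [img.astype(np.float64) / 255.0 for img in images]  # step 51
    lr = [im[::scale, ::scale] for im in hr]                 # step 53 stand-in
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(hr))                         # step 54: shuffle pairs
    cut = int(train_frac * len(hr))
    pairs = [(lr[i], hr[i]) for i in order]
    return pairs[:cut], pairs[cut:]
```

Shuffling the index array rather than the two lists separately is what keeps each low-resolution image paired with its high-resolution counterpart.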
Step 6: training the dense residual attention face prior network with the training data, comprising steps 61-67:
Step 61: constructing a mean square error function as a loss function;
Step 62: constructing an identity-invariant feature loss function, calculated as follows:

L_id = || φ(f) - φ(h) ||_2^2

where φ(·) represents the feature vector extracted by the average pooling layer of a trained ResNet-50 network, f represents the generated super-resolution face, and h represents the original high-resolution face picture;
Step 63: constructing a face structure loss function, calculated as follows:

L_struct = (1/P) * Σ_{k=1}^{P} || H_k(f_i) - H_k(h_i) ||_2^2

where H_k(f_i) represents the heatmap of the k-th face landmark point predicted by the face structure prior prediction module on the intermediately generated features, and H_k(h_i) represents the ground-truth heatmap of the k-th face landmark point computed on the original high-resolution face picture by a trained Face Alignment Network (FAN). P represents the total number of selected face landmark points, set to 68 in this embodiment;
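The two loss functions of steps 62 and 63 can be computed as follows. A sketch under stated assumptions: the ResNet-50 features and FAN heatmaps are supplied as arrays, and the squared-L2 form and the 1/P averaging are assumed normalizations, since the extracted patent text omits the formulas.

```python
import numpy as np

def identity_loss(phi_sr, phi_hr):
    """Identity-invariant feature loss: squared L2 distance between the
    average-pool feature vectors of the super-resolved face and the
    original high-resolution face."""
    return float(np.sum((phi_sr - phi_hr) ** 2))

def face_structure_loss(H_sr, H_hr):
    """Face structure loss: squared error between the P predicted landmark
    heatmaps and the ground-truth heatmaps, averaged over the P landmarks
    (P = 68 in this embodiment)."""
    P = H_sr.shape[0]
    return float(np.sum((H_sr - H_hr) ** 2) / P)
```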
Step 64: initializing the parameters of the dense residual attention face prior network and setting the training parameters;
Step 65: initializing the parameters of the dense residual attention face prior network from a Gaussian distribution with mean 0 and standard deviation 0.001, and initializing the biases to 0; setting the learning rate, the number of iterations, and the batch size;
Step 66: training the network with the low-resolution images in the training data set and the corresponding high-resolution images, and updating the parameters of the dense residual attention face prior network through an optimization algorithm;
Step 67: training the dense residual attention face prior network until the global loss value e < 10^-3 or the number of iterations t > 120, then saving the trained network model.
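Step 67's stopping criterion can be stated directly in code:

```python
def should_stop(global_loss, iteration, loss_tol=1e-3, max_iters=120):
    """Stop training when the global loss e falls below 1e-3 or the
    iteration count t exceeds 120, after which the model is saved."""
    return global_loss < loss_tol or iteration > max_iters
```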
Step 7: inputting the images in the test data set into the trained dense residual attention face prior network and outputting the reconstructed high-resolution face images.
Through the above steps, the dense residual attention face prior network is trained and then used to perform super-resolution processing on the test face images.
In this embodiment, the face image finally output by the method is compared with the face images obtained by other methods; the comparison is shown in fig. 6, where c is the ground-truth image, a, b, d, e, f, and g are the images obtained by the LR, interpolation, TDAE, CBN, SRGAN, and VDSR methods, respectively, and j is the image obtained by the method of the invention.
Claims (10)
1. A face super-resolution method based on a dense residual attention face prior network, characterized in that the method comprises the following steps:
s1: separately constructing a skip-connected dense residual attention module, a face structure prior prediction module, and an upsampling module;
s2: connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel, then cascading an upsampling module and an image reconstruction layer to construct a dense residual attention face prior network;
s3: preprocessing a public data set and dividing the processed data into a training set and a test set;
s4: training the dense residual attention face prior network with the data in the training set;
s5: inputting the images of the test set into the trained dense residual attention face prior network and outputting reconstructed high-resolution face images.
2. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 1, wherein the construction process of the skip-connected dense residual attention module in step S1 is specifically as follows:
a1: constructing a cascaded residual unit: the cascaded residual unit consists of convolution layers, batch normalization layers, activation functions, and a skip connection;
a2: constructing a non-local attention unit;
a3: forming a residual attention module from the cascaded residual unit and the non-local attention unit;
a4: constructing the skip-connected dense residual attention module: it consists of four residual attention modules and a skip connection; the residual attention modules are connected end to end, and the input of the first residual attention module is combined via the skip connection with the output of the last residual attention module to form the final output of the skip-connected dense residual attention module.
3. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 1, wherein the construction process of the face structure prior prediction module in step S1 is specifically as follows:
b1: constructing a spatial transformer module;
b2: constructing a conventional hourglass network unit;
b3: stacking one spatial transformer module and four consecutive hourglass network units, with intermediate supervision applied to each, to form the face structure prior prediction module.
4. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 1, wherein the construction process of the upsampling module in step S1 is specifically as follows:
c1: introducing a sub-pixel convolution layer;
c2: setting an 8x upsampling factor for the introduced sub-pixel convolution layer.
5. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 1, wherein the construction process of the dense residual attention face prior network in step S2 is specifically as follows:
d1: connecting the skip-connected dense residual attention module and the face structure prior prediction module in parallel, and concatenating their output features along the feature channel dimension;
d2: on the basis of D1, cascading the upsampling module and the image reconstruction layer to complete the construction of the dense residual attention face prior network.
6. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 1, wherein the specific process of step S3 is as follows:
e1: preprocessing the public data set and normalizing each image matrix element to a pixel value in [0,1] to obtain normalized image matrices;
e2: randomly rotating the image matrices to produce non-aligned image data;
e3: performing bicubic interpolation downsampling on the augmented image data, reducing the length and width of each image by a factor of 8 according to the magnification factor, to obtain paired low-resolution and high-resolution images;
e4: randomly shuffling the low-resolution and high-resolution images in pairs, selecting 80% of the data as the training data set and the remainder as the test data set.
7. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 1, wherein the training process of the dense residual attention face prior network in step S4 is as follows:
f1: constructing a mean square error function as a loss function;
f2: constructing an identity-invariant feature loss function;
f3: constructing a face structure loss function;
f4: initializing the parameters of the dense residual attention face prior network and setting the training parameters;
f5: initializing the parameters of the dense residual attention face prior network from a Gaussian distribution with mean 0 and standard deviation 0.001, and initializing the biases to 0; setting the learning rate, the number of iterations, and the batch size;
f6: training the network with the low-resolution images in the training data set and the corresponding high-resolution images, and updating the parameters of the dense residual attention face prior network through an optimization algorithm;
f7: training the dense residual attention face prior network until the global loss value e < 10^-3 or the number of iterations t > 120, then saving the trained network model.
8. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 2, wherein the step A2 is specifically as follows:
The non-local attention unit consists of three sub-branches, connected to the g, h, and z convolution layers respectively. The outputs of the two sub-branches connected to the g and h convolution layers are reshaped into matrices and multiplied together, and the product is fed into a classifier. The classifier output is then matrix-multiplied with the reshaped output of the sub-branch connected to the z convolution layer. The result is passed through the u convolution layer, reshaped back, and added element-wise to the original input of the non-local unit.
9. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 3, wherein step B3 specifically comprises: the face structure prior prediction module consists of 1 spatial transformer and 4 conventional hourglass networks; the spatial transformer aligns the input unaligned low-resolution face image; the four hourglass networks are connected end-to-end, progressively producing more accurate face prior predictions; the output of each hourglass network is supervised by the ground-truth face prior.
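The end-to-end chaining with per-stack supervision can be sketched as follows; `hourglass_stub` is a hypothetical stand-in for a real hourglass network, kept trivial so the wiring (each stack refines the features and its heatmap head is supervised against the ground-truth prior) is visible.

```python
import numpy as np

def hourglass_stub(features):
    """Stand-in for one hourglass module: returns refined features and a
    predicted landmark-heatmap tensor (hypothetical simplification)."""
    refined = 0.5 * features + 0.5 * features.mean()
    heatmaps = np.tanh(refined)          # fake heatmap prediction head
    return refined, heatmaps

def prior_prediction(features, gt_heatmaps, n_stacks=4):
    """Chain n_stacks hourglass modules end-to-end; every stack's heatmap
    output is supervised against the ground-truth prior, as in step B3."""
    losses = []
    for _ in range(n_stacks):
        features, heatmaps = hourglass_stub(features)
        losses.append(float(np.mean((heatmaps - gt_heatmaps) ** 2)))
    return heatmaps, losses
```

The intermediate losses are what distinguishes this stacked design from a single deep network: gradients reach every stage directly instead of only through the final output.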
10. The face super-resolution method based on a dense residual attention face prior network as claimed in claim 7, wherein the identity-invariant feature loss function in step F2 is:

L_identity = ||φ(f_i) − φ(h_i)||²

where φ(·) represents the feature vector extracted by the average pooling layer of the trained ResNet50 network, f_i represents the generated super-resolution face, and h_i represents the original high-resolution face picture;
the face structure loss function in step F3 is:

L_structure = Σ_{k=1}^{P} ||H_k(f_i) − H_k(h_i)||²

where H_k(f_i) represents the heatmap of the k-th face landmark point predicted on the intermediate generated features by the face structure prior prediction module, H_k(h_i) represents the ground-truth heatmap of the k-th face landmark point computed on the original high-resolution face picture by the trained Face Alignment Network, and P represents the total number of selected face landmark points.
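Assuming the loss sums the squared heatmap errors over the P landmark points (the original equation is not reproduced in this text, so the normalization is an assumption), a sketch:

```python
import numpy as np

def face_structure_loss(pred_heatmaps, gt_heatmaps):
    """Face structure loss: sum over the P landmark heatmaps of the squared
    error between predicted H_k(f_i) and ground-truth H_k(h_i).

    Both inputs have shape (P, H, W): one heatmap per landmark point."""
    pred = np.asarray(pred_heatmaps, dtype=float)
    gt = np.asarray(gt_heatmaps, dtype=float)
    P = pred.shape[0]
    # per-landmark squared L2 error, accumulated over all P landmarks
    return float(sum(np.sum((pred[k] - gt[k]) ** 2) for k in range(P)))
```

The ground-truth heatmaps would come from a pretrained Face Alignment Network evaluated on the high-resolution image, as the claim describes.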
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010847791.7A CN112085655B (en) | 2020-08-21 | 2020-08-21 | Face super-resolution method based on dense residual error attention face priori network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112085655A true CN112085655A (en) | 2020-12-15 |
CN112085655B CN112085655B (en) | 2024-04-26 |
Family
ID=73728482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010847791.7A Active CN112085655B (en) | 2020-08-21 | 2020-08-21 | Face super-resolution method based on dense residual error attention face priori network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112085655B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111080513A (en) * | 2019-10-24 | 2020-04-28 | 天津中科智能识别产业技术研究院有限公司 | Human face image super-resolution method based on attention mechanism |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113034408A (en) * | 2021-04-30 | 2021-06-25 | 广东工业大学 | Infrared thermal imaging deep learning image denoising method and device |
CN113034408B (en) * | 2021-04-30 | 2022-08-12 | 广东工业大学 | Infrared thermal imaging deep learning image denoising method and device |
CN113034370A (en) * | 2021-05-26 | 2021-06-25 | 之江实验室 | Face super-resolution method combined with 3D face structure prior |
CN113344783A (en) * | 2021-06-08 | 2021-09-03 | 哈尔滨工业大学 | Pyramid face super-resolution network for thermodynamic diagram perception |
Also Published As
Publication number | Publication date |
---|---|
CN112085655B (en) | 2024-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109903228B (en) | Image super-resolution reconstruction method based on convolutional neural network | |
CN110136063B (en) | Single image super-resolution reconstruction method based on condition generation countermeasure network | |
Ahn et al. | Image super-resolution via progressive cascading residual network | |
CN112085655B (en) | Face super-resolution method based on dense residual error attention face priori network | |
Jia et al. | Ddunet: Dense dense u-net with applications in image denoising | |
CN109636721B (en) | Video super-resolution method based on countermeasure learning and attention mechanism | |
CN115358932B (en) | Multi-scale feature fusion face super-resolution reconstruction method and system | |
Rios et al. | Feature visualization for 3D point cloud autoencoders | |
CN113554058A (en) | Method, system, device and storage medium for enhancing resolution of visual target image | |
CN108734677A (en) | A kind of blind deblurring method and system based on deep learning | |
CN115082306A (en) | Image super-resolution method based on blueprint separable residual error network | |
Li | Image super-resolution using attention based densenet with residual deconvolution | |
Zang et al. | Cascaded dense-UNet for image super-resolution | |
CN113379606A (en) | Face super-resolution method based on pre-training generation model | |
CN117830900A (en) | Unsupervised video object segmentation method | |
CN113362239A (en) | Deep learning image restoration method based on feature interaction | |
CN116228576A (en) | Image defogging method based on attention mechanism and feature enhancement | |
CN116029905A (en) | Face super-resolution reconstruction method and system based on progressive difference complementation | |
CN113191947B (en) | Image super-resolution method and system | |
Zhang et al. | R2h-ccd: Hyperspectral imagery generation from rgb images based on conditional cascade diffusion probabilistic models | |
CN114862699A (en) | Face repairing method, device and storage medium based on generation countermeasure network | |
CN114332103A (en) | Image segmentation method based on improved FastFCN | |
Wang et al. | Information purification network for remote sensing image super-resolution | |
Liu et al. | A novel convolutional neural network architecture for image super-resolution based on channels combination | |
CN117670727B (en) | Image deblurring model and method based on residual intensive U-shaped network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||