CN111080521A - Face image super-resolution method based on structure prior - Google Patents
- Publication number
- CN111080521A (application CN201911271596.8A)
- Authority
- CN
- China
- Prior art keywords
- resolution
- network
- face image
- image
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4053—Super resolution, i.e. output image resolution higher than sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a face image super-resolution method based on structure prior, which comprises the following steps: preprocessing the images of a face image data set to obtain a training data set and a test data set; computing a structure prior loss between the face image generated by the generation network and the real face image, so that the generated face image preserves a plausible facial topology. The trained model comprises a generation network and a discrimination network, the generation network containing 16 residual blocks; training yields a face image super-resolution model that can super-resolve a low-resolution face image into a high-resolution face image. The trained model is then used to super-resolve the low-resolution images in the test data set, and its super-resolution performance is evaluated. The invention can obviously improve the visual quality of the generated high-resolution images.
Description
Technical Field
The invention relates to the technical fields of computer vision, pattern recognition, machine learning, image super-resolution and the like, in particular to a face image super-resolution method based on structure prior.
Background
The face image super-resolution task refers to inferring and recovering the corresponding high-resolution face image from a given low-resolution face image. Face image super-resolution is an important task in computer vision and image processing and has attracted wide attention from AI companies and the research community. It can be applied to many real-world scenarios, such as high-speed rail security checks, access control systems and attendance clock-in systems.
Besides improving the visual quality of face images, the face image super-resolution task also assists other computer vision and image processing tasks, such as face recognition, face makeup editing and face rotation. The face image super-resolution task therefore has important research significance.
This problem remains challenging because it is typically ill-posed: given a low-resolution face image, there may be multiple corresponding high-resolution face images.
Therefore, existing face image super-resolution technology awaits further improvement.
Disclosure of Invention
The invention aims to provide a structure-prior-based face image super-resolution method that addresses the above technical defects in the prior art and can generate face images with rich texture details.
The technical scheme adopted for realizing the purpose of the invention is as follows:
a face image super-resolution method based on structure prior comprises the following steps:
s1, preprocessing images in a face image data set to obtain a training data set and a testing data set:
s2, training a model by using a training data set to obtain a face image super-resolution model which can super-resolve a low-resolution face image into a high-resolution face image, wherein the face image super-resolution model comprises a generating network, a face matching image generating network and a judging network; generating a network containing 16 residual blocks; the face matching image generation network is a BiSeNet network;
using a low-resolution face image as the input of a model, using a corresponding high-resolution image as supervision, and training a generation network in the model;
inputting the target high-resolution face image and the high-resolution face image generated by the generation network into the discrimination network, which judges whether the input images are real or fake; training of the model is complete after it has iterated multiple times and stabilized;
inputting the target high-resolution face image and the high-resolution face image generated by the generation network into the face matching image generation network to obtain, respectively, a matching image of the target high-resolution face image and a matching image of the generated face image; constraining the Euclidean distance between the two matching images so that the positions of the facial features in the generated face image meet the target requirement;
and S3, using the trained face image super-resolution model to super-resolve the low-resolution images in the test data set and evaluating its super-resolution performance.
The structure-prior-based face image super-resolution method uses residual blocks as the basis for constructing the network and combines multiple loss functions, so the model converges faster, performs better and generalizes more strongly; it can generate face images with rich texture details.
The generation network used in the invention improves model capacity, accelerates training and improves the generalization ability of the model; the introduced discrimination network makes the generated high-resolution face image closer to a real high-resolution face image, obviously improving its visual quality.
The invention introduces structure prior information of the human face. Matching images of the target high-resolution face image and of the generated image are produced by the face matching image generation network, and by penalizing the Euclidean distance between the two matching images the topological structure of the generated face is kept consistent with the ground truth.
Drawings
FIG. 1 shows test results of the present invention on face images in the test data set, with the input low-resolution face image on the left, the generated high-resolution face image in the middle, and the target high-resolution face image on the right.
FIG. 2 is a flow chart of the super-resolution method of the face image based on structure prior of the present invention;
wherein: LR represents the input low-resolution image, Conv a convolutional layer, PixelShuffle an up-sampling module, HR_rec the generated high-resolution image, HR_tar the target high-resolution image, D the discrimination network, Matching the face matching image generation network, RB a residual block, L_prior the structure prior loss function, and ReLU the activation function.
Fig. 3 is an example of generating a network output for a face matching graph. The left side of the figure is the input image and the right side is the generated image of the face matching graph network.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The structure-prior-based face image super-resolution method of the invention learns a set of highly complex nonlinear transformations for mapping a low-resolution face image to a high-resolution image while preserving good texture and identity characteristics.
As shown in fig. 2, the method for super-resolution of face images based on structure prior includes the following steps:
step S1, the face image in the CelebA face dataset is preprocessed first.
Firstly, the original high-resolution face images are cropped in a uniform aligned-cropping manner, keeping only the face region;
secondly, bilinear downsampling is applied to the aligned and cropped high-resolution face images to obtain the corresponding low-resolution face images;
thirdly, data augmentation, comprising random horizontal flipping and random color transformation, is applied to the resulting low-/high-resolution face image pairs to increase the number of images in the training data set;
fourthly, the LFW data set is selected as the test set and processed in the same manner as the CelebA preprocessing. The LFW data set is used to test the generalization performance of the model.
And step S2, training the structure-prior-based face image super-resolution model with the training data from step S1, so as to complete the face image super-resolution task.
In the generation network of the model, shallow features are first extracted by a convolutional layer, deep features are then extracted by 16 residual blocks, the upsampling operation of the PixelShuffle layers brings the generated face image to the same size as the ground-truth high-resolution face image, and finally one convolutional layer scales the number of channels to 3. No normalization (regularization) layer is contained in the residual blocks.
The number of input channels, number of output channels, filter size, step size and padding of the first convolutional layer of the dense residual network are 3, 64, 3, 1 and 1, respectively. Each residual block contains 2 convolutional layers; the input and output channels of both convolutional layers in a dense residual block are 64, with filter size, step size and padding of 3, 1 and 1, respectively. The number of input channels, number of output channels, filter size, step size and padding of the last convolutional layer are 64, 3, 3, 1 and 1, respectively. Each PixelShuffle module comprises a convolutional layer, a PixelShuffle layer and a ReLU layer.
The invention comprises 2 PixelShuffle layers. The input to each convolutional layer in the residual block is the sum of the outputs of all preceding convolutional layers. In the residual block, a ReLU activation layer follows the first convolutional layer. The number of residual blocks and the number of channels in the residual blocks can be set according to the actual situation.
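The two building blocks described above can be sketched schematically as follows. This is an illustration only: the 3x3 convolutions are replaced by toy per-pixel (1x1) linear maps so that the skip connection, the single ReLU after the first convolution, and the channel-to-space rearrangement of PixelShuffle stay visible in a few lines.

```python
import numpy as np

def conv1x1(x, w):
    """Toy 1x1 convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.einsum('oi,ihw->ohw', w, x)

def residual_block(x, w1, w2):
    """y = x + conv(ReLU(conv(x))): a skip connection around two
    convolutions, ReLU only after the first, no normalization layer."""
    h = np.maximum(conv1x1(x, w1), 0.0)   # ReLU activation
    return x + conv1x1(h, w2)             # residual (skip) connection

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) -> (C, H*r, W*r), as in the upsampling layer."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)        # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

With zero weights the residual block reduces to the identity, which is why stacking many such blocks does not degrade the signal path.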
The discrimination network is formed by stacking convolutional layers, BN layers and activation layers; the filter size, step size and padding of the convolutional layers are 3, 1 and 1, respectively, and there are 7 convolutional layers. This part extracts the image features, after which two fully connected layers perform classification. The input of the discrimination network is the high-resolution face image generated by the generation network together with the real target high-resolution face image y; the network structure of the discriminator can be freely set according to requirements.
In the step, a low-resolution face image is used as the input of a model, a real high-resolution face image is used as a generation target, and a generation network and a discrimination network in the model are alternately trained to complete a face image super-resolution task.
Specifically, the low-resolution face image is super-resolved by the generation network of the model to obtain a generated high-resolution face image, and the reconstruction loss $L_2$ between it and the real high-resolution face image is calculated:

$L_2=\mathbb{E}\left[\left\|F_{generator}(x)-y\right\|_2\right]$

wherein x and y are a low-resolution face image and the corresponding high-resolution face image sampled from the low-resolution image set X and the high-resolution image set Y, respectively, E(·) denotes the averaging operation, $\|\cdot\|_2$ denotes the $L_2$ norm, and $F_{generator}$ is the mapping function of the generation network.
The generated high-resolution face image is then used as input to the discrimination network to calculate the adversarial loss function $L_{adv}$:

$L_{adv}=\mathbb{E}_{x\sim p(x)}\left[-\log D(G(x))\right]$

wherein E(·) denotes the averaging operation, $x\sim p(x)$ denotes sampling low-resolution images from the distribution p(x), D(·) denotes the mapping function of the discrimination network, and G(x) denotes the high-resolution face image generated by the generation network.
The generated high-resolution face image and the target high-resolution face image are used as input to the face matching image generation network, and the Euclidean distance between the two output matching images, i.e. the structure prior loss function $L_{prior}$, is calculated:

$L_{prior}=\mathbb{E}\left[\left\|\Phi\left(F_{generator}(x)\right)-\Phi(y)\right\|_2\right]$

wherein x and y are a low-resolution face image and the corresponding high-resolution face image sampled from the low-resolution image set X and the high-resolution image set Y, respectively, E(·) denotes the averaging operation, $\|\cdot\|_2$ denotes the $L_2$ norm, $F_{generator}$ is the mapping function of the generation network, and $\Phi$ is the mapping function of the face matching image generation network.
The discrimination network judges whether the input generated high-resolution face image and target high-resolution face image are real or fake, and the discriminator loss function $L_D$ is calculated; this loss function is only used to update the parameters of the discrimination network. Training of the model is complete after it has iterated multiple times and stabilized.

$L_D=\mathbb{E}_{y\sim p(y)}\left[-\log D(y)\right]+\mathbb{E}_{x\sim p(x)}\left[-\log\left(1-D(G(x))\right)\right]$

wherein E(·) denotes the averaging operation, $y\sim p(y)$ denotes sampling target high-resolution images from the distribution p(y), D(·) denotes the mapping function of the discrimination network, $x\sim p(x)$ denotes sampling low-resolution images from the distribution p(x), and G(x) denotes the high-resolution image generated by the generation network.
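The loss terms above can be written compactly as follows. This is an illustrative NumPy sketch: `d_on_real`, `d_on_generated`, `phi_generated` and `phi_target` are placeholder inputs standing in for the outputs of the discrimination network D and the matching-map network Φ, since only the loss formulas themselves are shown here.

```python
import numpy as np

def l2_loss(generated, target):
    """Reconstruction loss L2: mean of the per-image l2 norm."""
    diff = (generated - target).reshape(generated.shape[0], -1)
    return np.mean(np.linalg.norm(diff, axis=1))

def adversarial_loss(d_on_generated, eps=1e-8):
    """Generator adversarial loss: E[-log D(G(x))]."""
    return np.mean(-np.log(d_on_generated + eps))

def prior_loss(phi_generated, phi_target):
    """Structure prior loss: l2 distance between the two matching maps."""
    return l2_loss(phi_generated, phi_target)

def discriminator_loss(d_on_real, d_on_generated, eps=1e-8):
    """Discriminator loss L_D: E[-log D(y)] + E[-log(1 - D(G(x)))]."""
    return (np.mean(-np.log(d_on_real + eps))
            + np.mean(-np.log(1.0 - d_on_generated + eps)))

def generator_objective(l2, l_adv, l_prior, lam1=1.0, lam2=1.0, lam3=1.0):
    """Weighted generator objective with balance factors (all 1 here)."""
    return lam1 * l2 + lam2 * l_adv + lam3 * l_prior
```

Note that `discriminator_loss` only ever updates the discriminator, matching the description: the generator sees the data only through its own three terms.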
In the invention, exploiting the strong nonlinear fitting capability of convolutional neural networks, a neural network model taking a low-resolution face image as input is constructed for the face image super-resolution task.
In particular, the generation network of the model is built on residual blocks, so the model has better capacity and is less prone to gradient vanishing and explosion. In the invention, the generation network incorporates the structure prior of the human face. Thus, with the network shown in fig. 2, a face image super-resolution model with good perceptual quality can be trained as a generative adversarial network. In the testing stage, the low-resolution face images of the test set are used as model input; only the generation network of the model is used, the discrimination network does not participate in testing, and the generated results are shown in fig. 1.
Specifically, the structure-prior-based face image super-resolution model comprises three networks: the generation network, the face matching image generation network and the discrimination network. The objective function of the generation network of the model is as follows:

$L_G=\lambda_1 L_2+\lambda_2 L_{adv}+\lambda_3 L_{prior}$

wherein $\lambda_1,\lambda_2,\lambda_3$ are balance factors adjusting the weight of each loss function. In the present invention, $\lambda_1,\lambda_2,\lambda_3$ are all 1.
The generation network mainly accomplishes the face image super-resolution task; the final goal of the model is for the three loss functions $L_2$, $L_{prior}$ and $L_{adv}$ to be minimized and remain stable.
The three networks of the face image super-resolution model based on the structure prior are trained as follows:
step S21: initializing the generating network in the model, λ1,λ2,λ3All 1, batch size 32, learning rate 10-4And remains unchanged during the whole training process;
step S22: for the face image super-resolution task, specifically, the low-resolution image is subjected to super-resolution processing through a generation network to obtain a generated high-resolution face image, and L is reconstructed with the real high-resolution face image2Calculating loss by inputting the generated high-resolution face image into a discriminatorA loss function.
Step S23: the input of the human face matching image generation network is a high-resolution human face image generated by the generation network and a target high-resolution human face image, and the L of the high-resolution human face image and the target high-resolution human face image is calculatedpriorA structure prior loss function.
Inputting the target high-resolution face image and the high-resolution face image generated by the generation network into a face matching image generation network, respectively obtaining a matching image of the target high-resolution face image and a matching image of the generated face image, and enabling the position of five sense organs of the generated face image to meet the target requirement by constraining the Euclidean distance of the two matching images.
Step S24: the inputs of the discrimination network are the high-resolution face image generated by the generation network of the model and the target high-resolution face image. The discrimination network judges the input face images and the loss function $L_D$ is calculated; this loss function is only used to update the parameters of the discrimination network.
Step S25: the generation network and the discrimination network of the model are trained alternately, updating the network weights.
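The alternating schedule of steps S21-S25 can be sketched as a runnable toy. The scalar "generator" (a single parameter a with target value 2.0) and the placeholder "discriminator" update below are illustrative stand-ins, not the networks of the invention; only the alternation of updates at learning rate 1e-4 and the stop-once-stabilized criterion are shown.

```python
def make_toy_steps(lr=1e-4):
    """Toy stand-ins for one generator / one discriminator update."""
    state = {'a': 0.0, 'd_updates': 0}
    def g_step():
        # One gradient step on the toy generator loss (a - 2)^2.
        state['a'] -= lr * 2.0 * (state['a'] - 2.0)
        return (state['a'] - 2.0) ** 2
    def d_step():
        # Placeholder for a gradient step on the discriminator loss L_D.
        state['d_updates'] += 1
    return g_step, d_step, state

def train(num_iters, g_step, d_step):
    """Alternate generator and discriminator updates (steps S22-S25)."""
    losses = []
    for _ in range(num_iters):
        losses.append(g_step())   # S22/S23: update the generator
        d_step()                  # S24: update the discriminator on L_D only
        # S25: stop once the loss no longer decreases (has stabilized).
        if len(losses) >= 3 and losses[-1] >= losses[-2] >= losses[-3]:
            break
    return losses
```

In real training, `g_step` would minimize the weighted sum of the three generator losses and `d_step` would minimize $L_D$; the control flow is the same.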
Step S3: and carrying out the super-resolution processing on the low-resolution face image in the test data set by using the trained generation network.
The face matching image generation network adopts a BiSeNet network, which comprises two branches: a spatial path and a context path. The spatial path comprises three convolutional layers and yields a feature map of 1/8 the input size; a global pooling layer is added at the tail of the context path (the Xception backbone) to maximize the receptive field of the network.
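The 1/8 feature-map claim for the spatial path can be checked numerically. The kernel size 3 and padding 1 below are assumptions consistent with the published BiSeNet design (three stride-2 convolutions, each halving the spatial resolution), not values stated in this description.

```python
def conv_out_size(size, kernel=3, stride=2, padding=1):
    # Standard convolution output-size formula.
    return (size + 2 * padding - kernel) // stride + 1

def spatial_path_size(size):
    """Apply the three stride-2 convolutional layers of the spatial path."""
    for _ in range(3):
        size = conv_out_size(size)
    return size
```

For a 256-pixel input, each layer halves the size (256 → 128 → 64 → 32), i.e. 1/8 of the input.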
To describe a specific embodiment of the present invention in detail and verify its effectiveness, the proposed method is trained on an open data set (CelebA), which contains about 200,000 face images. The LFW face data set is selected as the test set for evaluating the generalization performance of the model.
Firstly, the face images of the CelebA face data set are preprocessed: the original high-resolution face images are cropped in a uniform aligned-cropping manner, keeping only the face region; bilinear downsampling is applied to the aligned and cropped high-resolution face images to obtain the corresponding low-resolution face images; and data augmentation, comprising random horizontal flipping and random color transformation, is applied to the resulting low-/high-resolution face image pairs to increase the number of images in the training data set. The model is trained with the training data set, and the model parameters are optimized by gradient back-propagation to obtain a face image super-resolution model.
To test the validity of the model, the LFW set is used as the test set for the trained model, and the visualization results are shown in fig. 1. In the experiment, the results are compared with the ground-truth real images, as shown in fig. 1; this embodiment effectively demonstrates the effectiveness of the method of the invention for face image super-resolution.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. A face image super-resolution method based on structure prior is characterized by comprising the following steps:
s1, preprocessing images in a face image data set to obtain a training data set and a testing data set:
s2, training a model by using a training data set to obtain a face image super-resolution model which can super-resolve a low-resolution face image into a high-resolution face image, wherein the face image super-resolution model comprises a generating network, a face matching image generating network and a judging network; generating a network containing 16 residual blocks; the face matching image generation network is a BiSeNet network;
using a low-resolution face image as the input of a model, using a corresponding high-resolution image as supervision, and training a generation network in the model;
inputting the target high-resolution face image and the high-resolution face image generated by the generation network into the discrimination network, which judges whether the input images are real or fake; training of the model is complete after it has iterated multiple times and stabilized;
inputting the target high-resolution face image and the high-resolution face image generated by the generation network into the face matching image generation network to obtain, respectively, a matching image of the target high-resolution face image and a matching image of the generated face image; constraining the Euclidean distance between the two matching images so that the positions of the facial features in the generated face image meet the target requirement;
and S3, using the trained face image super-resolution model to super-resolve the low-resolution images in the test data set and evaluating its super-resolution performance.
2. The method for super-resolution of face images based on structure priors according to claim 1, wherein the BiSeNet network comprises two branches: a spatial path and a context path; the spatial path comprises three convolutional layers and yields a feature map of 1/8 the input size; and a global pooling layer is added at the tail of the context path (the Xception backbone) to maximize the receptive field of the network.
3. The method for super-resolution of face images based on structure priors according to claim 1, wherein step S2 includes:
s21, randomly initializing weight parameters of a generation network and a discrimination network by using standard Gaussian distribution, wherein the reconstruction loss function of the generation network is L2The structure prior loss is LpriorThe function of the antagonistic loss isDiscriminating the loss function of the network as
S22, inputting the low-resolution face image into the generation network, which outputs a generated image of the same size as the target high-resolution face image; using the generated image as input of the discrimination network and iterating until the adversarial loss function $L_{adv}$ and the loss function $L_2$ both decrease and stabilize;
s23, inputting the target high-resolution face image and the generated image into a face matching image generation network to respectively obtain corresponding matching images; calculating the Euclidean distance between the two matching images to ensure that the generated human face image topological structure meets the target requirement;
s24, judging whether the network input is a high-resolution face image generated by a generating network and a target high-resolution face image, judging whether the network input image is true or false, and calculating a loss functionThe loss functionOnly used for updating and judging network parameters;
and S25, alternately training the generation network and the discrimination network until none of the loss functions decreases any further, obtaining the final face image super-resolution model.
4. The method for super-resolution of face images based on structure priors according to claim 3, wherein the objective function of the generation network is as follows:

$L_G=\lambda_1 L_2+\lambda_2 L_{adv}+\lambda_3 L_{prior}$

wherein $\lambda_1,\lambda_2,\lambda_3$ are balance factors used to adjust the weight of each loss function;

the objective function of the discrimination network is $L_D$.
5. The method for super-resolution of face images based on structure priors according to claim 3, wherein the reconstruction loss function of the generation network is:

$L_2=\mathbb{E}\left[\left\|F_{generator}(x)-y\right\|_2\right]$

wherein x and y are a low-resolution face image and the corresponding high-resolution face image sampled from the low-resolution image set X and the high-resolution image set Y, respectively, E(·) denotes the averaging operation, $\|\cdot\|_2$ denotes the $L_2$ norm, and $F_{generator}$ is the mapping function of the generation network.
6. The method for super-resolution of face images based on structure priors according to claim 3, wherein the adversarial loss function of the generation network is:

$L_{adv}=\mathbb{E}_{x\sim p(x)}\left[-\log D(G(x))\right]$

wherein E(·) denotes the averaging operation, $x\sim p(x)$ denotes sampling low-resolution images from the distribution p(x), D(·) denotes the mapping function of the discrimination network, and G(x) denotes the high-resolution face image generated by the generation network.
7. The method for super-resolution of face images based on structure priors according to claim 3, wherein the structure prior loss function is:

$L_{prior}=\mathbb{E}\left[\left\|\Phi\left(F_{generator}(x)\right)-\Phi(y)\right\|_2\right]$

wherein x and y are a low-resolution face image and the corresponding high-resolution face image sampled from the low-resolution image set X and the high-resolution image set Y, respectively, E(·) denotes the averaging operation, $\|\cdot\|_2$ denotes the $L_2$ norm, $F_{generator}$ is the mapping function of the generation network, and $\Phi$ is the mapping function of the face matching image generation network.
8. The method for super-resolution of face images based on structure priors according to claim 3, wherein the objective function of the discrimination network is as follows:

$L_D=\mathbb{E}_{y\sim p(y)}\left[-\log D(y)\right]+\mathbb{E}_{x\sim p(x)}\left[-\log\left(1-D(G(x))\right)\right]$

wherein E(·) denotes the averaging operation, $y\sim p(y)$ denotes sampling target high-resolution images from the distribution p(y), D(·) denotes the mapping function of the discrimination network, $x\sim p(x)$ denotes sampling low-resolution images from the distribution p(x), and G(x) denotes the high-resolution image generated by the generation network.
10. The method for super-resolution of face images based on structure priors according to claim 1, wherein step S1 comprises the following steps:
cutting an original high-resolution face image in a uniform alignment cutting mode, and only reserving a face area; using a bilinear downsampling method to downsample, align and cut the high-resolution face image to obtain a corresponding low-resolution face image; performing data augmentation on the generated low-score-high-score face image pair to increase the number of images in a training data set; fourth, the LFW face data set is used as a test set for testing the generalization performance of its model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911271596.8A CN111080521A (en) | 2019-12-12 | 2019-12-12 | Face image super-resolution method based on structure prior |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111080521A true CN111080521A (en) | 2020-04-28 |
Family
ID=70314098
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615582A (en) * | 2018-11-30 | 2019-04-12 | 北京工业大学 | A kind of face image super-resolution reconstruction method generating confrontation network based on attribute description |
CN110148085A (en) * | 2019-04-22 | 2019-08-20 | 智慧眼科技股份有限公司 | Face image super-resolution reconstruction method and computer-readable storage medium |
CN110211035A (en) * | 2019-04-18 | 2019-09-06 | 天津中科智能识别产业技术研究院有限公司 | Merge the image super-resolution method of the deep neural network of mutual information |
CN110415172A (en) * | 2019-07-10 | 2019-11-05 | 武汉大学苏州研究院 | A kind of super resolution ratio reconstruction method towards human face region in mixed-resolution code stream |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111860212A (en) * | 2020-06-29 | 2020-10-30 | 北京金山云网络技术有限公司 | Face image super-segmentation method, device, equipment and storage medium |
CN111860212B (en) * | 2020-06-29 | 2024-03-26 | 北京金山云网络技术有限公司 | Super-division method, device, equipment and storage medium for face image |
WO2022087941A1 (en) * | 2020-10-29 | 2022-05-05 | 京东方科技集团股份有限公司 | Face reconstruction model training method and apparatus, face reconstruction method and apparatus, and electronic device and readable storage medium |
CN112581370A (en) * | 2020-12-28 | 2021-03-30 | 苏州科达科技股份有限公司 | Training and reconstruction method of super-resolution reconstruction model of face image |
CN113628107A (en) * | 2021-07-02 | 2021-11-09 | 上海交通大学 | Face image super-resolution method and system |
CN113628107B (en) * | 2021-07-02 | 2023-10-27 | 上海交通大学 | Face image super-resolution method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110610464A (en) | Face image super-resolution method based on dense residual error neural network | |
CN111080513B (en) | Attention mechanism-based human face image super-resolution method | |
CN108710831B (en) | Small data set face recognition algorithm based on machine vision | |
CN110135366B (en) | Shielded pedestrian re-identification method based on multi-scale generation countermeasure network | |
CN111080521A (en) | Face image super-resolution method based on structure prior | |
CN109509152B (en) | Image super-resolution reconstruction method for generating countermeasure network based on feature fusion | |
CN111428667A (en) | Human face image correcting method for generating confrontation network based on decoupling expression learning | |
CN112396607B (en) | Deformable convolution fusion enhanced street view image semantic segmentation method | |
CN110827213A (en) | Super-resolution image restoration method based on generation type countermeasure network | |
CN110211035B (en) | Image super-resolution method of deep neural network fusing mutual information | |
CN112800937B (en) | Intelligent face recognition method | |
CN110660020A (en) | Image super-resolution method of countermeasure generation network based on fusion mutual information | |
CN110427968A (en) | A kind of binocular solid matching process based on details enhancing | |
CN112288627B (en) | Recognition-oriented low-resolution face image super-resolution method | |
CN112598775B (en) | Multi-view generation method based on contrast learning | |
CN108171249B (en) | RGBD data-based local descriptor learning method | |
CN113792641A (en) | High-resolution lightweight human body posture estimation method combined with multispectral attention mechanism | |
CN115754954A (en) | Feature fusion method applied to radar and AIS track association | |
CN115984339A (en) | Double-pipeline point cloud completion method based on geometric feature refining and confrontation generation network | |
CN112598575B (en) | Image information fusion and super-resolution reconstruction method based on feature processing | |
CN115860113B (en) | Training method and related device for self-countermeasure neural network model | |
CN114764754B (en) | Occlusion face restoration method based on geometric perception priori guidance | |
CN110782503A (en) | Face image synthesis method and device based on two-branch depth correlation network | |
CN113344110B (en) | Fuzzy image classification method based on super-resolution reconstruction | |
CN115294182A (en) | High-precision stereo matching method based on double-cross attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20200428 |