CN101425138A - Human face aging analogue method based on face super-resolution process - Google Patents


Info

Publication number
CN101425138A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008102266292A
Other languages
Chinese (zh)
Other versions
CN101425138B (en)
Inventor
王蕴红
耿伟
姜方圆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN2008102266292A priority Critical patent/CN101425138B/en
Publication of CN101425138A publication Critical patent/CN101425138A/en
Application granted granted Critical
Publication of CN101425138B publication Critical patent/CN101425138B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a face aging simulation method based on face super-resolution processing. The method comprises the following steps: normalizing the face image; training a super-resolution method for each age group; reducing the resolution of the input image; and performing face super-resolution processing for a specified age group, i.e. using the trained face super-resolution method to fill facial texture information of the specified age into the low-resolution input face image, thereby obtaining a face aging simulation image. The usable face super-resolution methods are learning-based; the invention adopts eigentransformation but is applicable to any learning-based face super-resolution method. By relying on learning-based face super-resolution, the invention simulates face aging realistically and credibly; and because only the change of facial texture is processed, the computation is fast.

Description

Face aging simulation method based on face super-resolution processing
Technical Field
The invention relates to a face aging simulation method based on face super-resolution, and belongs to the technical field of pattern recognition.
Background
Among the factors influencing the accuracy of face recognition, pose, expression and illumination are external factors for which relatively mature solutions exist. Age change, by contrast, alters an individual's features so much that the individual may no longer be recognized. A face aging simulation method can automatically update a face database to enhance the robustness of a face recognition system, and can also be used for searching for missing persons and in the electronic entertainment field.
Designing and implementing a mature and reliable face aging simulation and age recognition algorithm has many practical uses:
The facial features of a particular individual can be modeled. Through face aging simulation, the probable current appearance and facial features of a person missing for years can be depicted. A specific example: when a person went missing as a child and his or her appearance has since changed greatly, the adult face can be simulated from a childhood photograph.
Large face databases can be updated automatically. Some management systems hold very large face databases, and over time the stored photographs inevitably diverge from the actual appearance of the individuals concerned. This inconveniences management, while manual updating requires the cooperation of the user population. Accurate and realistic face aging simulation makes it possible to update samples for which cooperation is impractical, such as suspects or persons who cannot easily be reached.
A face recognition system robust to age can be implemented. Existing face recognition systems cannot handle the effects of aging: once trained, a system may recognize face images of a given age group well at the time, but its accuracy degrades considerably after several years. Aging simulation of the face images provides a good solution to this problem.
Existing face aging simulation methods fall mainly into four categories: methods based on geometric transformation, on average faces, on three-dimensional information, and on statistical information.
Geometric transformation methods change the biological organs of the same individual by coordinate transformation to obtain their different morphologies; see reference [1]: Thompson D.W. On Growth and Form. Cambridge: Cambridge Univ. Press, 1961; reference [2]: Pittenger J.B. and Shaw R.E. Aging Faces as Viscal-Elastic Events: Implications for a Theory of Nonrigid Shape Perception [J]. J. Experimental Psychology: Human Perception and Performance, 1975, V1(4): 374-382; reference [3]: Pittenger J.B., Shaw R.E., and Mark L.S. Perceptual Information for the Age Level of Faces as a Higher Order Invariant of Growth [J]. J. Experimental Psychology: Human Perception and Performance, 1979, V5(3): 478-493. Two coordinate changes dominate in geometric transformation: "shear strain" and "cardioidal strain". Shear strain is a stretching transformation parallel to the surface of the object; cardioidal strain is a geometric transformation approximating how the contour of the face and skull changes as the face ages, and is the better of the two for simulating the aging of the facial contour. Face aging simulation based on coordinate change only handles features such as the shape and outline of the face; no research has shown that it can handle changes of texture features.
The average-face-based method mainly uses average faces of different age groups together with a caricature algorithm for face aging simulation. See reference [4]: Burt D.M. and Perrett D.I. Perception of Age in Adult Caucasian Male Faces: Computer Graphic Manipulation of Shape and Colour Information [J]. Proc. Royal Soc. London, 1995, V259: 137-143. The method computes the difference in color and shape between face images of the 25-29 and 50-54 age groups, weights the difference according to a certain algorithm, and applies it to an image of another individual at a certain age, giving the required aging simulation image for the specified age. In this approach, Burt uses the color difference between the images of the old age group and all images in the face library, then amplifies the difference with a caricature algorithm to realize face aging simulation; the method can only simulate the face in old age and cannot simulate other age groups.
The three-dimensional-information-based method uses richer three-dimensional face information and simulates face aging with methods such as the caricature algorithm, principal component analysis (PCA) and three-dimensional contour models, which can achieve good results. See reference [5]: O'Toole A.J., Vetter T., Volz H., et al. Three-Dimensional Caricatures of Human Heads: Distinctiveness and the Perception of Facial Age [J]. Perception, 1997, V26: 719-732; reference [6]: O'Toole A.J., Price T., Vetter T., et al. 3D Shape and 2D Surface Textures of Human Faces: The Role of "Averages" in Attractiveness and Age [J]. Image and Vision Computing, 1999, V18(1): 9-20; reference [7]: Choi C. Age Change for Predicting Future Faces [A]. Proceedings of the IEEE International Fuzzy Systems Conference [C]. 1999: 1603-1608. The method requires detailed three-dimensional face features, which can only be acquired with a three-dimensional scanner; the cost and execution time limit its wide application.
The statistical-information-based method learns statistically from a large amount of data and simulates the influence of age change on face images with a model, chiefly the grey-level and contour face model based on ASM (Active Shape Model). See reference [8]: Lanitis A., Taylor C.J., and Cootes T.F. Modelling the Process of Ageing in Face Images [A]. Proceedings of IEEE ICCV'99 [C]. 1999, V1: 131-136; reference [9]: Lanitis A., Taylor C.J., and Cootes T.F. Toward Automatic Simulation of Ageing Effects on Face Images [J]. IEEE Trans. on PAMI, 2002, V24(4): 442-456; reference [10]: Ramanathan N. and Chellappa R. Face Verification Across Age Progression [J]. IEEE Trans. on Image Processing, 2006, V15(11): 3349-3361. Lanitis and Taylor propose an ASM-based grey-level and contour face model, which normalizes a face image toward the average face and extracts feature vectors by PCA; the model can handle not only age changes but also illumination, pose and other influences. Using the feature vectors sensitive to age change, they give a polynomial aging function and train it with a neural network algorithm to simulate aged face images of different ages and to estimate the age of test images. In most cases, the simulation results are better than the traditional PCA method both qualitatively and quantitatively, but the effect is not ideal for individuals unlike any individual in the training library, for example faces of a race different from the training library, or occluded faces.
Narayanan Ramanathan and Rama Chellappa present a craniofacial growth model that can simulate the growth of the human face from childhood to adulthood. The model is inspired by "cardioidal strain": landmark points define the collected shape information of faces during growth, and the face aging process is expressed with both linear and nonlinear constraints. According to their experimental results the method solves the simulation of face aging from childhood to adulthood rather well, but it does not take grey level, texture and similar characteristics into account. Methods based on statistical information can achieve a more realistic and credible face aging simulation effect.
Disclosure of Invention
The invention aims to reduce the influence of age on face recognition and to predict the change of face images caused by age change, with a face aging simulation method that is fast and whose results are realistic and credible. To this end, the invention provides a face aging simulation method based on face super-resolution. Face images are highly structured and structurally similar. When the resolution of a face image is reduced, most high-frequency information is lost; research on face super-resolution, however, shows that the dimensionality of the face feature space is insensitive to image resolution, i.e. image resolution does not greatly affect the face recognition result. Moghaddam's experiment showed that when face images are reduced to 12 x 21 pixels, the recognition rate on a database of 1800 face images selected from FERET remains as high as 95% or more. In other words, low-resolution face images still carry identity information.
A general learning-based super-resolution method needs a pair of corresponding low-resolution and high-resolution image databases, whose images correspond one-to-one as low- and high-resolution versions of each other. The super-resolution algorithm learns the information in the two databases and reconstructs a mapping for a new input low-resolution image. As discussed above, a low-resolution image retains little high-frequency information, i.e. little texture information, but its identity information remains. If the high-resolution image database is exchanged for a high-resolution face database of a target age, the super-resolution method should be able to retain and use the identity information extracted from the low-resolution image, learn the facial texture information of the target age, and map it onto the original low-resolution face image, realizing the aging simulation effect. For example, take the group of face images in age group A1 as the low-resolution image database and the group of face images in age group A2 as the high-resolution image database, and train the super-resolution algorithm on them. Given a certain face image in age group A1, first reduce its resolution and then process it with the trained super-resolution algorithm to obtain the simulated image of that person's face in age group A2. The process of the method is as follows:
the method comprises the following steps: normalizing the face image;
step two: training a super-resolution method of each age group;
step three: reducing the resolution of the input image;
step four: and performing face super-resolution processing of a specified age group.
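The four steps above can be sketched as a pipeline. The sketch below is illustrative only: the patent trains a learning-based super-resolution model in step two, which is replaced here by a nearest-neighbour lookup for brevity, and the normalization is a simple intensity rescaling stand-in.

```python
import numpy as np

def simulate_aging(face, source_faces, target_faces, factor=4):
    """Illustrative stand-in for the four-step pipeline.

    face         -- input face image (2-D array) in the source age group
    source_faces -- training faces of the source age group (become the
                    'low-resolution' database after downsampling)
    target_faces -- corresponding training faces of the target age group
                    (the 'high-resolution' database)
    """
    def normalize(img):                       # step one: normalize (placeholder)
        img = img.astype(float)
        span = np.ptp(img)
        return (img - img.min()) / span if span > 0 else img * 0.0

    def reduce(img):                          # step three: lower the resolution
        return img[::factor, ::factor]

    face_low = reduce(normalize(face))
    low_db = [reduce(normalize(s)) for s in source_faces]
    high_db = [normalize(t) for t in target_faces]
    # step four stand-in: map the identity carried by the low-resolution
    # input onto the target-age texture of the closest training pair
    i = int(np.argmin([np.linalg.norm(face_low - l) for l in low_db]))
    return high_db[i]
```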
In the above technical scheme, the age granularity of the face aging simulation result depends on the age-group division of the training face image library: the finer the training age groups, the finer the resulting age.
In the above technical solution, all face images should be normalized to the same shape and size before face super-resolution is performed; that is, the normalization operation should precede any use of the face images.
The invention has the advantages that:
(1) various learning-based face super-resolution methods can be applied;
(2) only the texture is processed, so that the operation speed is high;
(3) based on learning, the result is more real and credible.
Drawings
FIG. 1 is a flow chart of a face aging simulation method based on face super-resolution;
FIG. 2 is a schematic diagram of the eigentransformation face super-resolution algorithm;
FIG. 3 shows simulation results of face aging from young to middle-aged using the method of the present invention with eigentransformation;
FIG. 4 shows simulation results of face rejuvenation from middle-aged to young using the method of the present invention with eigentransformation.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 shows a specific implementation flow of face aging simulation based on face super-resolution of the present invention:
the method comprises the following steps: normalizing the face image;
The face image is normalized, including histogram equalization, picture size normalization and the like. Histogram equalization stretches the histogram of the face image nonlinearly and redistributes the pixel values so that the number of pixels in each grey-level range is approximately the same; the grey-level histogram of the original image thus changes from a concentrated grey-level interval to a uniform distribution over the whole grey-level range. Picture size normalization cuts the face portion of the image to a specified size; when cropping to the specified size, positioning by the eye locations can be adopted, and the face image can further be warped to the average face shape by thin-plate spline interpolation.
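The histogram equalization of step one can be sketched as follows. This is a minimal numpy version of the classic CDF-based mapping, assuming 8-bit grayscale images; it is not code from the patent.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image: redistribute pixel
    values so the grey-level histogram becomes approximately uniform
    over the full 0-255 range."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first non-zero CDF value
    # classic equalization mapping: rescale the CDF to [0, 255]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12) * 255)
    return lut.astype(np.uint8)[img]
```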
Step two: training a super-resolution method of each age group;
A face super-resolution algorithm is trained with face data sets of different age groups, available for selection. Training mainly means constructing the eigenface space with the PCA algorithm; the finer the age-group division, i.e. the smaller the age span within each group, the more accurate the face aging simulation result.
The specific calculation process of the eigenfaces is as follows. The face image data set consists of $M$ face images, each stored as an $n$-dimensional column vector $\vec{l}_i$ whose entries are the pixels of the $i$-th face image stacked column by column; $n$ is the number of pixels of each face image, $M$ is the number of face images in the training face data set, and $n$ and $M$ are in general positive integers greater than 1. The average face of the data set is easily calculated from the training set:

$$\vec{m}_l = \frac{1}{M}\sum_{i=1}^{M}\vec{l}_i \qquad (1)$$

Subtracting the average face from each face image vector gives the $n \times M$ matrix

$$L = [\vec{l}_1 - \vec{m}_l, \ldots, \vec{l}_M - \vec{m}_l] = [\vec{l}^{\,\prime}_1, \ldots, \vec{l}^{\,\prime}_M] \qquad (2)$$

where $\vec{m}_l$ is the average face and $\vec{l}^{\,\prime}_i$ ($i = 1, \ldots, M$) is the difference between each face image and the average face. An orthonormal eigenvector matrix, i.e. the eigenface space, can then be calculated from the covariance matrix $W$:

$$W = \sum_{i=1}^{M} (\vec{l}_i - \vec{m}_l)(\vec{l}_i - \vec{m}_l)^T = LL^T \qquad (3)$$

Computing the eigenvector matrix directly from $W$ is impractical because $W$ is usually too large. Instead, the smaller matrix $R = L^T L$ is eigen-decomposed first:

$$(L^T L)V_l = V_l \Lambda_l \qquad (4)$$

where $V_l$ is the eigenvector matrix and $\Lambda_l$ the eigenvalue matrix. Left-multiplying both sides of equation (4) by $L$ gives

$$(LL^T)LV_l = LV_l\Lambda_l \qquad (5)$$

so the eigenvector matrix $E_l$ of $C = LL^T$ follows from equation (5):

$$E_l = LV_l\Lambda_l^{-1/2} \qquad (6)$$

PCA describes a face image by a series of weighted eigenfaces. Projecting a given face image $\vec{x}_l$ into the eigenface space yields a parameter vector:

$$\vec{w}_l = E_l^T(\vec{x}_l - \vec{m}_l) \qquad (7)$$

With this parameter vector and $K$ eigenfaces, the original face image can be reconstructed. If $\vec{r}_l$ denotes the reconstructed low-resolution face image, it can be expressed as

$$\vec{r}_l = E_l\vec{w}_l + \vec{m}_l \qquad (8)$$

where $E_l$ is the eigenvector matrix and $\vec{m}_l$ is the average face of the face data set.
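The eigenface computation of equations (1)-(8), including the small-matrix trick of equations (4)-(6), can be sketched in numpy as follows. The function names are illustrative, not from the patent.

```python
import numpy as np

def train_eigenfaces(images, K):
    """Eigenface training per equations (1)-(6).

    images -- n x M matrix, one face per column (n pixels, M images)
    K      -- number of eigenfaces to keep
    Returns (E, m): eigenface matrix E (n x K) and average face m (n x 1).
    """
    m = images.mean(axis=1, keepdims=True)        # eq (1): average face
    L = images - m                                # eq (2): centred faces
    # eq (4): eigen-decompose the small M x M matrix L^T L instead of
    # the huge n x n covariance W = L L^T of eq (3)
    lam, V = np.linalg.eigh(L.T @ L)
    order = np.argsort(lam)[::-1][:K]             # largest eigenvalues first
    lam, V = lam[order], V[:, order]
    E = L @ V / np.sqrt(lam)                      # eq (6): E = L V Lambda^{-1/2}
    return E, m

def project(E, m, x):
    return E.T @ (x - m)                          # eq (7): parameter vector

def reconstruct(E, m, w):
    return E @ w + m                              # eq (8): reconstruction
```

A training image lies in the span of the kept eigenfaces (for K up to M-1), so projecting and reconstructing it recovers the image.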
Step three: reducing the resolution of the input image;
Downsampling the input image to low resolution discards the texture information of the face image, preparing for step four, in which the face super-resolution method reconstructs and fills in the facial texture information of the specified age group.
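Step three's resolution reduction can be sketched as block-average downsampling; this is one common choice, the patent itself does not fix the downsampling method.

```python
import numpy as np

def downsample(img, factor):
    """Reduce resolution by averaging non-overlapping factor x factor
    blocks, discarding the high-frequency texture that the
    super-resolution step will later refill."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor   # crop to a multiple of factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))
```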
Step four: performing face super-resolution processing of a specified age group;
The trained face super-resolution method fills the facial texture information of the specified age into the low-resolution input face image, thereby obtaining the face aging simulation image. Any learning-based face super-resolution method can be used; in this embodiment, the specific implementation of face image super-resolution is described taking eigentransformation as an example.
The eigentransformation super-resolution method realizes face image super-resolution through the mapping relationship between two training sets. It uses PCA to express the structural similarity of face images and realizes the mapping between low-resolution and high-resolution training samples through a transformation function. In the feature space extracted by PCA, information components of different frequencies are uncorrelated; by choosing the number of eigenfaces, face information can be extracted to the maximum extent while noise is removed.
FIG. 2 illustrates the flow of the eigentransformation super-resolution algorithm.

First, PCA is applied to the input low-resolution face image to obtain a parameter vector $[c_1, c_2, \ldots, c_M]^T$. This parameter vector is derived from the projection of the input image into the eigenface space, cf. equation (7); $c_1, c_2, \ldots, c_M$ are the coordinates of the input image with respect to the low-resolution training images, as made explicit in equation (9) below.
Then, keeping the parameter vector unchanged, the set of eigenvectors trained on the low-resolution face images is exchanged for the corresponding set trained on the high-resolution face images, giving a preliminary approximate super-resolution result. Specifically, eigentransformation establishes the mapping relationship between the low-resolution and high-resolution face libraries through the feature parameters extracted by PCA. From equations (6) and (8):

$$\vec{r}_l = LV_l\Lambda_l^{-1/2}\vec{w}_l + \vec{m}_l = L\vec{c} + \vec{m}_l \qquad (9)$$

where $\vec{c} = V_l\Lambda_l^{-1/2}\vec{w}_l = [c_1, c_2, \ldots, c_M]^T$. Equation (9) can also be rewritten as

$$\vec{r}_l = L\vec{c} + \vec{m}_l = \sum_{i=1}^{M} c_i\vec{l}^{\,\prime}_i + \vec{m}_l \qquad (10)$$

This shows that the input low-resolution image can be recovered from a linear combination of the $M$ images in the low-resolution training library; each $c_i$ reflects the contribution, i.e. the weight, of a training image to the reconstructed image. Replacing each low-resolution face image $\vec{l}^{\,\prime}_i$ with the corresponding high-resolution image $\vec{h}^{\,\prime}_i$, and the low-resolution average face with the high-resolution average face $\vec{m}_h$, gives

$$\vec{x}_h = \sum_{i=1}^{M} c_i\vec{h}^{\,\prime}_i + \vec{m}_h \qquad (11)$$

$\vec{x}_h$ should then be the desired high-resolution image reconstructed from the low resolution.
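The mapping of equations (9)-(11) can be sketched in numpy: compute the combination coefficients $c_i$ from the low-resolution training set, then apply the same weights to the high-resolution (target-age) counterparts. The function name and parameters are illustrative.

```python
import numpy as np

def eigentransform(x_low, low_db, high_db, K):
    """Eigentransformation step per equations (9)-(11).

    x_low   -- input low-resolution face, column vector (n_l x 1)
    low_db  -- n_l x M matrix of low-resolution training faces
    high_db -- n_h x M matrix of corresponding high-resolution faces
    """
    m_l = low_db.mean(axis=1, keepdims=True)
    m_h = high_db.mean(axis=1, keepdims=True)
    L = low_db - m_l                              # eq (2): centred low-res faces
    lam, V = np.linalg.eigh(L.T @ L)              # eq (4): small-matrix PCA
    order = np.argsort(lam)[::-1][:K]
    lam, V = lam[order], V[:, order]
    E_l = L @ V / np.sqrt(lam)                    # eq (6)
    w_l = E_l.T @ (x_low - m_l)                   # eq (7)
    c = V @ (w_l / np.sqrt(lam)[:, None])         # eq (9): c = V Lambda^{-1/2} w_l
    # eq (11): same weights applied to the high-resolution counterparts
    return (high_db - m_h) @ c + m_h
```

When the low-resolution database is a linear reduction H of the high-resolution one, reducing the output's resolution recovers the reconstruction of the input, which is exactly the first constraint of equation (14).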
Next, a PCA projection is performed once in the high-resolution eigenface space, and the resulting principal components are constrained to reduce or even eliminate the influence of distortion and noise. Specifically, because of noise interference, the image obtained by eigentransformation cannot approximate the true high-resolution image exactly; it must satisfy two constraints. First, blurring and similar processing must recover the original low-resolution image from it. Second, it must resemble a face image under high-resolution conditions. For the first constraint, equation (9) shows the linear relationship between the low-resolution and high-resolution images when noise is ignored; for the training set,

$$\vec{l}_i = H\vec{h}_i \qquad (12)$$

$$\vec{m}_l = H\vec{m}_h \qquad (13)$$

where $H$ is the linear operator from the high-resolution image to the low-resolution image, $\vec{l}_i$ and $\vec{h}_i$ are the low-resolution and high-resolution training images respectively, and $\vec{m}_l$ and $\vec{m}_h$ are the low-resolution and high-resolution average faces respectively. Substituting equations (12) and (13) into equations (10) and (11) yields

$$\vec{r}_l = \sum_{i=1}^{M} c_i H\vec{h}^{\,\prime}_i + H\vec{m}_h = H\left(\sum_{i=1}^{M} c_i\vec{h}^{\,\prime}_i + \vec{m}_h\right) = H\vec{x}_h \qquad (14)$$

Since $\vec{r}_l$ is the reconstruction of the low-resolution input image $\vec{x}_l$, reducing the resolution of $\vec{x}_h$ yields $\vec{r}_l$; that is, the image obtained by eigentransformation can be blurred back to an image of the original resolution.
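Equations (12)-(14) rest on $H$ being linear, so that averaging commutes with $H$ and equation (13) follows from equation (12). A quick numerical check, using a hypothetical $H$ (2x block averaging on flattened vectors) that stands in for any linear resolution-reduction operator:

```python
import numpy as np

def H(x):
    """Hypothetical linear operator H: 2x block averaging of a 1-D vector."""
    return x.reshape(-1, 2).mean(axis=1)

rng = np.random.default_rng(0)
highs = [rng.random(8) for _ in range(5)]   # high-resolution training 'images'
lows = [H(h) for h in highs]                # eq (12): l_i = H h_i
m_h = np.mean(highs, axis=0)
m_l = np.mean(lows, axis=0)
# eq (13): the low-resolution mean equals H applied to the high-resolution mean
assert np.allclose(m_l, H(m_h))
```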
For the second constraint, equation (11) indicates
Figure A200810226629D001111
Is a linear combination of high resolution face images and therefore it is necessarily like a face image. However, because
Figure A200810226629D001114
Rather than being generated by a high resolution image database, some noise due to distortion that is not like a human face image is inevitable. If the eigenface space pair of the high-resolution face database is utilized
Figure A200810226629D001115
Reconstruction should be performed to mitigate these noises.
Let $E_h$ be the eigenvector matrix computed from the high-resolution face image database, and project $\vec{x}_h$ into the high-resolution eigenface space:

$$\vec{w}_h = E_h^T (\vec{x}_h - \vec{m}_h) \qquad (15)$$

Denote by $w_h(i)$ the $i$-th principal component of $\vec{x}_h$, i.e. its projection coordinate in the high-resolution eigenface space. If $|w_h(i)|$ is far greater than the scale $\sqrt{\lambda_i}$ set by the eigenvalue $\lambda_i$ of the corresponding high-resolution eigenface, then noise caused by non-face distortion has likely been introduced; otherwise no such noise is introduced, and all the information contained in $\vec{x}_h$ is related to human faces.
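The projection in equation (15) and the eigenvalue-based noise diagnostic can be sketched as follows; the dimensions, the synthetic eigenface basis, the threshold factor, and the variable names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed sizes: 64-dim image vectors, k = 5 retained high-res eigenfaces.
dim, k = 64, 5
E_h = np.linalg.qr(rng.normal(size=(dim, k)))[0]   # orthonormal eigenface columns
m_h = rng.normal(size=dim)                         # high-res average face
eigvals = np.array([9.0, 4.0, 1.0, 0.25, 0.04])    # eigenvalues lambda_i

x_h = rng.normal(size=dim)                         # preliminary super-res result
w_h = E_h.T @ (x_h - m_h)                          # equation (15): projection

# A component far outside the scale sqrt(lambda_i) is likely non-face noise;
# the factor 3.0 here is an arbitrary illustrative choice of threshold.
suspicious = np.abs(w_h) > 3.0 * np.sqrt(eigvals)
```

Components flagged here are exactly those that the constraint in the next step will limit.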
Before reconstructing the face image, the principal components are constrained as follows to reduce noise:

$$w_h'(i) = \begin{cases} w_h(i), & |w_h(i)| \le a\sqrt{\lambda_i} \\ \operatorname{sign}(w_h(i)) \cdot a\sqrt{\lambda_i}, & |w_h(i)| > a\sqrt{\lambda_i} \end{cases}, \quad a > 0 \qquad (16)$$

where the constraint parameter $a$ is a positive real number; that is, $w_h(i)$ is limited to the range $[-a\sqrt{\lambda_i},\, a\sqrt{\lambda_i}]$, and the constrained coefficients $\vec{w}_h'$ are used to reconstruct the final super-resolution image in the high-resolution eigenface space.
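The clipping rule of equation (16) is straightforward to implement; this is a minimal sketch, with the function name, the default value of $a$, and the toy inputs all illustrative assumptions:

```python
import numpy as np

def clip_principal_components(w_h, eigvals, a=3.0):
    # Equation (16): limit each w_h(i) to [-a*sqrt(lambda_i), a*sqrt(lambda_i)].
    bound = a * np.sqrt(eigvals)
    return np.clip(w_h, -bound, bound)

# Toy example: with a = 3, the second coefficient exceeds its bound 3*sqrt(1) = 3
# and the third exceeds 3*sqrt(4) = 6, so both are clipped to the bound.
w_h = np.array([0.5, -4.0, 10.0])
eigvals = np.array([1.0, 1.0, 4.0])
w_h_clipped = clip_principal_components(w_h, eigvals)   # -> [0.5, -3.0, 6.0]
```

`np.clip` with array-valued bounds applies a separate limit per component, matching the per-$i$ form of (16).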
Finally, the final face-image super-resolution result is obtained through image reconstruction.
Although eigentransformation is taken as the example face super-resolution method in this embodiment, those skilled in the art can readily implement face aging simulation using other learning-based face super-resolution methods.
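The eigentransformation pipeline described above (PCA on the low-resolution training set, basis swap to the paired high-resolution set, reconstruction) can be sketched end to end as follows; the image sizes, the synthetic training pairs, and all helper names are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

M, hr_dim, lr_dim = 8, 64, 16   # assumed: M pairs, 8x8 high-res, 4x4 low-res

# Synthetic paired training sets for one age group (stand-ins for real faces).
H_imgs = rng.normal(size=(M, hr_dim))
L_imgs = np.array([h.reshape(4, 2, 4, 2).mean(axis=(1, 3)).ravel()
                   for h in H_imgs])      # low-res via 2x2 average pooling

m_l, m_h = L_imgs.mean(axis=0), H_imgs.mean(axis=0)
Lmat = (L_imgs - m_l).T                   # columns: mean-subtracted low-res faces
Hmat = (H_imgs - m_h).T                   # columns: mean-subtracted high-res faces

# PCA on the low-res set via the small M x M Gram matrix (eigenface trick).
eigvals, V_l = np.linalg.eigh(Lmat.T @ Lmat)
keep = eigvals > 1e-10                    # drop the null direction from centering
eigvals, V_l = eigvals[keep], V_l[:, keep]
E_l = Lmat @ V_l / np.sqrt(eigvals)       # unit-norm low-res eigenfaces

def eigentransform(x_l):
    w_l = E_l.T @ (x_l - m_l)             # project input onto low-res eigenfaces
    c = V_l @ (w_l / np.sqrt(eigvals))    # c = V_l Lambda_l^{-1/2} w_l
    return Hmat @ c + m_h                 # x_h = sum_i c_i h_i' + m_h

# Sanity check: a training input reproduces its paired high-res image.
x_h = eigentransform(L_imgs[0])
assert np.allclose(x_h, H_imgs[0])
```

The final training-pair check holds because, for a training input, the recovered coefficient vector weights that pair's column fully (the centering direction contributes nothing to either mean-subtracted matrix).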
Figs. 3 and 4 show the face aging simulation results obtained with the method of the present invention and the eigentransformation face super-resolution method; the face aging database used is FG-NET (Face and Gesture Recognition Research Network). The FG-NET face aging database, established as part of a European Union FG-NET project, is among the few face databases suited to face aging simulation and age estimation research that is published and freely available to all researchers. The published part A of the database contains 1002 face images of 82 individuals, with ages ranging from 0 to 62 years. Fig. 3 shows aging simulation results from young to middle-aged faces: 4 groups (2 male and 2 female), drawn from the aging simulations of 20 test faces, appear as 4 columns, where row 1 is the original young test image, row 2 is the young-to-middle-aged aging simulation result, and row 3 is the real face image of the same individual in middle or old age. Fig. 4 shows face aging simulation results from middle-aged to young: again 4 groups (2 male and 2 female) from the simulations of 20 test faces appear as 4 columns, where row 1 is the original middle-aged or elderly test image, row 2 is the middle-aged-to-young aging simulation result, and row 3 is the real face image of the same individual when young. The experimental results show that the method simulates face aging realistically and effectively.

Claims (2)

1. A face aging simulation method based on face super-resolution processing, comprising the following steps:
step one: normalize the face image;
the face image is normalized, including histogram equalization and image size normalization;
step two: train a super-resolution method for each age group;
a face super-resolution algorithm is trained on face data sets of different age groups, for later selection;
the training process constructs an eigenface space with the PCA algorithm; the finer the age-group division, i.e. the smaller the age span within each group, the more accurate the face aging simulation result;
in the feature space extracted by PCA, information components of different frequencies are uncorrelated, and the number of eigenfaces is chosen so as to extract face information maximally while removing noise;
step three: reduce the resolution of the input image;
the input image is down-sampled to low resolution, discarding the texture information of the face image, in preparation for step four, where a face super-resolution method reconstructs it by filling in face texture information of a specified age group;
characterized in that,
step four: perform face super-resolution processing for a specified age group;
the trained face super-resolution method fills face texture information of the specified age into the low-resolution input face image, thereby obtaining a face aging simulation image;
the face super-resolution method is a learning-based face super-resolution method, and eigentransformation is adopted in the invention;
the eigentransformation super-resolution algorithm proceeds as follows:
first, a parameter vector $[c_1, c_2, \ldots, c_M]^T$ is obtained by applying PCA to the input low-resolution face image;
the parameter vector $[c_1, c_2, \ldots, c_M]^T$ is obtained by projecting the input image into the eigenface space, where $c_1, c_2, \ldots, c_M$ are the projection coordinates of the input image in the eigenface space;
then, keeping the parameter vector unchanged, the eigenvector set obtained by training on low-resolution face images is replaced with the corresponding eigenvector set obtained by training on high-resolution face images, yielding a preliminary approximate super-resolution result:
$$\vec{r}_l = L V_l \Lambda_l^{-\frac{1}{2}} \vec{w}_l + \vec{m}_l = L\vec{c} + \vec{m}_l$$

where the subscript $l$ denotes low resolution, $\vec{r}_l$ represents the reconstructed low-resolution face image, $\vec{m}_l$ is the average face of the face data set, $L$ is the matrix obtained by subtracting the average face from each face image vector, $V_l$ is the eigenvector matrix, $\Lambda_l$ is the eigenvalue matrix, and $\vec{c} = V_l \Lambda_l^{-\frac{1}{2}} \vec{w}_l = [c_1, c_2, \ldots, c_M]^T$ is the parameter vector; the above formula is rewritten as:

$$\vec{r}_l = L\vec{c} + \vec{m}_l = \sum_{i=1}^{M} c_i \vec{l}_i' + \vec{m}_l$$
where $c_i$, for $i = 1, \ldots, M$, reflects the contribution of each image in the training library to the reconstructed image, i.e. its weight, and $\vec{l}_i'$ denotes the $i$-th mean-subtracted low-resolution face image; each low-resolution face image $\vec{l}_i'$ is then replaced with the corresponding high-resolution image $\vec{h}_i'$, and the average face $\vec{m}_l$ of the face data set is replaced with the high-resolution average face $\vec{m}_h$, obtaining:

$$\vec{x}_h = \sum_{i=1}^{M} c_i \vec{h}_i' + \vec{m}_h$$

where $\vec{x}_h$ is the desired high-resolution image reconstructed from the low-resolution input;
then, one PCA projection transformation is carried out in the high-resolution eigenface space, and parameter constraints are imposed on the resulting principal components;
finally, the final face-image super-resolution result is obtained through image reconstruction.
2. The face aging simulation method based on face super-resolution processing according to claim 1, characterized in that: in the PCA projection transformation described in step four, the image obtained by the PCA projection transformation satisfies two constraints:
first, blurring the image yields the original low-resolution image;
second, at high resolution the image resembles a human face image.
CN2008102266292A 2008-11-18 2008-11-18 Human face aging analogue method based on face super-resolution process Expired - Fee Related CN101425138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102266292A CN101425138B (en) 2008-11-18 2008-11-18 Human face aging analogue method based on face super-resolution process


Publications (2)

Publication Number Publication Date
CN101425138A true CN101425138A (en) 2009-05-06
CN101425138B CN101425138B (en) 2011-05-18

Family

ID=40615743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102266292A Expired - Fee Related CN101425138B (en) 2008-11-18 2008-11-18 Human face aging analogue method based on face super-resolution process

Country Status (1)

Country Link
CN (1) CN101425138B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101551902B (en) * 2009-05-15 2011-07-27 武汉大学 A characteristic matching method for compressing video super-resolution based on learning
US8818050B2 (en) 2011-12-19 2014-08-26 Industrial Technology Research Institute Method and system for recognizing images
CN104133899A (en) * 2014-08-01 2014-11-05 百度在线网络技术(北京)有限公司 Method and device for generating picture search library and method and device for searching for picture
CN104851123A (en) * 2014-02-13 2015-08-19 北京师范大学 Three-dimensional human face change simulation method
CN104978554A (en) * 2014-04-08 2015-10-14 联想(北京)有限公司 Information processing method and electronic equipment
CN105869226A (en) * 2016-06-02 2016-08-17 南京安智易达智能科技有限公司 Face-recognition-based automatic roll-call system and method for prisons
CN105956627A (en) * 2016-05-13 2016-09-21 中国航空工业集团公司西安飞机设计研究所 Method for recognizing display screen of avionics system
CN107491747A (en) * 2017-08-08 2017-12-19 西南大学 Face Forecasting Methodology based on regression analysis and wavelet transformation
CN108140110A (en) * 2015-09-22 2018-06-08 韩国科学技术研究院 Age conversion method based on face's each position age and environmental factor, for performing the storage medium of this method and device
CN108171167A (en) * 2017-12-28 2018-06-15 百度在线网络技术(北京)有限公司 For exporting the method and apparatus of image
CN109145135A (en) * 2018-08-03 2019-01-04 厦门大学 A kind of human face portrait aging method based on principal component analysis
CN109544714A (en) * 2018-10-16 2019-03-29 广州师盛展览有限公司 A kind of people face identification based on biological characteristic is registered system
CN109994206A (en) * 2019-02-26 2019-07-09 华为技术有限公司 A kind of appearance prediction technique and electronic equipment
CN110147458A (en) * 2019-05-24 2019-08-20 涂哲 A kind of photo screening technique, system and electric terminal
CN110232799A (en) * 2019-06-24 2019-09-13 秒针信息技术有限公司 The method and device of pursuing missing object
CN112581356A (en) * 2020-12-14 2021-03-30 广州岸边网络科技有限公司 Portrait transformation processing method, device and storage medium


Also Published As

Publication number Publication date
CN101425138B (en) 2011-05-18

Similar Documents

Publication Publication Date Title
CN101425138B (en) Human face aging analogue method based on face super-resolution process
Yang et al. Learning face age progression: A pyramid architecture of gans
CN104851123B (en) A kind of three-dimensional face change modeling method
Wang et al. Attentive normalization for conditional image generation
Wang et al. Combining tensor space analysis and active appearance models for aging effect simulation on face images
Du et al. Face aging simulation and recognition based on NMF algorithm with sparseness constraints
Duan et al. 3D face reconstruction from skull by regression modeling in shape parameter spaces
CN103649987A (en) Face impression analysis method, cosmetic counseling method, and face image generation method
CN111950430B (en) Multi-scale dressing style difference measurement and migration method and system based on color textures
CN105550989B (en) The image super-resolution method returned based on non local Gaussian process
CN113781640A (en) Three-dimensional face reconstruction model establishing method based on weak supervised learning and application thereof
CN113362924A (en) Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system
CN110852935A (en) Image processing method for human face image changing with age
CN111814891A (en) Medical image synthesis method, device and storage medium
Lanitis et al. Towards automatic face identification robust to ageing variation
Bastanfard et al. Toward anthropometrics simulation of face rejuvenation and skin cosmetic
JP2004102359A (en) Image processing device, method and program
Sharma et al. Comparative analysis of CycleGAN and AttentionGAN on face aging application
Bian et al. Conditional adversarial consistent identity autoencoder for cross-age face synthesis
CN113947520A (en) Method for realizing face makeup conversion based on generation of confrontation network
Fang et al. Facial makeup transfer with GAN for different aging faces
CN117593178A (en) Virtual fitting method based on feature guidance
CN116758220A (en) Single-view three-dimensional point cloud reconstruction method based on conditional diffusion probability model
CN115050067B (en) Facial expression construction method and device, electronic equipment, storage medium and product
Mena-Chalco et al. 3D human face reconstruction using principal components spaces

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110518

Termination date: 20151118

EXPY Termination of patent right or utility model