GB2416098A - Facial image generation system - Google Patents

Facial image generation system

Info

Publication number
GB2416098A
GB2416098A (application GB0415390A)
Authority
GB
United Kingdom
Prior art keywords
vector
individuals
group
individual
age range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0415390A
Other versions
GB0415390D0 (en)
Inventor
Christopher Solomon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Kent at Canterbury
Original Assignee
University of Kent at Canterbury
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Kent at Canterbury filed Critical University of Kent at Canterbury
Priority to GB0415390A priority Critical patent/GB2416098A/en
Publication of GB0415390D0 publication Critical patent/GB0415390D0/en
Publication of GB2416098A publication Critical patent/GB2416098A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Abstract

A facial image generation system uses a first vector representation of a first facial image of a first individual having an age in a first age range, second vector representations of a plurality of facial images of a first group of individuals having ages in the first age range, third vector representations of a plurality of facial images of a second group of individuals having ages in a second age range which does not overlap with the first age range, and fourth vector representations of a plurality of facial images of a third group of individuals which are related to the individuals of the second group and of a different generation to the individuals of the second group. Data derived from the first, second, third and fourth vector representations is processed to generate an output vector representation which comprises a vector representation of an estimate of the facial image of the first individual in the second age range. This system uses samples of facial images of the starting age (the first age range) and the target age (the second age range) in order to model the ageing process on the vector representations of a given facial image. In addition, samples of facial images are used to derive a relationship between the facial images of different generations of the same family. This relationship can then be used to ensure that an ageing algorithm (to an older or to a younger age) can be applied which is consistent both with the general change in appearance between the starting age and the target age and with the facial appearance of other family members of the first individual.

Description


FACIAL IMAGE GENERATION SYSTEM

This invention relates to the generation of facial images, using digital representations of facial features. In particular, the invention relates to the generation of a facial image which is based on a known facial image and to which an ageing algorithm has been applied.

There has been much work into the digital representation of facial images. One known method involves use of a statistical model of facial appearance. This model captures both shape and texture properties of a human face, and an individual face is represented as a highly compact vector of parameters. A close resemblance to a real face can be provided with as few as 25 parameters.

The known statistical model is described further in the article by T.F. Cootes, G.J. Edwards and C.J.Taylor entitled "Active Appearance Models", IEEE PAMI, Vol.23, No.6, pp.681-685, 2001.

The representation of the human face as a vector of parameters opens up many possibilities.

One proposed use of the statistical model is to generate a resemblance of a facial image from memory, for example for use by the police. In this proposed use of the statistical model, an evolutionary algorithm is used to iteratively modify an initial facial image (which may be randomly generated) to reach a target facial image.

This invention is directed to the processing of a digital image of a human face so as to change the apparent age of the face. There are a number of possible applications for such a process.

For missing persons applications, a plausible and near photo-realistic facial appearance can be generated of individuals for whom no current photographic images exist. Such a process can also be used to produce a plausible and near photo-realistic facial appearance of individuals as they were at an earlier age. Such a method could be used, for example, in the attempt to convict or exonerate individuals accused of war crimes in an earlier epoch.

The effect of ageing on a human face is not, however, easily predicted or easily modelled, and a reliable method of processing facial images to represent ageing has not been achieved. In particular, the ageing effect as a child matures to adulthood is particularly difficult to model.

According to the invention, there is provided a facial image generation system, comprising: means for receiving a first vector representation of a first facial image of a first individual having an age in a first age range; means for receiving data derived from second vector representations of a plurality of facial images of a first group of individuals having ages in the first age range; means for receiving data derived from third vector representations of a plurality of facial images of a second group of individuals having ages in a second age range which does not overlap with the first age range; means for receiving data derived from fourth vector representations of a plurality of facial images of a third group of individuals which are related to the individuals of the second group and of a different generation to the individuals of the second group; and a processor for processing the data derived from the first, second, third and fourth vector representations to generate an output vector representation which comprises a vector representation of an estimate of the facial image of the first individual in the second age range.

This apparatus uses samples of facial images of the starting age (the first age range) and the target age (the second age range) in order to model the ageing process on the vector representations of a given facial image. In addition, samples of facial images are used to derive a relationship between the facial images of different generations of the same family.

This relationship can then be used to ensure that an ageing algorithm (to an older or to a younger age) can be applied which is consistent both with the general change in appearance between the starting age and the target age and with the facial appearance of other family members of the first individual.

The second, third and fourth vector representations are preferably of individuals having the same gender and of the same racial origin as the first individual. The second and third vector representations are preferably of individuals not related to the first individual, and these may therefore comprise general samples which can be used for processing any starting image of the corresponding age range. The fourth group of individuals preferably include a parent or child of the first individual. The image for this parent or child can then be used to ensure that the ageing algorithm also takes into account the way ageing effects are specific to individual families.

The second age range may be higher than the first age range, so that an image of a child can for example be used to generate an image of an adult. For example, the first age range can be a pre-pubescent age range, and the second age range can be a pubescent or adult age range.

The processor is adapted to apply a transformation to the first vector representation using a first scaling vector representing the difference between an average of the second vector representations and an average of the third vector representations. This difference can be considered to represent a general ageing effect. The processor is preferably further adapted to apply a transformation to the first vector representation using a second scaling vector representing the difference between the vector representation of the parent or child of the first individual and the first vector representation. This second scaling vector represents a difference between different generations of the family of the first individual. The two scaling vectors thus enable family resemblances to be maintained whilst also applying general ageing effects.

The transformation may comprise a linear transformation using weighted values of the first and second scaling vectors.

For example, the weighting values can be selected to maximise the product of (i) the probability of the output vector representation belonging to the second group of individuals with (ii) the probability of the output vector representation being a parent or child of the known parent or child of the first individual. This aims to ensure that the generated facial image is probabilistically a good fit within the second group of individuals of the target age, and also probabilistically remains a good fit within the family of the first individual.

The invention also provides a method of generating a facial image, comprising: receiving a first vector representation of a first facial image of a first individual having an age in a first age range; receiving data derived from second vector representations of a plurality of facial images of a first group of individuals having ages in the first age range; receiving data derived from third vector representations of a plurality of facial images of a second group of individuals having ages in a second age range which does not overlap with the first age range; receiving data derived from fourth vector representations of a plurality of facial images of a third group of individuals which are related to the individuals of the second group and of a different generation to the individuals of the second group; and processing the data derived from the first, second, third and fourth vector representations to generate an output vector representation which comprises a vector representation of an estimate of the facial image of the first individual in the second age range.

An example of the invention will now be described in detail with reference to the accompanying drawings, in which: Figure 1 shows the data used in the method and apparatus of the invention; Figure 2 shows the information derived from the input data in the method and apparatus of the invention; and Figure 3 shows an apparatus of the invention.

The invention provides a method of producing a statistically optimal estimate of the facial appearance of a first individual at a different age.

An original face is thus to be age-transformed from its current age $A_{NOW}$ to a target age $A_T$. To produce such an optimal estimate, the invention utilises the following data in digital form:

- The original face. This group (of one) will be referred to by the symbol O.
- A selection of faces of approximately the same age, gender and racial origin as the original face. This group is referred to by the symbol S.
- A selection of faces corresponding to the parent(s) or sibling(s) of the group S. These faces therefore represent individuals of a different generation to the faces in group S. This group is referred to by the symbol P.
- A selection of faces of the same gender and racial origin as the original face having the (approximate) target age $A_T$. This group is referred to by the symbol T.

This data is schematically summarised in Figure 1.

The first step in the procedure of the invention is to build a statistical appearance model using all the faces in the groups O, P, T and S. The precise method for calculating such an appearance model is described in detail in the article by T.F. Cootes, G.J. Edwards and C.J. Taylor entitled "Active Appearance Models", referenced above. The central result of constructing such an appearance model is that each example face in any of the groups O, P, T and S can be parametrically encoded as a compact vector of numerical parameters which retains all the important shape and textural information in the facial appearance. There will typically be 25-60 such parameters.

Making the general assumption that N such parameters are sufficient to encode the facial appearance of the faces to the required accuracy, an appearance vector can be denoted by $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$. Each appearance vector, consisting of N numerical parameters, may be considered to occupy a certain location in an abstract parameter space of N dimensions, the magnitude of the kth component $x_k$ thereby corresponding to the extension along the kth axis of this abstract space.

Altering any of the components in an appearance vector thus moves to a different position in this abstract space and alters the facial appearance of the individual. The parameter space effectively defines a very large number of different facial appearances, each of which corresponds to a specific point location encoded by the vector $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$. This invention provides a means to move from one location in the parameter space to another location, effectively ageing the facial appearance in a way which is consistent with information about parental similarity in facial appearance and typical ageing effects for the subject's age, gender and racial origin.

For this explanation: The appearance vector of the original face (i.e. the single subject in group O) is denoted by $\mathbf{c} = [c_1, c_2, \ldots, c_N]^T$. The appearance vector of the transformed face is denoted by $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$. This is the output of the system of the invention.

The appearance vector of a parent (or child, if the aim is to reduce the age) of the subject is denoted by $\mathbf{x}_P = [x_{P1}, x_{P2}, \ldots, x_{PN}]^T$.

$\bar{\mathbf{x}}_T = \frac{1}{M}\sum_{k=1}^{M}(\mathbf{x}_T)_k$ represents the average or prototype appearance vector of the target age group T, where $(\mathbf{x}_T)_k$ is the appearance vector of the kth member of group T. Similarly, $\bar{\mathbf{x}}_S = \frac{1}{L}\sum_{k=1}^{L}(\mathbf{x}_S)_k$ represents the average or prototype appearance vector of the current age group S, where $(\mathbf{x}_S)_k$ is the appearance vector of the kth member of group S. Using these definitions, it is possible to represent the transformation process conceptually by a vector diagram as shown in Figure 2.
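The prototype vectors are simple per-parameter averages over each group. A short numpy sketch (toy data and shapes are illustrative) of computing them:

```python
import numpy as np

# Toy appearance vectors: one row per individual, one column per model parameter.
rng = np.random.default_rng(1)
group_T = rng.normal(loc=1.0, size=(40, 25))   # target age group T (M = 40)
group_S = rng.normal(loc=0.0, size=(40, 25))   # current age group S (L = 40)

x_bar_T = group_T.mean(axis=0)   # prototype appearance vector of group T
x_bar_S = group_S.mean(axis=0)   # prototype appearance vector of group S
```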

The aim is to transform c to a new vector x in a way which gives optimal results. The preferred implementation of the invention provides a linear transformation of the form:

$$\mathbf{x} = \mathbf{c} + w_1\,\mathbf{s} + w_2\,\mathbf{v} \qquad (1)$$

where the free parameters to be determined are $w_1$ and $w_2$. This transformation models the ageing process as a weighted combination of two basic effects: i) The tendency for an individual to resemble a parent or sibling, as defined by the vector $\mathbf{s}$.

ii) The tendency for an individual to follow a prototypical dominant ageing process defined by representative samples of people of the same gender and ethnic origin at both the current and target ages. This is defined by the vector $\mathbf{v}$.

As shown in Figure 2, the vector v is a normalised vector representing the difference between the average vectors of groups S and T. The vector s is the difference between the starting vector c and the known parent (or sibling) vector.
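Under these definitions the two directions can be computed in a few lines. This sketch assumes $\mathbf{s}$ points from the starting vector towards the parent (the sign convention is not fixed by the text above) and normalises $\mathbf{v}$ as described for Figure 2; the function name is illustrative:

```python
import numpy as np

def ageing_directions(c, x_p, x_bar_S, x_bar_T):
    """Return (s, v): the family-resemblance vector and the
    normalised prototypical ageing direction."""
    s = x_p - c                    # difference between parent and starting vector
    d = x_bar_T - x_bar_S          # drift of the group prototype from S to T
    v = d / np.linalg.norm(d)      # normalised ageing direction (Figure 2)
    return s, v

# Tiny worked example with 4 parameters
c = np.zeros(4)
x_p = np.array([1.0, 0.0, 0.0, 0.0])
s, v = ageing_directions(c, x_p, np.zeros(4), np.array([0.0, 2.0, 0.0, 0.0]))
```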

To determine the appropriate weighting parameters $w_1$ and $w_2$, the following criteria for optimality are selected: The transformed vector $\mathbf{x}$ (which represents the facial image after the change in age) should, as far as possible, be both a typical member of the target age group T and a typical member of the parent (sibling) to child distribution as determined by corresponding pairs in the groups P and T. Specifically, the density functions for both groups are estimated and an estimator for $\mathbf{x}$ is obtained which is maximum likelihood for simultaneously originating from both distributions.

It is a direct result of the calculation of a statistical appearance model as proposed (i.e. on groups O, P, S and T together) that the distribution of the appearance model parameters over the whole space is independent, multivariate normal. Indeed, the independent distribution of the model parameters is precisely the aim of the calculation. Thus the distribution of the model parameters (here denoted generally by $\mathbf{y} = [y_1, y_2, \ldots, y_N]^T$) over the whole space is of the form:

$$p(\mathbf{y}) = \frac{1}{(2\pi)^{N/2}\,|C_y|^{1/2}} \exp\left\{-\frac{1}{2}(\mathbf{y}-\bar{\mathbf{y}})^T C_y^{-1} (\mathbf{y}-\bar{\mathbf{y}})\right\} \qquad (2)$$

where $C_y$ is a diagonal covariance matrix.

The respective distributions of the parameters over the two sub-spaces described above (i.e. the target age group T and the parent-to-child differences at the target age) both take a similar multivariate normal form but are not independent. Thus, the distribution over the target age group is:

$$p_T(\mathbf{x}) = \frac{1}{(2\pi)^{N/2}\,|C_T|^{1/2}} \exp\left\{-\frac{1}{2}(\mathbf{x}-\mu_T)^T C_T^{-1} (\mathbf{x}-\mu_T)\right\} \qquad (3)$$

where $\mu_T$ and $C_T$ are the mean vector and covariance matrix of the target group T respectively.

The covariance matrix CT in this case will not be of diagonal form.

This equation essentially provides the probability of a given vector x being a member of the group T based on the mean and covariance values.

Similarly, the distribution for the difference in appearance vector between members of the target age group and their parents, $\Delta = \mathbf{x} - \mathbf{x}_P$, is given by:

$$p_\Delta(\Delta) = \frac{1}{(2\pi)^{N/2}\,|C_\Delta|^{1/2}} \exp\left\{-\frac{1}{2}(\Delta-\mu_\Delta)^T C_\Delta^{-1} (\Delta-\mu_\Delta)\right\} \qquad (4)$$

where $\mu_\Delta$ and $C_\Delta$ are the mean vector and covariance matrix respectively.

This equation essentially provides the probability of a given difference vector being a suitable difference between a member of age group T and their parent (or child).
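Equations 3 and 4 are both ordinary multivariate normal densities, so either probability can be evaluated directly from a mean vector and covariance matrix. A minimal numpy implementation (the function name is an illustrative assumption):

```python
import numpy as np

def mvn_density(x, mu, C):
    """Multivariate normal density, as in equations 3 and 4:
    how typical x is of a group with mean mu and covariance C."""
    N = len(mu)
    diff = np.asarray(x, float) - mu
    norm = (2.0 * np.pi) ** (N / 2) * np.sqrt(np.linalg.det(C))
    # Solve C z = diff rather than forming the explicit inverse
    return float(np.exp(-0.5 * diff @ np.linalg.solve(C, diff)) / norm)

# Density at the mean of a 3-D standard normal
p = mvn_density(np.zeros(3), np.zeros(3), np.eye(3))
```

At the mean of an N-dimensional standard normal the exponent vanishes, so the density is just the normalising constant $(2\pi)^{-N/2}$.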

It is important to recognise that the free parameters of these two density functions, $\mu_T$, $C_T$, $\mu_\Delta$ and $C_\Delta$, are not known a priori and must be estimated from the example faces available in a training sample. These values can all be fixed, however, so that they are stored for use in many different ageing operations; provided samples are stored for the appropriate age ranges, all that is needed to implement the ageing algorithm is the starting image and an image of the parent or child.

The maximum likelihood estimator of the facial appearance at the target age (where the subject has a parent with appearance vector $\mathbf{x}_P$) is that vector $\mathbf{x}$ which maximises the product $p_T(\mathbf{x})\,p_\Delta(\Delta)$ of the two density functions defined by equations 3 and 4. The exponential nature of these distributions allows this to be equivalently expressed as a minimisation problem, namely, to find the appearance vector $\mathbf{x}$ which minimises the cost function:

$$Q = -2\ln\left[p_T(\mathbf{x})\,p_\Delta(\Delta)\right] \qquad (5)$$

Substituting equations 1, 3 and 4 into equation 5, a function of the two free variables $w_1$ and $w_2$ is obtained:

$$Q(w_1, w_2) = k\left[(\mathbf{x}-\mu_T)^T C_T^{-1} (\mathbf{x}-\mu_T) + (\Delta-\mu_\Delta)^T C_\Delta^{-1} (\Delta-\mu_\Delta)\right] \qquad (6)$$

where $k$ is a multiplicative constant and $\Delta = \mathbf{x} - \mathbf{x}_P$.

To find the optimal solution for $\mathbf{x}$, standard methods of calculus can be used. In particular, partial derivatives of Q with respect to the variables $w_1$ and $w_2$ are obtained, and these are set to zero to obtain a pair of simultaneous equations in $w_1$ and $w_2$ which can be solved trivially.

These equations are:

$$a w_1 + b w_2 = y_1, \qquad b w_1 + c w_2 = y_2 \qquad (7)$$

where the coefficients $a$, $b$, $c$, $y_1$ and $y_2$ are known quantities given by:

$$a = \mathbf{s}^T G\,\mathbf{s}, \quad b = \mathbf{s}^T G\,\mathbf{v}, \quad c = \mathbf{v}^T G\,\mathbf{v}, \quad \text{with } G = C_T^{-1} + C_\Delta^{-1}$$

$$y_1 = \mathbf{s}^T C_T^{-1}\left[\mu_T - \mathbf{c}\right] + \mathbf{s}^T C_\Delta^{-1}\left[\boldsymbol{\omega} - \mathbf{c}\right] \qquad (8)$$

$$y_2 = \mathbf{v}^T C_T^{-1}\left[\mu_T - \mathbf{c}\right] + \mathbf{v}^T C_\Delta^{-1}\left[\boldsymbol{\omega} - \mathbf{c}\right]$$

with $\boldsymbol{\omega} = \mathbf{x}_P + \mu_\Delta$. In this way, the optimal estimates of the weighting parameters $w_1$ and $w_2$ are obtained, which can be substituted into equation 1 to produce the transformed (aged) facial appearance.
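Putting equations 1 and 5-8 together, the whole transformation reduces to inverting the two covariance matrices and solving a 2x2 linear system. The sketch below follows that recipe; variable names are illustrative (C_D stands for $C_\Delta$, c2 for the coefficient "c" of equation 7, which is distinct from the starting vector $\mathbf{c}$):

```python
import numpy as np

def age_transform(c, x_p, s, v, mu_T, C_T, mu_D, C_D):
    """Solve equations (7)-(8) for w1, w2 and apply equation (1)."""
    CTi = np.linalg.inv(C_T)        # C_T^{-1}
    CDi = np.linalg.inv(C_D)        # C_Delta^{-1}
    G = CTi + CDi
    omega = x_p + mu_D              # omega = x_P + mu_Delta
    a = s @ G @ s
    b = s @ G @ v
    c2 = v @ G @ v
    y1 = s @ CTi @ (mu_T - c) + s @ CDi @ (omega - c)
    y2 = v @ CTi @ (mu_T - c) + v @ CDi @ (omega - c)
    w1, w2 = np.linalg.solve(np.array([[a, b], [b, c2]]),
                             np.array([y1, y2]))
    return c + w1 * s + w2 * v      # equation (1): the aged appearance vector
```

Since the patent notes that the group statistics can be fixed in advance, $C_T^{-1}$ and $C_\Delta^{-1}$ could be precomputed and cached across many ageing operations, leaving only the 2x2 solve per query.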

Figure 3 shows the apparatus 10 of the invention. The method of the invention is essentially implemented as software running on a suitable processor. As shown, the apparatus of the invention needs to receive as inputs the mean vectors for the groups S and T and for the difference vectors $\Delta$, the covariance matrices for the group T and the difference vectors $\Delta$, the starting vector $\mathbf{c}$, and the corresponding parent vector $\mathbf{x}_P$. Of these, all of the data may be available in advance and stored within the system, other than the starting vector and the parent vector.

The apparatus may have a display for displaying the obtained vector x.

The generation of the appearance vectors corresponding to known images (such as $\mathbf{c}$) can be achieved in a known manner, so that image vectors can be created by the system starting from known physical images of the individuals entered into the system. The system of the invention may additionally include the capability to operate an iterative process which arrives at a good likeness on a display to a known image and generates the appearance vector for it; systems for generating the vectors in this way are known.

Various modifications will be apparent to those skilled in the art.

Claims (23)

1. A facial image generation system, comprising: means for receiving a first vector representation of a first facial image of a first individual having an age in a first age range; means for receiving data derived from second vector representations of a plurality of facial images of a first group of individuals having ages in the first age range; means for receiving data derived from third vector representations of a plurality of facial images of a second group of individuals having ages in a second age range which does not overlap with the first age range; means for receiving data derived from fourth vector representations of a plurality of facial images of a third group of individuals which are related to the individuals of the second group and of a different generation to the individuals of the second group; and a processor for processing the data derived from the first, second, third and fourth vector representations to generate an output vector representation which comprises a vector representation of an estimate of the facial image of the first individual in the second age range.
2. A system as claimed in claim 1, wherein the second, third and fourth vector representations are of individuals having the same gender as the first individual.
3. A system as claimed in claim 1 or 2, wherein the second, third and fourth vector representations are of individuals of the same racial origin as the first individual.
4. A system as claimed in any preceding claim, wherein the second and third vector representations are of individuals not related to the first individual.
5. A system as claimed in any preceding claim, wherein the fourth group of individuals includes a parent or child of the first individual.
6. A system as claimed in any preceding claim, wherein the second age range is higher than the first age range.
7. A system as claimed in any preceding claim, wherein the processor is adapted to apply a transformation to the first vector representation using a first scaling vector representing the difference between an average of the second vector representations and an average of the third vector representations.
8. A system as claimed in claim 7 and claim 5, wherein the processor is further adapted to apply a transformation to the first vector representation using a second scaling vector representing the difference between the vector representation of the parent or child of the first individual and the first vector representation.
9. A system as claimed in claim 8, wherein the transformation comprises a linear transformation using weighted values of the first and second scaling vectors.
10. A system as claimed in claim 9, wherein the first scaling vector is normalised.
11. A system as claimed in claim 9 or 10, wherein the weighting values are selected to maximise the product of the probability of the output vector representation belonging to the second group of individuals with the probability of the output vector representation being a family member of the parent or child of the first individual.
12. A system as claimed in any preceding claim, wherein the first age range is a pre-pubescent age range, and the second age range is a pubescent or adult age range.
13. A method of generating a facial image, comprising: receiving a first vector representation of a first facial image of a first individual having an age in a first age range; receiving data derived from second vector representations of a plurality of facial images of a first group of individuals having ages in the first age range; receiving data derived from third vector representations of a plurality of facial images of a second group of individuals having ages in a second age range which does not overlap with the first age range; receiving data derived from fourth vector representations of a plurality of facial images of a third group of individuals which are related to the individuals of the second group and of a different generation to the individuals of the second group; and processing the data derived from the first, second, third and fourth vector representations to generate an output vector representation which comprises a vector representation of an estimate of the facial image of the first individual in the second age range.
14. A method as claimed in claim 13, wherein the second, third and fourth vector representations are of individuals having the same gender as the first individual.
15. A method as claimed in claim 13 or 14, wherein the second, third and fourth vector representations are of individuals of the same racial origin as the first individual.
16. A method as claimed in any one of claims 13 to 15, wherein the second and third vector representations are of individuals not related to the first individual.
17. A method as claimed in any one of claims 13 to 16, wherein the fourth group of individuals includes a parent or child of the first individual.
18. A method as claimed in any one of claims 13 to 17, wherein processing the data comprises applying a transformation to the first vector representation using a first scaling vector representing the difference between an average of the second vector representations and an average of the third vector representations.
19. A method as claimed in claim 18 and claim 17, wherein processing the data further comprises applying a transformation to the first vector representation using a second scaling vector representing the difference between the vector representation of the parent or child of the first individual and the first vector representation.
20. A method as claimed in claim 19, wherein applying a transformation comprises applying a linear transformation using weighted values of the first and second scaling vectors.
21. A method as claimed in claim 20, further comprising selecting the weighting values to maximise the product of the probability of the output vector representation belonging to the second group of individuals with the probability of the output vector representation being a family member of the parent or child of the first individual.
22. A computer program comprising code means adapted to perform all of the steps of any one of claims 13 to 21 when said program is run on a computer.
23. A computer program as claimed in claim 22 embodied on a computer readable medium.
GB0415390A 2004-07-08 2004-07-08 Facial image generation system Withdrawn GB2416098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0415390A GB2416098A (en) 2004-07-08 2004-07-08 Facial image generation system

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB0415390A GB2416098A (en) 2004-07-08 2004-07-08 Facial image generation system
PCT/GB2005/002669 WO2006005917A1 (en) 2004-07-08 2005-07-06 Plausible ageing of the human face
EP20050757848 EP1766582A1 (en) 2004-07-08 2005-07-06 Plausible ageing of the human face

Publications (2)

Publication Number Publication Date
GB0415390D0 (en) 2004-08-11
GB2416098A (en) 2006-01-11

Family

ID=32865702

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0415390A Withdrawn GB2416098A (en) 2004-07-08 2004-07-08 Facial image generation system

Country Status (3)

Country Link
EP (1) EP1766582A1 (en)
GB (1) GB2416098A (en)
WO (1) WO2006005917A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877054A (en) * 2009-11-23 2010-11-03 北京中星微电子有限公司 Method and device for determining age of face image
US20180276869A1 (en) 2017-03-21 2018-09-27 The Procter & Gamble Company Methods For Age Appearance Simulation
US20180276883A1 (en) 2017-03-21 2018-09-27 Canfield Scientific, Incorporated Methods and apparatuses for age appearance simulation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4276570A (en) * 1979-05-08 1981-06-30 Nancy Burson Method and apparatus for producing an image of a person's face at a different age
JP3936156B2 (en) * 2001-07-27 2007-06-27 株式会社国際電気通信基礎技術研究所 Image processing apparatus, image processing method, and image processing program
JP3920747B2 (en) * 2002-09-04 2007-05-30 株式会社国際電気通信基礎技術研究所 Image processing device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
9th Int. Conf. in Central Europe on Computer Graphics, Visualization and Computer Vision 2001, February 5-9, 2001, "Moving Facial Image Transformations Based on Static 2D Prototypes" *
IEEE Computer Graphics and Applications, Volume 15, No. 5, pages 70-76 *
The Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, Vol. 1, pages 131-136 *

Also Published As

Publication number Publication date
GB0415390D0 (en) 2004-08-11
EP1766582A1 (en) 2007-03-28
WO2006005917A1 (en) 2006-01-19

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused after publication under section 16(1)