GB2359971A - Image processing system using hierarchical set of functions - Google Patents

Image processing system using hierarchical set of functions

Info

Publication number
GB2359971A
GB2359971A (Application GB9927314A)
Authority
GB
United Kingdom
Prior art keywords
parameters
input parameters
appearance
model
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9927314A
Other versions
GB9927314D0 (en)
Inventor
Rhys Andrew Newman
Charles Stephen Wiles
Mark Jonathan Williams
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anthropics Technology Ltd
Original Assignee
Anthropics Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anthropics Technology Ltd filed Critical Anthropics Technology Ltd
Priority to GB9927314A priority Critical patent/GB2359971A/en
Publication of GB9927314D0 publication Critical patent/GB9927314D0/en
Priority to EP00976195A priority patent/EP1272979A1/en
Priority to AU14070/01A priority patent/AU1407001A/en
Priority to PCT/GB2000/004411 priority patent/WO2001037222A1/en
Publication of GB2359971A publication Critical patent/GB2359971A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005Tree description, e.g. octree, quadtree
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Description

IMAGE PROCESSING SYSTEM

The present invention relates to the
parametric modelling of the appearance of objects. The resulting model can be used, for example, to track the object, such as a human face, in a video sequence.
The use of parametric models for image interpretation and synthesis has become increasingly popular. Cootes et al have shown in their paper entitled "Active Shape Models - Their Training and Application", Computer Vision and Image Understanding, Volume 61, No. 1, January, pages 38-59, 1995, how such parametric models can be used to model the variability of the shape and texture of human faces. They have mainly used these models for face recognition and tracking within video sequences, although they have also demonstrated that their model can be used to model the variability of other deformable objects, such as MRI scans of knee joints. The use of these models provides a basis for a broad range of applications since they explain the appearance of a given image in terms of a compact set of model parameters which can be used for higher levels of interpretation of the image. For example, when analysing face images, they can be used to characterise the identity, pose or expression of a face.
Using such models for image interpretation requires, however, a method of fitting them to new image data.
This involves identifying the model parameters that generate an image which best fits (according to some measure) the new input image. Typically this problem is one of minimising the sum of squares of pixel errors between the generated image and the input image. In their paper entitled "Estimating Coloured 3D Face Models from Single Images: An Example-Based Approach", Vetter and Blanz have proposed a stochastic gradient descent optimisation technique to identify the optimum model parameters for the new image. Although this technique can give very accurate results in finding the locally optimal solution, it generally gets stuck in local minima, since the error surface for the problem of fitting an appearance model to an image is particularly rough and contains many local minima. Therefore, this minimisation technique often fails to converge on the global minimum. An additional drawback of this technique is that it is very slow, requiring several minutes to achieve convergence.
A faster, more robust technique known as the active appearance model was proposed by Edwards et al in the paper entitled "Interpreting Face Images using Active Appearance Models", published in the Third International Conference on Automatic Face and Gesture Recognition 1998, pages 300-305, Japan, April 1998. This technique uses a prior training stage in which the relationship between model parameter displacements and the resulting change in image error is learnt. Although the method is much faster than direct optimisation techniques, it also requires fairly accurate initial model parameters if the search is to converge. Additionally, this technique does not guarantee that the optimum parameters will be found.
The appearance model proposed by Cootes et al includes a single appearance model matrix which linearly relates a set of parameters to corresponding image data. Blanz et al segmented the face into a number of completely independent appearance models, each of which is used to render a separate region of the face. The results are then merged using a general interpretation technique.
The present invention aims to provide an alternative way of modelling the appearance of objects which will allow subsequent image interpretation through appropriate processing of parameters generated for the image.
According to one aspect, the present invention provides a hierarchical parametric model for modelling the shape of an object, the model comprising data defining a hierarchical set of functions in which a function in a top layer of the hierarchy is operable to generate a set of output parameters from a set of input parameters and in which one or more functions in a bottom layer of the hierarchy are operable to receive parameters output from one or more functions from a higher layer of the hierarchy and to generate therefrom the relative positions of a plurality of predetermined points on the object. Such a hierarchical parametric model has the advantage that small changes in some parts of the object can still be modelled by the parameters, even though they are significantly smaller than variations in other less important parts of the object. This model can be used for face tracking, video compression, 2D and 3D character generation, face recognition for security purposes, image editing etc.
According to another aspect, the present invention provides an apparatus and method of determining a set of appearance parameters representative of the appearance of an object, the method comprising the steps of storing a hierarchical parametric model such as the one discussed above and at least one function which relates a change in input parameters to an error between actual appearance data for the object and appearance data determined from the set of input parameters and the parametric model; initially receiving a current set of input parameters for the object; determining appearance data for the object from the current set of input parameters and the stored parametric model; determining the error between the actual appearance data of the object and the appearance data determined from the current set of input parameters; determining a change in the input parameters using the at least one stored function and said determined error; and updating the current set of input parameters with the determined change in the input parameters.
An exemplary embodiment of the present invention will now be described with reference to the accompanying drawings in which:
Figure 1 is a schematic block diagram illustrating a general arrangement of a computer system which can be programmed to implement the present invention;
Figure 2 is a block diagram of an appearance model generation unit which receives some of the image frames of a source video sequence together with a target image frame and generates therefrom an appearance model;
Figure 3 is a block diagram of a target video sequence generation unit which generates a target video sequence from a source video sequence using a set of stored difference parameters;
Figure 4 is a flow chart illustrating the processing steps which the target video sequence generation unit shown in Figure 3 performs to generate the target video sequence;
Figure 5 schematically illustrates the form of a hierarchical appearance model generated in one embodiment of the invention;
Figure 6 shows a head with a mesh of triangular facets placed over the head and whose positions are defined by the position of landmark points at the corners of the facets;
Figure 7 is a flow chart illustrating the processing steps required to generate a facet appearance model from the training images;
Figure 8 schematically illustrates the way in which a transformation is defined between a facet in a training image and a predefined shape of facet which allows texture information to be extracted from the facet;
Figure 9 is a flow chart illustrating the main processing steps involved in determining an appearance model for the mouth using the appearance models for the facets which appear in the mouth and using the training images;
Figure 10 schematically illustrates the way in which training images are used to determine some of the appearance models which form the hierarchical appearance model illustrated in Figure 5;
Figure 11a is a flow chart illustrating the processing steps performed during a training routine to identify an Active matrix associated with a current facet;
Figure 11b is a flow chart illustrating the processing steps performed during a training routine to identify an Active matrix associated with the mouth;
Figure 12 is a flow chart illustrating the processing steps involved in determining a set of parameters which define the appearance of a face within an input image;
Figure 13a shows three frames of an example source video sequence which is applied to the target video sequence generation unit shown in Figure 4;
Figure 13b shows an example target image used to generate a set of difference parameters used by the target video sequence generation unit shown in Figure 4;
Figure 13c shows a corresponding three frames from a target video sequence generated by the target video sequence generation unit shown in Figure 4 from the three frames of the source video sequence shown in Figure 13a using the difference parameters generated using the target image shown in Figure 13b;
Figure 13d shows a second example of a target image used to generate a set of difference parameters for use by the target video sequence generation unit shown in Figure 4; and
Figure 13e shows the corresponding three frames from the target video sequence generated by the target video sequence generation unit shown in Figure 4 when the three frames of the source video sequence shown in Figure 13a are input to the target video sequence generation unit together with the difference parameters calculated using the target image shown in Figure 13d.
Figure 1 shows an image processing apparatus according to an embodiment of the present invention. The apparatus comprises a computer 1 having a central processing unit (CPU) 3 connected to a memory 5 which is operable to store a program defining the sequence of operations of the CPU 3 and to store object and image data used in calculations by the CPU 3. Coupled to an input port of the CPU 3 there is an input device 7, which in this embodiment comprises a keyboard and a computer mouse. Instead of, or in addition to, the computer mouse, another position sensitive input device (pointing device) such as a digitiser with associated stylus may be used.
A frame buffer 9 is also provided and is coupled to the CPU 3 and comprises a memory unit (not shown) arranged to store image data relating to at least one image, for example by providing one (or several) memory location(s) per pixel of the image. The value stored in the frame buffer for each pixel defines the colour or intensity of that pixel in the image. In this embodiment, the images are represented by 2-D arrays of pixels, and are conveniently described in terms of Cartesian coordinates, so that the position of a given pixel can be described by a pair of x-y coordinates. This representation is convenient since the image is displayed on a raster scan display 11. Therefore, the x-coordinate maps to the distance along the line of the display and the y-coordinate maps to the number of the line. The frame buffer 9 has sufficient memory capacity to store at least one image. For example, for an image having a resolution of 1000 x 1000 pixels, the frame buffer 9 includes 10^6 pixel locations, each addressable directly or indirectly in terms of a pixel coordinate x,y.
In this embodiment, a video tape recorder (VTR) 13 is also coupled to the frame buffer 9, for recording the image or sequence of images displayed on the display 11. A mass storage device 15, such as a hard disc drive, having a high data storage capacity is also provided and coupled to the memory 5. Also coupled to the memory 5 is a floppy disc drive 17 which is operable to accept removable data storage media, such as a floppy disc 19, and to transfer data stored thereon to the memory 5. The memory 5 is also coupled to a printer 21 so that generated images can be output in paper form, an image input device 23 such as a scanner or video camera, and a modem 25 so that input images and output images can be received from and transmitted to remote computer terminals via a data network, such as the Internet. The CPU 3, memory 5, frame buffer 9, display unit 11 and mass storage device 15 may be commercially available as a complete system, for example as an IBM compatible personal computer (PC) or a workstation such as the Sparc station available from Sun Microsystems.
A number of embodiments of the invention can be supplied commercially in the form of programs stored on a floppy disc 19 or on other mediums, or as signals transmitted over a data link, such as the Internet, so that the receiving hardware becomes reconfigured into an apparatus embodying the present invention.
In this embodiment, the computer 1 is programmed to receive a source video sequence input by the image input device 23 and to generate a target video sequence from the source video sequence using a target image. In this embodiment, the source video sequence is a video clip of an actor acting out a scene, the target image is an image of a second actor and the resulting target video sequence is a video sequence showing the second actor acting out the scene. The way in which this is achieved will now be briefly described with reference to Figures 2 to 4.
In this embodiment, in order to generate the target video sequence from the source video sequence, a hierarchical parametric appearance model which models the variability of shape and texture of the head images is used. This appearance model makes use of the fact that some prior knowledge is available about the contents of head images in order to facilitate their modelling. For example, it can be assumed that two frontal images of a human face will each include eyes, a nose and a mouth. In this embodiment, as shown in Figure 2, the hierarchical parametric appearance model 35 is generated by an appearance model generation unit 31 from training images which are stored in an image database 32. In this embodiment, all the training images are colour images having 500 x 500 pixels, with each pixel having a red, green and a blue pixel value. The resulting appearance model 35 is a parameterisation of the appearance of the class of head images defined by the heads in the training images, so that a relatively small number of parameters (for example 50) can describe the detailed (pixel level) appearance of a head image from the class. In particular, the hierarchical appearance model 35 defines a function (F) such that:
I = F(p)    (1)

where p is the set of appearance parameters (written in vector notation) which generates, through the hierarchical appearance model (F), the face image I. The structure of the hierarchical appearance model used in this embodiment will be described later.
Once the hierarchical appearance model 35 has been determined, a target video sequence can be generated from a source video sequence. As shown in Figure 3, the source video sequence is input to a target video sequence generation unit 51 which processes the source video sequence using a set of difference parameters 53 to generate and to output the target video sequence. The difference parameters 53 are determined by subtracting the appearance parameters which are generated for the first actor's head in one of the source video frames, from the appearance parameters which are generated for the second actor's head in the target image. The way in which these appearance parameters are determined for these images will be described later. In order that these difference parameters only represent differences in the general shape and colour texture of the two actors' heads, the pose and facial expression of the first actor's head in the source video frame used should match, as closely as possible, the pose and facial expression of the second actor's head in the target image.
The processing steps required to generate the target video sequence from the source video sequence will now be described in more detail with reference to Figure 4. As shown, in step s1, the appearance parameters (p_s^i) for the first actor's head in the current video frame (I_s^i) are automatically calculated. The way that this is achieved will be described later. Then, in step s3, the difference parameters (p_dif) are added to the appearance parameters for the first actor's head in the current video frame to generate:
p_mod^i = p_s^i + p_dif    (2)

The resulting appearance parameters (p_mod^i) are then used, in step s5, to regenerate the head for the current target video frame. In particular, the modified appearance parameters are inserted into equation (1) above to regenerate a modified head image which is then composited, in step s7, into the source video frame to generate the corresponding target video frame. A check is then made, in step s9, to determine whether or not there are any more source video frames. If there are, then the processing returns to step s1 where the procedure described above is repeated for the next source video frame. If there are no more source video frames, then the processing ends.
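The per-frame loop of Figure 4 can be summarised in the short Python sketch below. It is illustrative only and is not taken from the patent: the helper callables (fit_params, render_head, composite) are hypothetical stand-ins for the parameter search, the hierarchical model of equation (1) and the compositing step.

```python
import numpy as np

def generate_target_sequence(source_frames, p_dif, fit_params, render_head, composite):
    """Per-frame loop of Figure 4 (steps s1 to s9)."""
    target_frames, p_prev = [], None
    for frame in source_frames:
        p_s = fit_params(frame, p_prev)               # step s1: appearance parameters p_s^i
        p_mod = p_s + p_dif                           # step s3: equation (2)
        head = render_head(p_mod)                     # step s5: equation (1), I = F(p)
        target_frames.append(composite(frame, head))  # step s7: composite into the frame
        p_prev = p_s                                  # reused as the next frame's initial estimate
    return target_frames

# Toy usage with stand-in callables; a real system would plug in the hierarchical
# appearance model and the automatic parameter search described later.
frames = [np.zeros((8, 8, 3)) for _ in range(3)]
fit_params = lambda frame, prev: np.zeros(5)
render_head = lambda p: np.full((8, 8, 3), p.sum())
composite = lambda frame, head: head
print(len(generate_target_sequence(frames, 0.1 * np.ones(5), fit_params, render_head, composite)))
```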
Figure 13 illustrates the results of this animation technique (although showing black and white images and not colour). In particular, Figure 13a shows three frames of the source video sequence, Figure 13b shows the target image (which in this embodiment is computer 14 generated) and Figure 13c shows the corresponding three frames of the target video sequence obtained in the manner described above. As can be seen, an animated sequence of the computer generated character has been generated from a video clip of a real person and a single image of the computer generated character.
HIERARCHICAL APPEARANCE MODEL

In the systems described by Cootes et al and Blanz et al, the parametric model is created by placing a number of landmark points on a training image and then identifying the same landmark points on the other training images in order to identify how the location of and the pixel values around the landmark points vary within the training images. A principal component analysis is then performed on the matrix which consists of vectors of the landmark points. This PCA yields a set of Eigenvectors which describe the directions of greatest variation along which the landmark points change. Their appearance model includes the linear combination of the Eigenvectors plus parameters for translation, rotation and scaling. This single appearance model relates a compact set of appearance parameters to pixel values.
In this embodiment, rather than having a single appearance model for the object, a hierarchical appearance model comprising several appearance models which model variations in components of the object is used. For example, in the case of human faces, the hierarchical appearance model may include an appearance model for the mouth, one for the left eye, one for the right eye and one for the nose. Since it may be possible to model various components of the object, the particular hierarchical structure which will be used for a particular object and application must first of all be defined by the system designer.
Figure 5 schematically illustrates the structure of the hierarchical appearance model used in this embodiment. As shown, at the top of the hierarchy there is a general face appearance model 61. Beneath the face appearance model there is a mouth appearance model 63, a left eye appearance model 65, a right eye appearance model 67, a left eyebrow appearance model 69, a rest of left eye appearance model 71, a right eyebrow appearance model 73, a rest of right eye appearance model 75 and, in this embodiment, a facet appearance model for each facet defined in the training images. Figure 6 shows the head of a training image in which the set of landmark points has been placed at the appropriate points on the head. As shown, in this embodiment, there are one hundred and forty-eight triangular areas or facets defined by the positions of the landmark points. Therefore, in this embodiment, there are one hundred and forty-eight facet appearance models 77.
The face appearance model 61 operates to relate a small number of "global" appearance parameters to a further set of appearance parameters, some of which are input to facet appearance models 77, some of which are input to the mouth appearance model 63, some of which are input to the left eye appearance model 65 and the rest of which are input to the right eye appearance model 67. The facet appearance models 77 operate to relate the input parameters received from the appearance model which is above them in the hierarchy into corresponding pixel values for that facet. The mouth appearance model 63 is operable to relate the parameters it receives from the face appearance model 61 into a further set of appearance parameters, respective ones of which are output to the respective facet appearance models 77 for the facets which are associated with the mouth. Similarly, the left and right eye appearance models 65 and 67 operate to relate the parameters they receive from the face appearance model 61 into further sets of appearance parameters, some of which are input to the appropriate eyebrow appearance model and the rest of which are input to the appropriate rest of eye appearance model. These appearance models in turn convert these parameters into parameters for input to the facet appearance models associated with the facets which appear in the left and right eyes respectively. In this way, a small compact set of "global" appearance parameters input to the face appearance model 61 can filter through the hierarchical structure illustrated in Figure 5 to generate a set of pixel values for all the facets in a head, which can then be used to regenerate the image of the head.
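As a rough illustration of how parameters filter down the hierarchy, the following sketch represents each appearance model as a node whose linear mapping expands its input parameters into a longer vector that is split among its children, with leaf nodes standing in for the facet models that output pixel values. The class name, matrix shapes and the two-level toy hierarchy are assumptions for illustration, not the patented implementation.

```python
import numpy as np

class ModelNode:
    """One appearance model in the hierarchy: a linear map expands its input
    parameters into a longer vector that is split among its children; leaf
    nodes stand in for facet models and return their output as pixel values."""
    def __init__(self, matrix, mean, children=None, child_sizes=None):
        self.matrix, self.mean = matrix, mean
        self.children = children or []
        self.child_sizes = child_sizes or []

    def propagate(self, params):
        out = self.matrix.T @ params + self.mean      # expand, cf. equations (4) and (6) below
        if not self.children:                         # facet model: 'out' is pixel data
            return [out]
        pixel_blocks, start = [], 0
        for child, size in zip(self.children, self.child_sizes):
            pixel_blocks += child.propagate(out[start:start + size])
            start += size
        return pixel_blocks

# Toy two-level hierarchy: a "face" node feeding two "facet" leaves.
rng = np.random.default_rng(0)
facet_a = ModelNode(rng.standard_normal((3, 12)), np.zeros(12))   # 3 params -> 12 pixel values
facet_b = ModelNode(rng.standard_normal((2, 12)), np.zeros(12))   # 2 params -> 12 pixel values
face = ModelNode(rng.standard_normal((4, 5)), np.zeros(5),
                 children=[facet_a, facet_b], child_sizes=[3, 2])
print(len(face.propagate(np.ones(4))), "facet pixel blocks generated")
```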
The way in which the individual appearance models of this hierarchical appearance model are generated in this embodiment will now be described with reference to Figures 6 to 10.
In this embodiment, each of the training images stored in the image database 32 is labelled with eighty-six landmark points. In this embodiment, this is performed manually by the user via the user interface 33. In particular, each training image is displayed on the display 11 and the user places the landmark points over the head in the training image. These points delineate the main features in the head, such as the position of the hairline, neck, eyes, nose, ears and mouth. In order to compare training faces, each landmark point is associated with the same point on each face. In this embodiment, the following landmark points are used:
| Landmark point | Associated position | Landmark point | Associated position |
| --- | --- | --- | --- |
| LP1 | Left corner of left eye | LP44 | Eye, bottom |
| LP2 | Right corner of right eye | LP45 | Eye, top |
| LP3 | Chin, bottom | LP46 | Eye, bottom |
| LP4 | Right corner of left eye | LP47 | Eyebrow, lower |
| LP5 | Left corner of right eye | LP48 | Eyebrow, upper |
| LP6 | Mouth, left | LP49 | Cheek, left |
| LP7 | Mouth, right | LP50 | Cheek, right |
| LP8 | Nose, bottom | LP51 | Eyebrow, lower |
| LP9 | Nose, between eyes | LP52 | Eyebrow, upper |
| LP10 | Upper lip, top | LP53 | Eyebrow, lower |
| LP11 | Lower lip, bottom | LP54 | Eyebrow, upper |
| LP12 | Neck, left, top | LP55 | Eyebrow, lower |
| LP13 | Neck, right, top | LP56 | Eyebrow, upper |
| LP14 | Face edge left, level with nose | LP57 | Eyebrow, lower |
| LP15 | Face edge | LP58 | Eyebrow, upper |
| LP16 | Face edge right, level with nose | LP59 | Eyebrow, lower |
| LP17 | Face edge | LP60 | Eyebrow, upper |
| LP18 | Top of head | LP61 | Eyebrow, lower |
| LP19 | Hair edge | LP62 | Lower lip, top |
| LP20 | Hair edge | LP63 | Centre forehead |
| LP21 | Hair edge | LP64 | Upper lip, top left |
| LP22 | Hair edge | LP65 | Upper lip, top right |
| LP23 | Hair edge | LP66 | Lower lip, bottom right |
| LP24 | Hair edge | LP67 | Lower lip, bottom left |
| LP25 | Hair edge | LP68 | Eye, top left |
| LP26 | Hair edge | LP69 | Eye, top right |
| LP27 | Hair edge | LP70 | Eye, bottom right |
| LP28 | Hair edge | LP71 | Eye, bottom left |
| LP29 | Bottom, far left | LP72 | Eye, top left |
| LP30 | Bottom, far right | LP73 | Eye, top right |
| LP31 | Shoulder | LP74 | Eye, bottom right |
| LP32 | Shoulder | LP75 | Eye, bottom left |
| LP33 | Bottom, left | LP76 | Lower lip, top left |
| LP34 | Bottom, middle | LP77 | Lower lip, top right |
| LP35 | Bottom, right | LP78 | Chin, left |
| LP36 | Left forehead | LP79 | Chin, right |
| LP37 | Right forehead | LP80 | Neck, left |
| LP38 | Centre, between eyebrows | LP81 | Neckline, left |
| LP39 | Nose, left | LP82 | Neckline |
| LP40 | Nose, right | LP83 | Neckline, right |
| LP41 | Nose edge, left | LP84 | Neck, right |
| LP42 | Nose edge, right | LP85 | Hair edge |
| LP43 | Eye, top | LP86 | Hair edge |

The result of the manual placement of the landmark points is a table of landmark points for each training image, which identifies the (x, y) coordinate of each landmark point within the image. As shown in Figure 6, these landmark points are also used to define the location of predetermined triangular facets or areas within the training image.
FACET APPEARANCE MODEL

Figure 7 shows a flow chart illustrating the main processing steps involved in this embodiment in determining a facet appearance model for facet (i). As shown, in step s61, the system determines, for each training image, the apex coordinates of facet (i) and texture values from within facet (i). In order to sample texture from within the facet at corresponding points within each training facet, a transformation which transforms the facet onto a reference facet is determined. Figure 8 illustrates this transformation. In particular, Figure 8 shows facet f_i^v taken from the V-th training image, which is defined by the landmark points (x_1^v, y_1^v), (x_2^v, y_2^v) and (x_3^v, y_3^v). The transformation (T_i^v) which transforms those coordinates onto coordinates (0,0), (1,0) and (0,1) is determined. In this embodiment, the texture information extracted from each training facet is defined by the regular array of pixels shown in the reference facet. In order to determine the corresponding red, green and blue pixel values in the training image, the inverse transformation ([T_i^v]^-1) is used to transform the pixel locations in the reference facet into corresponding locations in the training facet, from which the RGB pixel values are determined. In this embodiment, this transformation may not result in an exact correspondence with a single image pixel location, since the pixel resolution in the actual facet may be different from the resolution in the reference facet. In this embodiment, the texture information (RGB pixel values) which is determined is obtained by interpolating between the surrounding image RGB pixel values. In this embodiment, there are fifty pixels in the regular array of pixels in the reference facet. Therefore, fifty RGB pixel values are extracted for each training facet. The texture information for facet (i) from the V-th training image can then be represented by a vector (t_i^v) of the form:
t_i^v = [t_i1^v, t_i2^v, t_i3^v, ..., t_i50^v]^T

where t_i1^v is the RGB texture information for the first reference pixel extracted from facet (i) in the V-th training image, etc.
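A minimal numerical sketch of this sampling step is given below, assuming the barycentric map as the inverse transformation [T_i^v]^-1 and simple bilinear interpolation; the function names and the 55-point regular grid (roughly the fifty sample points of the embodiment) are illustrative choices, not taken from the patent.

```python
import numpy as np

def reference_sample_points(rows=9):
    """Regular array of points in the reference facet with corners (0,0), (1,0), (0,1)."""
    pts = [(i / rows, j / rows) for i in range(rows + 1) for j in range(rows + 1 - i)]
    return np.array(pts)                  # 55 points for rows=9 (about fifty, as in the text)

def bilinear(image, x, y):
    """Interpolate an RGB value at a non-integer pixel location (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    p = image[y0:y0 + 2, x0:x0 + 2].astype(float)
    return ((1 - dx) * (1 - dy) * p[0, 0] + dx * (1 - dy) * p[0, 1]
            + (1 - dx) * dy * p[1, 0] + dx * dy * p[1, 1])

def facet_texture_vector(image, apexes):
    """Sample facet (i) of a training image at the reference sample points.
    'apexes' holds the three landmark coordinates (x1,y1), (x2,y2), (x3,y3);
    the barycentric map plays the role of [T_i^v]^-1 back into the image."""
    (x1, y1), (x2, y2), (x3, y3) = apexes
    samples = []
    for u, v in reference_sample_points():
        x = x1 + u * (x2 - x1) + v * (x3 - x1)    # inverse transform applied to (u, v)
        y = y1 + u * (y2 - y1) + v * (y3 - y1)
        samples.append(bilinear(image, x, y))
    return np.concatenate(samples)                # the texture vector t_i^v

# Toy usage on a random 500 x 500 RGB training image.
img = np.random.default_rng(1).integers(0, 255, (500, 500, 3))
t_iv = facet_texture_vector(img, [(100.0, 120.0), (140.0, 118.0), (118.0, 160.0)])
print(t_iv.shape)                                 # 3 RGB values per sample point
```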
In this embodiment, the facet appearance models 77 treat shape and texture separately. Therefore, in step s63, the system performs a principal component analysis (PCA) on the set of texture training vectors generated in step s61. For a more detailed discussion of principal component analysis, the reader is referred to the book by W. J. Krzanowski entitled "Principles of Multivariate Analysis - A User's Perspective", 1998, Oxford Statistical Science Series. As those skilled in the art will appreciate, this principal component analysis determines all possible modes of variation within the training texture vectors. However, since each of the facets is associated with a similar point on the face, most of the variation within the data can be explained by a few modes of variation. The result of the principal component analysis is a facet texture appearance model (defined by matrix F_i) which relates a vector of facet texture parameters to a vector of texture pixel values, by:
p_v^{Fit} = F_i (t_i^v - t̄_i)    (3)

where t_i^v is the RGB texture vector defined above, t̄_i is the mean RGB texture vector for facet (i), F_i is a matrix which defines the facet texture appearance model for facet (i) and p_v^{Fit} is a vector of the facet texture parameters which describes the RGB texture vector t_i^v. The matrix F_i describes the main modes of variation of the texture within the training facets; and the vector of facet texture parameters (p_v^{Fit}) for a given input facet has a parameter associated with each mode of variation whose value relates the texture of the input facet to the corresponding mode of variation.
As those skilled in the art will appreciate, for facets which describe fairly constant parts of the face, such as the chin or cheeks, very few parameters will be needed to model the variability within the training images. However, facets which are associated with areas of the face where there is a large amount of variability (such as facets which form part of the eye) will require a larger number of facet texture parameters to describe the variability within the training images. Therefore, in step s65, the system determines how many texture parameters are needed for the current facet and stores the appropriate facet appearance model matrix.
In addition to being able to determine a set of texture parameters p_v^{Fit} for a given texture vector t_i^v, equation (3) can be solved with respect to the texture vector t_i^v to give:

t_i^v = F_i^T p_v^{Fit} + t̄_i    (4)

since F_i F_i^T equals the identity matrix. Therefore, by modifying the set of texture parameters (p_v^{Fit}) within suitable limits, new textures for facet (i) can be generated which are similar to those in the training set.
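The PCA step of equations (3) and (4) can be sketched in a few lines of numpy. The retained-variance threshold and the synthetic training vectors below are assumptions for illustration (the embodiment chooses the number of modes per facet as described in step s65), not values from the patent.

```python
import numpy as np

def build_texture_model(training_vectors, variance_kept=0.98):
    """PCA of the training texture vectors: returns the mean vector t_bar and the
    matrix F_i of equation (3), whose rows are the retained modes of variation."""
    T = np.asarray(training_vectors, dtype=float)        # one row per training facet
    t_bar = T.mean(axis=0)
    _, s, vt = np.linalg.svd(T - t_bar, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(var), variance_kept)) + 1
    return t_bar, vt[:k]                                 # F_i has k rows

def texture_to_params(F_i, t_bar, t):
    return F_i @ (t - t_bar)                             # equation (3)

def params_to_texture(F_i, t_bar, p):
    return F_i.T @ p + t_bar                             # equation (4)

# Toy usage: 30 synthetic 150-element texture vectors driven by 3 hidden factors.
rng = np.random.default_rng(2)
train = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 150)) + 5.0
t_bar, F_i = build_texture_model(train)
p = texture_to_params(F_i, t_bar, train[0])
print(F_i.shape[0], np.allclose(params_to_texture(F_i, t_bar, p), train[0], atol=1e-6))
```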
Once the above procedure has been performed for each of the one hundred and forty-eight facets in the training images, a facet texture appearance model will have been generated for each of those facets. In this embodiment, the facet appearance model does not compress the parameters defining the shape of the facets, since only six parameters are needed to define the shape of each facet - two parameters for each (x,y) coordinate of the facet's apexes.
MOUTH APPEARANCE MODEL

Figure 9 shows a flow chart illustrating the main processing steps required in order to generate the mouth appearance model 63. As shown, in step s67, the system uses the facet appearance models for the facets which form part of the mouth to generate shape and texture parameters from those facets for each training image.
Therefore, referring to Figure 10, the mouth appearance model 63 will receive texture and shape parameters from the facet appearance model for facet (i), facet (j) and facet (n) for the corresponding facets in each of the training images 79. As illustrated in Figure 10, the appearance model for facet (i) is operable to generate, for each training image, six shape parameters (corresponding to the three (x,y) coordinates of the apexes of facet (i)) and six texture parameters. Similarly, the appearance model for facet (j) is operable to generate, for each training image, six shape parameters and four texture parameters and the appearance model for facet (n) is operable to generate, for each training image, six shape parameters and three texture parameters.
The processing then proceeds to step s69 where the system performs a principal component analysis on the shape and texture parameters generated for the training images by the facet appearance models associated with the mouth. In this embodiment, the mouth appearance model 63 treats the shape and texture separately. In particular, for each training image, the system concatenates the six shape parameters of each of the facets associated with the mouth to form the following shape vector:
p_v^{FMs} = [x_1^{fi}, y_1^{fi}, x_2^{fi}, y_2^{fi}, x_3^{fi}, y_3^{fi}, x_1^{fj}, y_1^{fj}, x_2^{fj}, y_2^{fj}, ...]^T

and concatenates the facet texture parameters output by the facet appearance models associated with the mouth to form the following texture vector:
p_v^{FMt} = [p_1^{Fit}, p_2^{Fit}, ..., p_6^{Fit}, p_1^{Fjt}, p_2^{Fjt}, ..., p_4^{Fjt}, ..., p_1^{Fnt}, ...]^T

The system then performs a principal component analysis on the shape vectors generated by all the training images to generate a shape appearance model for the mouth (defined by matrix M_s) which relates each mouth shape vector to a corresponding vector of mouth shape parameters by:
p_v^{Ms} = M_s (p_v^{FMs} - p̄^{FMs})    (5)

where p_v^{FMs} is the mouth shape vector for the mouth in the V-th training image, p̄^{FMs} is the mean mouth shape vector from the training vectors and p_v^{Ms} is a vector of mouth shape parameters for the mouth shape vector p_v^{FMs}. The mouth shape model, defined by matrix M_s, describes the main modes of variation of the shape of the mouths within the training images; and the vector of mouth shape parameters (p_v^{Ms}) for the mouth in the V-th training image has a parameter associated with each mode of variation whose value relates the shape of the input mouth to the corresponding mode of variation.
As with the facet appearance models, equation (5) above can be rewritten with respect to the mouth shape vector p_v^{FMs} to give:

p_v^{FMs} = M_s^T p_v^{Ms} + p̄^{FMs}    (6)

since M_s M_s^T equals the identity matrix. Therefore, by modifying the mouth shape parameters, new mouth shapes can be generated which will be similar to those in the training set.
The system then performs a principal component analysis on the mouth texture parameter vectors (p_v^{FMt}) which are generated for the training images. This principal component analysis generates a mouth texture model (defined by matrix M_t) which relates each of the facet texture parameter vectors for the facets associated with the mouth to a corresponding vector of mouth texture parameters, by:

p_v^{Mt} = M_t (p_v^{FMt} - p̄^{FMt})    (7)

where p_v^{FMt} is a vector of mouth facet texture parameters generated by the facet appearance models associated with the mouth for the mouth in the V-th training image; p̄^{FMt} is the mean vector of mouth facet texture parameters from the training vectors and p_v^{Mt} is a vector of mouth texture parameters for the facet texture parameters p_v^{FMt}. The matrix M_t describes the main modes of variation within the training images of the facet texture parameters generated by the facet appearance models which are associated with the mouth; and the vector of mouth texture parameters (p_v^{Mt}) has a parameter associated with each of those modes of variation whose value relates the texture of the input mouth to the corresponding mode of variation.
The processing then proceeds to step s71 shown in Figure 9 where the system determines the number of shape parameters and texture parameters needed to describe the training data received from the facet appearance models which are associated with the mouth. As shown in Figure 10, in this embodiment, the mouth appearance model 63 requires five shape parameters and four texture parameters to be able to model most of this variation. The system therefore stores the appropriate mouth shape and texture appearance model matrices for subsequent use.
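The second-level PCA that turns concatenated facet-level parameters into mouth parameters (equations (5) to (7)) follows the same pattern as the facet-level PCA. The sketch below uses random stand-in data with the 18 shape values and 6+4+3 texture parameters of Figure 10; the helper and the mode counts are illustrative assumptions rather than the patent's actual values.

```python
import numpy as np

def pca(rows, n_modes):
    """Return the mean vector and the matrix of the first n_modes principal modes."""
    X = np.asarray(rows, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes]

# Toy second-level training data: for each of 40 training images, the concatenated
# facet shape coordinates (6 + 6 + 6 values) and facet texture parameters
# (6 + 4 + 3 values) produced by the three mouth facets of Figure 10.
rng = np.random.default_rng(3)
shape_vectors = rng.standard_normal((40, 18))     # p^FMs for each training image
texture_vectors = rng.standard_normal((40, 13))   # p^FMt for each training image

ms_mean, M_s = pca(shape_vectors, n_modes=5)      # mouth shape model, equation (5)
mt_mean, M_t = pca(texture_vectors, n_modes=4)    # mouth texture model, equation (7)

# Facet-level vectors -> mouth parameters (equation (5)), and back via equation (6).
p_ms = M_s @ (shape_vectors[0] - ms_mean)
reconstructed = M_s.T @ p_ms + ms_mean
print(p_ms.shape, reconstructed.shape)
```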
As those skilled in the art will appreciate, a similar procedure is performed to determine each of the appearance models shown in Figure 5, starting from the facet appearance models at the base of the hierarchy. A further description of how these remaining appearance models are determined will, therefore, not be given here. The resulting hierarchical appearance model allows a small number of global face appearance parameters to be input to the face appearance model 61, which generates
further parameters which propagate down through the hierarchical model structure until facet pixel values are generated, from which an image which corresponds to the global appearance parameters can be generated.
AUTOMATIC GENERATION OF APPEARANCE PARAMETERS

In the description given above of the way in which the appearance models are generated, appearance parameters for an image were generated from a manual placement of a number of landmark points over the image. However, during use of the appearance model to track the first actor's head in the source video sequence and during the calculation of the difference parameters (p_dif), the appearance parameters for the heads in the input images were automatically calculated. This task involves finding the set of global appearance parameters p which best describe the pixels in view. This problem is complicated because the inverse of each of the appearance models in the hierarchical appearance model is not necessarily one-to-one. In this embodiment, the appearance parameters for the head in an input image are calculated in a two-step process. In the first step, an initial set of global appearance parameters for the head in the current frame (I_s^i) is found using a simple and rapid technique. For all but the first frame of the source video sequence, this is achieved by simply using the appearance parameters from the preceding video frame (I_s^{i-1}) before modification in step s3 (i.e. parameters p_s^{i-1}). In this embodiment, the global appearance parameters (p) effectively define the shape and colour texture of the head. For the first frame and for the target image, the initial estimate of the appearance parameters is set to the mean set of appearance parameters, and the scale, position and orientation are initially estimated by the user manually placing the mean head over the head in the image.
In the second step, an iterative technique is used in order to make fine adjustments to the initial estimate of the appearance parameters. The adjustments are made in an attempt to minimise the difference between the head described by the global appearance parameters (the model head) and the head in the current video frame (the image head). With 50 appearance parameters, this represents a difficult optimisation problem. This can be performed by using a standard steepest descent optimisation technique to iteratively reduce the mean squared error between the given image pixels and those predicted by a particular set of appearance parameter values, in particular by minimising the following error function E(p):
E(p) = [I_a - F(p)]^T [I_a - F(p)]    (8)

where I_a is a vector of actual image RGB pixel values at the locations where the appearance model predicts values (the appearance model does not predict all pixel values since it ignores background pixels and only predicts a subsample of pixel values within the object being modelled) and F(p) is the vector of image RGB pixel values predicted by the hierarchical appearance model. As those skilled in the art will appreciate, E(p) will only be zero when the model head (i.e. F(p)) predicts the actual image head (I_a) exactly. Standard steepest descent optimisation techniques stipulate that a step in the direction -∇E(p) should result in a reduction in the error function E(p), provided the error function is well behaved. Therefore, the change (Δp) in the set of parameter values should be:
Δp = 2 [∇F(p)]^T [I_a - F(p)]    (9)

which requires the calculation of the differential of the appearance model, i.e. ∇F(p).
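For concreteness, here is a toy numerical illustration of the update of equation (9), using a small linear map as a stand-in for F (the real hierarchical model is non-linear, which is precisely why the approach below replaces ∇F(p) with learnt Active matrices); all sizes and the step length are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 5))                # toy linear stand-in: F(p) = A p
F = lambda p: A @ p
grad_F = A                                      # Jacobian of F, constant for this toy model

p_true = rng.standard_normal(5)
I_a = F(p_true)                                 # the "actual image" pixel vector

p = np.zeros(5)                                 # current parameter estimate
step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # step small enough for stability
for _ in range(300):
    delta_p = 2.0 * grad_F.T @ (I_a - F(p))     # equation (9)
    p = p + step * delta_p
print(round(float(np.abs(p - p_true).max()), 6))  # approaches zero
```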
The technique described by Edwards et al assumes that, on average over the whole parameter space, ∇F(p) is constant. The update equation then becomes:
Δp = A [I_a - F(p)]    (10)

for some constant matrix A (referred to as the "Active matrix") which is determined beforehand during a training routine. In this embodiment, rather than using a single constant matrix associated with the entire hierarchical appearance model, an Active matrix is determined and used for each of the individual appearance models which form part of the hierarchical appearance model. The way in which these Active matrices are determined in this embodiment will now be described with reference to Figures 11a and 11b, which illustrate the processing steps performed to generate the Active matrix for each facet appearance model and the Active matrix for the mouth appearance model.
As shown in Figure 11a, in step s73, the system chooses a random facet parameter vector (p_0^{Fi}) for the current facet (i) and then, in step s75, perturbs this facet parameter vector by a small random amount to create p_0^{Fi} + Δp^{Fi}. In this embodiment, the facet parameter vectors include not only the texture parameters, but also the six shape parameters which define the (x,y) coordinates of the facet's location within the image. The processing then proceeds to step s77 where the system uses the parameter vector p_0^{Fi} and the perturbed parameter vector p_0^{Fi} + Δp^{Fi} to create model images I_0^{Fi} and I_1^{Fi} respectively. The processing then proceeds to step s79 where the system records the parameter change Δp^{Fi} and the image difference I_1^{Fi} - I_0^{Fi}. Then in step s81, the system determines whether or not there is sufficient training data for the current facet. If there is not, then the processing returns to step s73. Once sufficient training data has been generated, the processing proceeds to step s83 where the system performs multiple multivariate linear regressions on the data for the current facet to identify an Active matrix (A_i) for the current facet.
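The following sketch shows what the regression of step s83 might look like for a toy linear renderer standing in for the facet appearance model; the sample counts, sizes and least-squares call are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_params, n_pixels, n_samples = 8, 120, 400
M = rng.standard_normal((n_pixels, n_params))
render = lambda p: M @ p                           # stand-in for "create a model image"

param_changes, image_diffs = [], []
for _ in range(n_samples):                         # steps s73 to s81
    p0 = rng.standard_normal(n_params)             # random facet parameter vector
    dp = 0.1 * rng.standard_normal(n_params)       # small random perturbation
    param_changes.append(dp)
    image_diffs.append(render(p0 + dp) - render(p0))   # image difference I_1 - I_0

# Step s83: multivariate linear regression dp ~ A_i @ dI, solved by least squares.
D_I = np.array(image_diffs)                        # n_samples x n_pixels
D_p = np.array(param_changes)                      # n_samples x n_params
A_i = np.linalg.lstsq(D_I, D_p, rcond=None)[0].T   # the Active matrix, n_params x n_pixels

# The learnt matrix predicts a parameter change from a new image difference.
dp = 0.1 * rng.standard_normal(n_params)
print(np.allclose(A_i @ (render(dp) - render(np.zeros(n_params))), dp, atol=1e-6))
```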
Figure 11b shows the processing steps required to calculate the Active matrix for the mouth appearance model. As shown, in step s85, the system chooses a random mouth parameter vector p_0^M. In this embodiment, this vector includes both the mouth shape parameters and the mouth texture parameters. Then, in step s87, the system perturbs this mouth parameter vector by a small random amount to create p_0^M + Δp^M. The processing then proceeds to step s89 where the system uses the mouth parameter vector p_0^M and the perturbed mouth parameter vector p_0^M + Δp^M to create model images I_0^M and I_1^M respectively, using the mouth appearance model and the facet appearance models associated with the mouth. The processing then proceeds to step s91 where the facet appearance models associated with the mouth are used again to transform the mouth model images I_0^M and I_1^M into corresponding facet appearance parameters p_0^{FM} and p_1^{FM}, which are then subtracted to determine the corresponding change Δp^{FM} in the mouth facet parameters. The processing then proceeds to step s93 where the system records the mouth parameter change Δp^M and the mouth facet parameter change Δp^{FM}. The processing then proceeds to step s95 where the system determines whether or not there is sufficient training data. If there is not, then the processing returns to step s85. Once sufficient training data has been generated, the processing proceeds to step s97, where the system performs multiple multivariate linear regressions on the training data for the mouth to identify the Active matrix (A_M) for the mouth, which relates changes in mouth parameters Δp^M to changes in facet parameters Δp^{FM} for the facets associated with the mouth.
As those skilled in the art will appreciate, a similar processing technique is used in order to identify the Active matrix for each of the appearance models shown in Figure 5.
Once the Active matrices have been determined for the hierarchical appearance model, they can then be used to iteratively update a current estimate of a set of appearance parameters for an input image. Figure 12 illustrates the processing steps performed in this iterative routine for the current source video frame. As shown, in step s101, the system initially estimates a set of global parameters for the head in the current source video frame. The processing then proceeds to step s103 where the system generates a model image from the estimated global parameters and the hierarchical appearance model. The system then proceeds to step s105 where it determines the image error between the model image and the current source video frame. Then, in step s107, the system uses this image error to propagate parameter changes up the hierarchy of the hierarchical appearance model using the stored Active matrices to determine a change in the global parameters. This change in global parameters is then used, in step s109, to update the current global parameters for the current source video frame. The system then determines, in step s111, whether or not convergence has been reached by comparing the error obtained from equation (8) using the updated global parameters with a predetermined threshold (Th). If convergence has not been reached, then the processing returns to step s103. Once convergence is reached, the processing proceeds to step s113, where the current global appearance parameters are output as the global appearance parameters for the current source video frame and then the processing ends.
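A condensed sketch of this fitting loop for a toy linear model is shown below; it uses a single pseudo-inverse in place of the per-model Active matrices propagated up the hierarchy, so it only illustrates the structure of steps s101 to s113, not the patented hierarchical propagation.

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((80, 6))
F = lambda p: M @ p                              # toy stand-in for the hierarchical model
A = np.linalg.pinv(M)                            # stand-in for the learnt Active matrices
threshold = 1e-8                                 # the predetermined threshold Th

p_true = rng.standard_normal(6)
I_a = F(p_true)                                  # pixels of the current source video frame

p = np.zeros(6)                                  # step s101: initial estimate
for _ in range(50):
    model_image = F(p)                           # step s103: generate a model image
    error = I_a - model_image                    # step s105: image error
    p = p + A @ error                            # steps s107/s109: equation (10) update
    if np.sum((I_a - F(p)) ** 2) < threshold:    # step s111: equation (8) against Th
        break
print(np.allclose(p, p_true))                    # step s113: converged parameters
```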
ALTERNATIVE EMBODIMENTS

In the above embodiment, the same hierarchical model structure was used to model the variation in the shape and texture within the training images. As those skilled in the art will appreciate, one model hierarchy can be used to model the shape variation and a different model hierarchy can be used to model the texture variation. Alternatively still, rather than separating the shape and texture parameters, each of the appearance models within the hierarchical model may model the combined variation of the shape and texture within the training images.
In the above embodiments, a facet appearance model was generated for each facet defined within the training images. As those skilled in the art will appreciate, many of the facets may be grouped together such that a single facet appearance model is generated for those facets. In one form of such an embodiment, a single facet appearance model may be determined which models the variability of texture within each facet of the training images.
In the above embodiments, the same amount of texture information was extracted from each facet within the training images. In particular, fifty RGB texture values were extracted from each training facet. In an alternative embodiment, the amount of texture information extracted from each facet may vary in dependence upon the size of the facet. For example, more texture information may be extracted from larger facets or more texture information may be extracted from facets associated with important features of the face, such as the mouth, eyes or nose.
In the above embodiments, each appearance model was determined from a principal component analysis of a set of training data. This principal component analysis determines a linear relationship between the training data and a set of model parameters. As those skilled in the art will appreciate, techniques other than principal component analysis can be used to determine a parametric model which relates a set of parameters to the training data. This model may define a non-linear relationship between the training data and the model parameters. For example, one or more of the models within the hierarchy may comprise a neural network which relates the set of input parameters to the training data.
In the above embodiments, a principal component analysis was performed on a set of training data in order to identify a relatively small number of parameters which describe the main modes of variation within the training data. This allows a relatively small number of input parameters to be able to generate a larger set of output parameters from the model. However, as those skilled in the art will appreciate, this is not essential. One or more of the appearance models may act as transformation models in which the number of input parameters is the same as or greater than the number of output parameters. This can be used to generate a set of input parameters which can be changed by the user in some intuitive way, for example in order to identify parameters which have a linear relationship with features in the object, such as a parameter that linearly changes the amount of smile within a face image.
In the above embodiments, a set of Active matrices were used in order to identify automatically a set of appearance parameters for an input image. As those skilled in the art will appreciate, rather than having separate Active matrices for each of the components in the hierarchical appearance model, a global Active matrix may be used instead. Further, although both the shape and grey level parameters were used in order to derive the Active matrices, suitable Active matrices can be determined using just the shape information.
In the above embodiments, the variation in both the shape and texture within the training images was modelled. As those skilled in the art will appreciate, this hierarchical modelling technique can be used to model only the shape of the objects within the training images. Such a shape model could then be used to track objects within a video sequence.
In the first embodiment, the target image illustrated a computer generated head. This is not essential. For example, the target image might be a hand-drawn head or an image of a real person. Figures 13d and 13e illustrate how an embodiment with a hand-drawn character might be used in character animation. In particular, Figure 13d shows a hand-drawn sketch of a character which, when combined with the images from the source video sequence (some of which are shown in Figure 13a), generates a target video sequence, some frames of which are shown in Figure 13e. As can be seen from a comparison of the corresponding frames in the source and target video frames, the hand-drawn sketch has been animated automatically using this technique. As those skilled in the art will appreciate, this is a much quicker and simpler technique for achieving computer animation as compared with existing systems which require the animator to manually create each frame of the animation. In particular, in this embodiment, all that is required is a video sequence of a real-life actor acting out the scene to be animated, together with a single sketch of the character to be animated.
The above embodiment has described the way in which a target image can be used to modify a source video sequence. In order to do this, a set of appearance parameters has to be automatically calculated for each frame in the video sequence. This involved the use of a number of Active matrices which relate image errors to appearance parameter changes. As those skilled in the art will appreciate, similar processing is required in other applications, such as the tracking of an object within a video sequence, the tracking of a human face within a video sequence or the tracking of a knee joint in an MRI scan.
In the above embodiment, the appearance model was used to model the variations in facial expressions and 3D pose of human heads. As those skilled in the art will appreciate, the appearance model can be used to model the appearance of any deformable object such as parts of the body and other animals and objects. For example, the above techniques can be used to track the movement of lips in a video sequence. Such an embodiment could be used in film dubbing applications in order to synchronise the lip movements with the dubbed sound. This animation technique might also be used to give animals and other objects human-like characteristics by combining images of them with a video sequence of an actor. This technique can also be used for monitoring the shape and appearance of objects passing along a production line for quality control purposes.
In the above embodiment, the appearance model was generated by using a principal component analysis of shape and texture data which is extracted from the training images. As those skilled in the art will appreciate, by modelling the features of the training heads in this way, it is possible to accurately model each head by just a small number of parameters. However, other modelling techniques, such as vector quantisation and wavelet techniques can be used.
In the above embodiments, the training images used to generate the appearance model were all colour images in which each pixel had an RGB value. As those skilled in the art will appreciate, the way in which the colour is represented in this embodiment is not important. In particular, rather than each pixel having a red, green and blue value, they might be represented by a chrominance and a luminance component or by hue, saturation and value components. Alternatively still, the training images may be black and white images, in which case only grey level data would be extracted from the facets in the training images. Additionally, the resolution of each training image may be different.
In the above embodiment, during the automatic generation of the appearance parameters, and in particular during the iterative updating of these appearance parameters, the error between the input image and the model image was generated using the appearance model. Since this iterative technique still requires a relatively accurate initial estimate for the appearance parameters, it is possible initially to perform the iterations using lower resolution images and, once convergence has been reached for the lower resolutions, to then increase the resolution of the images and to repeat the iterations for the higher resolutions. In such an embodiment, separate Active matrices would be required for each of the resolutions.
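A coarse-to-fine version of the fitting loop might be organised as in the sketch below, where each resolution level carries its own stand-in renderer and Active matrix and the converged parameters at one level seed the next; the toy linear models and level sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
M_full = rng.standard_normal((200, 6))           # toy "full resolution" renderer, F(p) = M p
p_true = rng.standard_normal(6)

levels = []
for rows in (50, 100, 200):                      # low -> high resolution
    M = M_full[:rows]                            # toy rendering at this resolution
    levels.append({"rows": rows,
                   "render": (lambda p, M=M: M @ p),
                   "active": np.linalg.pinv(M)}) # separate Active matrix per resolution

p = np.zeros(6)                                  # rough initial estimate
for level in levels:
    target = M_full[:level["rows"]] @ p_true     # the input image at this resolution
    for _ in range(20):                          # iterate to convergence at this level
        p = p + level["active"] @ (target - level["render"](p))
print(np.allclose(p, p_true))
```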
In the above embodiment, the difference parameters were determined by comparing the image of the first actor from one of the frames of the source video sequence with the image of the second actor in the target image. In an alternative embodiment, a separate image of the first actor may be provided which does not form part of the source video sequence.
In the above embodiments, each of the appearance models modelled variations in two-dimensional images. The above modelling technique could be adapted to work with 3D images and animations. In such an embodiment, the training images used to generate the appearance model would normally include 3D images instead of 2D images. The three-dimensional models may be obtained using a three-dimensional scanner, which typically works either by using laser range-finding over the object or by using one or more stereo pairs of cameras. Once a 3D hierarchical appearance model has been created from the training models, new 3D models can be generated by adjusting the appearance parameters and existing 3D models can be animated using the same differencing technique that was used in the two-dimensional embodiment described above. This 3D model can then be used to track 3D objects directly within a 3D animation. Alternatively, a 2D model may be used to track the 3D object within a video sequence and then use the result to generate 3D data for the tracked object.
In the above embodiment, a set of difference parameters were identified which describe the main differences between the head in the video sequence and the head in the target image, which difference parameters were used to modify the video sequence so as to generate a target video sequence showing the second head. In the embodiment, the set of difference parameters were added to a set of appearance parameters for the current frame being processed. In an alternative embodiment, the difference parameters may be weighted so that, for example, the target video sequence shows a head having characteristics from both the first and second actors.
In the above embodiment, a hierarchical appearance model is used to model the appearance of human faces. The model is then used to modify a source video sequence showing a first actor performing a scene to generate a target video sequence showing a second actor performing the same scene. As those skilled in the art will appreciate, the hierarchical model presented above can be used in various other applications. For example, the hierarchical appearance model can be used for synthetic two-dimensional or three-dimensional character generation; video compression when the video is substantially that of an object which is modelled by the appearance model; object recognition for security purposes; face tracking for human performance analysis or human computer interaction and the like; 3D model generation from two-dimensional images; and image editing (for example, making people look older or younger, fatter or thinner, etc.).
In the above embodiment, an iterative process was used to update an estimated set of appearance parameters for an input image. This iterative process continued until an error between the actual image and the image predicted by the model was below a predetermined threshold. In an alternative embodiment, where there is only a predetermined amount of time available for determining a set of appearance parameters for an input image, this iterative routine may be performed for a predetermined period of time or for a predetermined number of iterations.
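A minimal sketch of these alternative stopping rules, reusing the same kind of illustrative render function and update matrix assumed in the earlier sketches, might look like this:

    import time
    import numpy as np

    def fit_with_budget(image, render, update_matrix, initial_params,
                        threshold=1e-3, max_iterations=50, time_budget_s=None):
        # Iterate until the residual falls below the threshold, the iteration
        # cap is reached, or an optional wall-clock budget is exhausted.
        params = np.asarray(initial_params, dtype=float)
        start = time.monotonic()
        for _ in range(max_iterations):
            residual = image - render(params)
            if float(residual @ residual) < threshold:
                break
            if time_budget_s is not None and time.monotonic() - start > time_budget_s:
                break
            params = params + update_matrix @ residual
        return params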

Claims (1)

CLAIMS:
1. A parametric model for modelling the shape of an object, the model comprising:
data defining a function which relates a set of input parameters to a set of locations which identify the relative positions of a plurality of predetermined points on the object; characterised in that said data defines a hierarchical set of functions in which a function in a top layer of the hierarchy is operable to generate a set of output parameters from a set of input parameters and in which one or more functions in a bottom layer of the hierarchy are operable to receive parameters output from one or more functions from a higher layer of the hierarchy and to generate therefrom at least some of said locations which identify the relative positions of said predetermined points.
    2. A model according to claim 1, wherein said hierarchy comprises one or more intermediate layers of functions which are operable to receive parameters output from one or more functions from a higher layer of the hierarchy and to generate therefrom a set of output parameters for input to functions in a lower layer of the hierarchy.
    3. A model according to claim 1 or 2, for modelling the two-dimensional shape of the object by identifying the relative positions of said predetermined points in a predetermined plane.
    4. A model according to claim 1 or 2, for modelling the three-dimensional shape of the object by identifying the relative positions of the predetermined points in a three-dimensional space.
    5. A model according to any preceding claim, wherein one or more of said functions comprises a linear function which linearly relates the input parameters to the function to the output parameters of the function.
    6. A model according to claim 5, wherein said one or more linear functions are identified from a principal component analysis of training data derived from a set of training objects.
    7. A model according to any preceding claim, wherein one or more of said functions are non-linear.
    8. A model according to claim 7, wherein at least one of said non-linear functions comprises a neural network.
    9. A model according to any preceding claim, wherein the number of parameters input to at least one of said functions is smaller than the number of parameters output from the function.
10. A model according to any preceding claim, wherein the number of input parameters to at least one of said functions is greater than or equal to the number of parameters output by the function.
    11. A model according to any preceding claim for modelling the shape and texture of the object, the model further comprising data defining a hierarchical set of functions in which a function in a top layer of the hierarchy is operable to generate a set of output parameters from a set of input parameters and in which one or more functions in a bottom layer of the hierarchy are operable to receive parameters output from one or more functions from a higher layer of the hierarchy and to generate therefrom texture information for the object.
    12. A model according to claim 11, wherein the texture hierarchy has the same structure as the shape hierarchy.
    13. A model according to claim 11 or 12, wherein one or more of said functions are operable to relate an input set of shape and texture parameters to an output set of appearance parameters defining both shape and texture.
    14. A model according to any preceding claim, wherein said object is a deformable object.
15. A model according to claim 14, wherein said deformable object includes a human face.
    16. A model according to claim 15, wherein said function in said top layer of the hierarchy models the shape of the entire face and wherein said hierarchy includes a function which models the shape of the mouth.
    17. A model according to claim 16, wherein said hierarchy further comprises a function for modelling the shape of the eyes.
18. A model according to any preceding claim, wherein the or each function in the bottom layer of the hierarchy identifies the positions of a plurality of predetermined points according to a predefined function of a smaller number of control point positions.
    19. A model according to claim 18, wherein the predefined function for each of the plurality of points is a linear mapping of the control point positions and the control points are the three corners of a triangular facet.
    20. A model according to claim 18, wherein the predefined function for each of the plurality of points is a predefined non-linear mapping of a fixed number of control point positions.
21. A model according to claim 18, wherein the predefined function for each of the plurality of points is a predefined displacement from a single control point.
    22. A method of determining a set of appearance parameters representative of the appearance of an object, the method comprising the steps of:
(i) storing a parametric model according to any of claims 1 to 21 which relates a set of input parameters to appearance data representative of the appearance of the object; (ii) storing at least one function which relates a change in the input parameters to an error between actual appearance data for the object and appearance data determined from the set of input parameters and said parametric model; (iii) initially estimating a current set of input parameters for the object; (iv) determining appearance data for the object from the current set of input parameters and the stored parametric model; (v) determining the error between actual appearance data of the object and the appearance data determined from the current set of input parameters; (vi) determining a change in the input parameters using said at least one stored function and said determined error; and (vii) updating the current set of input parameters with the determined change in the input parameters.
    23. A method according to claim 22, further comprising the step of repeating steps (iv) to (vii) until the error determined in step (v) is less than a predetermined threshold.
    24. A method according to claim 22, further comprising the step of repeating steps (iv) to (vii) for a predetermined amount of time or for a predetermined number of repetitions.
    25. A method according to claim 22, 23 or 24, wherein said second storing step stores a plurality of functions, one associated with each function within the hierarchical model.
    26. A method of tracking an object comprising the steps of:
(i) storing a parametric model according to any of claims 1 to 21 which relates a set of input parameters to appearance data representative of the appearance of the object; (ii) storing at least one function which relates a change in the input parameters to an error between the actual appearance data for the object and the appearance data determined from the set of input parameters and said parametric model; (iii) initially estimating a current set of input parameters for the object; (iv) determining the appearance data for the object from the current set of input parameters and the stored parametric model; (v) determining an error between the actual appearance data for the object and the appearance data for the object determined from the current set of input parameters; (vi) determining a change in the input parameters using the at least one stored function and the determined error; (vii) updating the current set of input parameters with said change in the input parameters; (viii) repeating steps (iv) to (vii) in order to reduce the error determined in step (v); and (ix) repeating steps (iii) to (viii) to track the object.

27. An apparatus for determining a set of appearance parameters representative of the appearance of an object, the apparatus comprising: means for storing (i) a parametric model according to any of claims 1 to 21 which relates a set of input parameters to appearance data representative of the appearance of the object; and (ii) at least one function which relates a change in the input parameters to an error between actual appearance data for the object and
    (i) means for determining appearance data for the object from the current set of input parameters and the stored parametric model; (ii) means for determining the error between the actual appearance data for the object and the appearance data for the object determined from the current set of input parameters; (iii) means for determining a change in the input parameters using said at least one stored function and said determined error; and (iv) means for updating the current set of input parameters with the determined change in the input parameters.
    28. An apparatus according to claim 27, wherein said updating means is operable to update iteratively the current set of input parameters until the error determining means determines an error which is less than a predetermined threshold.
29. An apparatus according to claim 27 or 28, wherein said storing means stores a plurality of functions, one associated with each function within the hierarchical model.
    30. An apparatus for tracking an object comprising:
    means for storing (i) a parametric model according to any of claims 1 to 21 which relates a set of input parameters to appearance data representative of the appearance of the object; and (ii) at least one function which relates a change in the input parameters to an error between actual appearance data for the object and the appearance data for the object determined from the set of input parameters and said parametric model; means for receiving an initial estimate of a current set of input parameters for the object; means for updating the current set of input parameters comprising:
(i) means for determining appearance data for the object from the current set of input parameters and the stored parametric model; (ii) means for determining an error between actual appearance data for the object and the appearance data for the object determined from the current set of input parameters; (iii) means for determining a change in the input parameters using the at least one stored function and the determined error; and (iv) means for updating the current set of input parameters with said change in the input parameters; wherein said updating means is operable to update iteratively the current set of input parameters in order to reduce the determined error, wherein said receiving means is operable to receive further estimates of the current input parameters and wherein said update means is operable to update the received estimates of the current input parameters in order to track said object.
    31. A storage medium storing the parametric model according to any of claims 1 to 21 or storing processor implementable instructions for controlling a processor to implement the method of any one of claims 22 to 26.
    32. Processor implementable instructions for controlling a processor to implement the method of any one of claims 22 to 26.
GB9927314A 1999-11-18 1999-11-18 Image processing system using hierarchical set of functions Withdrawn GB2359971A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB9927314A GB2359971A (en) 1999-11-18 1999-11-18 Image processing system using hierarchical set of functions
EP00976195A EP1272979A1 (en) 1999-11-18 2000-11-20 Image processing system
AU14070/01A AU1407001A (en) 1999-11-18 2000-11-20 Image processing system
PCT/GB2000/004411 WO2001037222A1 (en) 1999-11-18 2000-11-20 Image processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9927314A GB2359971A (en) 1999-11-18 1999-11-18 Image processing system using hierarchical set of functions

Publications (2)

Publication Number Publication Date
GB9927314D0 GB9927314D0 (en) 2000-01-12
GB2359971A true GB2359971A (en) 2001-09-05

Family

ID=10864763

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9927314A Withdrawn GB2359971A (en) 1999-11-18 1999-11-18 Image processing system using hierarchical set of functions

Country Status (4)

Country Link
EP (1) EP1272979A1 (en)
AU (1) AU1407001A (en)
GB (1) GB2359971A (en)
WO (1) WO2001037222A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7130446B2 (en) 2001-12-03 2006-10-31 Microsoft Corporation Automatic detection and tracking of multiple individuals using multiple cues
AU2003260902A1 (en) * 2002-10-16 2004-05-04 Koninklijke Philips Electronics N.V. Hierarchical image segmentation
GB2402311B (en) * 2003-05-27 2006-03-08 Canon Kk Image processing
GB2402535B (en) 2003-06-05 2006-06-21 Canon Kk Image processing
US8422781B2 (en) * 2008-12-03 2013-04-16 Industrial Technology Research Institute Methods and systems for creating a hierarchical appearance model
US9928635B2 (en) * 2012-09-19 2018-03-27 Commonwealth Scientific And Industrial Research Organisation System and method of generating a non-rigid model

Also Published As

Publication number Publication date
GB9927314D0 (en) 2000-01-12
AU1407001A (en) 2001-05-30
WO2001037222A9 (en) 2001-08-09
WO2001037222A1 (en) 2001-05-25
EP1272979A1 (en) 2003-01-08

Similar Documents

Publication Publication Date Title
Blanz et al. A morphable model for the synthesis of 3D faces
Grassal et al. Neural head avatars from monocular rgb videos
Beymer et al. Image representations for visual learning
US6556196B1 (en) Method and apparatus for the processing of images
Beymer et al. Example based image analysis and synthesis
US5995110A (en) Method and system for the placement of texture on three-dimensional objects
Vetter Synthesis of novel views from a single face image
US5745668A (en) Example-based image analysis and synthesis using pixelwise correspondence
Jones et al. Multidimensional morphable models: A framework for representing and matching object classes
Vetter et al. A bootstrapping algorithm for learning linear models of object classes
US5774129A (en) Image analysis and synthesis networks using shape and texture information
Pighin et al. Modeling and animating realistic faces from images
EP0990224B1 (en) Generating an image of a three-dimensional object
US5990901A (en) Model based image editing and correction
US6016148A (en) Automated mapping of facial images to animation wireframes topologies
US5758046A (en) Method and apparatus for creating lifelike digital representations of hair and other fine-grained images
Nastar et al. Flexible images: matching and recognition using learned deformations
CN111710036A (en) Method, device and equipment for constructing three-dimensional face model and storage medium
Fua et al. Animated heads from ordinary images: A least-squares approach
GB2359971A (en) Image processing system using hierarchical set of functions
Kang et al. Appearance-based structure from motion using linear classes of 3-d models
US20030146918A1 (en) Appearance modelling
GB2342026A (en) Graphics and image processing system
Gu et al. Resampling based method for pixel-wise correspondence between 3D faces
GB2360183A (en) Image processing using parametric models

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)