WO2006097902A2 - Method of performing face recognition - Google Patents

Method of performing face recognition

Info

Publication number
WO2006097902A2
WO2006097902A2 PCT/IB2006/050811
Authority
WO
WIPO (PCT)
Prior art keywords
face model
face
model
reference face
image
Prior art date
Application number
PCT/IB2006/050811
Other languages
French (fr)
Other versions
WO2006097902A3 (en)
Inventor
Felix Gremse
Vasanth Philomin
Original Assignee
Philips Intellectual Property & Standards Gmbh
Koninklijke Philips Electronics N. V.
Priority date
Filing date
Publication date
Application filed by Philips Intellectual Property & Standards Gmbh, Koninklijke Philips Electronics N. V. filed Critical Philips Intellectual Property & Standards Gmbh
Priority to JP2008501478A priority Critical patent/JP2008533606A/en
Priority to EP06711106A priority patent/EP1864245A2/en
Priority to US11/908,443 priority patent/US20080192991A1/en
Priority to BRPI0608711-6A priority patent/BRPI0608711A2/en
Publication of WO2006097902A2 publication Critical patent/WO2006097902A2/en
Publication of WO2006097902A3 publication Critical patent/WO2006097902A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the invention relates to a method of performing face recognition, and to a system for performing face recognition.
  • Face recognition is often associated with security systems, in which face recognition technology is used to decide whether a person is to be granted or denied access to the system, or surveillance systems, which are used to identify or track a certain individual.
  • Other applications which are becoming more widespread include those of identifying users of dialog systems, such as home dialog systems, or image searching applications for locating a specific face in a video or photo archive, or finding a certain actor in a movie or other recorded video sequence.
  • Any face recognition technique is based on models of faces.
  • a database of face models is generally used, against which a probe image is compared to find the closest match.
  • a person wishing to gain entry to a system such as a building may first have to undergo a face recognition step in which it is attempted to match an image of his face to a face model in a security databank in order to determine whether the person is to be permitted or denied access.
  • a model of a face is built or trained using information obtained from images, usually a number of images of the same face, all taken under slightly different circumstances such as different lighting or different posture.
  • US2004/0071338 A1 suggests training a model for each person separately with respect to the Maximum Likelihood (ML) criterion. This is a well-known technique used for training models for many face recognition applications. In its approach to face recognition, US2004/0071338 determines the closest model for a given probe image, or image of a face, but fails to cover the eventuality that the probe image originates from an unknown person, leaving open the possibility that an unknown person could gain access to a system protected by this approach. Another disadvantage of this system is that the recognition process is quite time-consuming, so that a person has to wait for a relatively long time before the face recognition system has come up with an identification result.
  • the reason for the long delay is that, in order to determine the likelihood that a model of the database represents the same face as that in the probe image, it is necessary to carry out time-intensive computations for each model in the database in order to decide which model most closely resembles the person being subject to the identification procedure.
  • an object of the present invention is to provide a faster and more accurate way of performing face recognition.
  • the present invention provides a method of performing face recognition, which method comprises the steps of generating an average face model - comprising a matrix of states representing regions of the face - from a number of distinct face images, and training a reference face model for each one of a number of known faces, where the reference face model is based on the average face model. Therefore, the reference face model is compatible with the average face model.
  • the method further comprises the steps of acquiring a test image for a face to be identified, calculating a best path through the average face model based on the test image, evaluating a degree of similarity for each reference face model against the test image by applying the best path of the average face model to each reference face model, identifying the reference face model most similar to the test image, and accepting or rejecting the identified reference face model on the basis of the degree of similarity.
  • An appropriate system for performing face recognition comprises a number of reference face models and an average face model where each face model comprises a matrix of states representing regions of the face, an acquisition unit for acquiring a test image, and a best path calculator for calculating a best path through the average face model.
  • the system further comprises an evaluation unit for applying the best path of the average face model to each reference face model in order to evaluate a degree of similarity between each reference face model and the test image. To decide whether to accept or reject the reference face model with the greatest degree of similarity, the system comprises a decision-making unit.
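The flow just described, one expensive alignment on the average model that is then reused to score every reference model cheaply, can be sketched in a few lines. This is a toy illustration under heavy simplifying assumptions, not the patented method: each model is reduced to a matrix of state means, the Viterbi alignment is replaced by a nearest-state assignment, and a log-domain score difference stands in for the score ratio. All function names are invented for the sketch.

```python
import numpy as np

def best_path(avg_states, feats):
    """Toy alignment: assign each feature vector to the nearest
    average-model state (a crude stand-in for the Viterbi alignment)."""
    d = ((feats[:, None, :] - avg_states) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def path_score(states, feats, path):
    """Log-domain score of the features against a model along a fixed path."""
    return -((feats - states[path]) ** 2).sum()

def recognise(avg_states, ref_models, feats, threshold):
    path = best_path(avg_states, feats)        # expensive step, done once
    avg = path_score(avg_states, feats, path)
    scores = {n: path_score(s, feats, path) for n, s in ref_models.items()}
    best = max(scores, key=scores.get)
    # log-domain analogue of the score ratio used for the accept decision
    similarity = scores[best] - avg
    return best if similarity > threshold else None
```

Note how the alignment is computed only against the average model; the per-reference work is reduced to the cheap `path_score` evaluation.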
  • a face model for use in the invention is specifically a statistical model composed of a matrix of states, each of which represents a region of a face, so that one particular state can be associated with a local facial feature such as an ear, an eye, an eyebrow, or a part of a facial feature.
  • Each state comprises, for example, a Gaussian mixture model for modelling the probability of a local feature vector given the local facial region.
  • a linear sequence of such states can be modelled using a type of statistical model known as the hidden Markov model (HMM).
  • the statistical model used in the present invention is preferably a two-dimensional model, such as a pseudo two-dimensional HMM (P2DHMM), which models two-dimensional data by using an outer HMM for the vertical direction whose states are themselves HMMs modelling the horizontal direction.
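A P2DHMM of this kind can be represented, for example, by nesting one data structure inside another. The sketch below is only a plausible container layout for what the description above specifies (Gaussian-mixture states, inner row HMMs, an outer vertical HMM); the class and field names are invented.

```python
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class GaussianMixtureState:
    """One state, modelling a local facial region (e.g. part of an eye)."""
    means: np.ndarray      # (n_components, feature_dim)
    variances: np.ndarray  # (n_components, feature_dim), diagonal covariances
    weights: np.ndarray    # (n_components,), mixture weights summing to 1

@dataclass
class RowHMM:
    """Inner (horizontal) HMM: a left-to-right sequence of mixture states."""
    states: List[GaussianMixtureState]
    transitions: np.ndarray  # (n_states, n_states)

@dataclass
class P2DHMM:
    """Outer (vertical) HMM whose 'superstates' are whole row HMMs."""
    rows: List[RowHMM]
    row_transitions: np.ndarray  # (n_rows, n_rows)
```

A face model then becomes a grid of such states, one row HMM per horizontal band of the face.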
  • the strength of HMMs and therefore also P2DHMMs is their ability to compensate for signal 'distortions' like stretches and shifts.
  • a distortion can arise if the face is turned away from the camera, is foreshortened, or if the face has been inaccurately detected and localised.
  • regions of the face are first identified in the image and then compared to the corresponding regions of the model, in a technique known as 'alignment' or 'segmentation'.
  • An 'average face model', also called the 'universal background model' (UBM) or 'stranger model', is 'built' or trained using many images from many different people, e.g. 400 images from 100 people.
  • the images used for training are preferably chosen to be a representative cross-section through all suitable types of faces.
  • the average face model might be trained using faces of adults of any appropriate nationality.
  • An archive searching system used to locate images of actors in a video archive might require an average face model based on images of people over a broader age group.
  • the average face model can be trained using known methods which apply an 'expectation maximization' algorithm, which is commonly used to estimate the probability density of a set of given data, in this case the facial features of an image.
  • This method of training, also called 'maximum likelihood' (ML) training, is slow, requiring up to several hours to train the average face model, but this initial investment only needs to be carried out once.
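The expectation-maximization idea referred to above can be illustrated on a single Gaussian mixture; in the patent's setting one such mixture sits inside each state of the P2DHMM. The sketch below is a generic diagonal-covariance EM fit, not the patent's actual training code, and the function name, quantile-based initialisation and parameters are assumptions made for the example.

```python
import numpy as np

def em_gaussian_mixture(X, n_components, n_iter=50):
    """Fit a diagonal-covariance Gaussian mixture to feature vectors X
    by expectation maximization (a maximum-likelihood estimate)."""
    n, d = X.shape
    # spread the initial means over the data quantiles
    means = np.quantile(X, np.linspace(0.1, 0.9, n_components), axis=0)
    var = np.tile(X.var(axis=0) + 1e-6, (n_components, 1))
    w = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        log_p = (-0.5 * (((X[:, None, :] - means) ** 2) / var
                         + np.log(2 * np.pi * var)).sum(axis=2)
                 + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0) + 1e-12
        w = nk / n
        means = (resp.T @ X) / nk[:, None]
        var = (resp.T @ (X ** 2)) / nk[:, None] - means ** 2 + 1e-6
    return w, means, var
```

Training a full average face model repeats this kind of re-estimation across all states, which is why it can take hours but only needs to be done once.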
  • a 'reference face model' is used to model a particular face.
  • a reference face model might be used to model the face of a person permitted to gain access to a system.
  • Such a reference face model is also trained using the method for training the average face model, but with far fewer images, all of which are of that person's face.
  • a system for face recognition preferably comprises a number of reference face models, at least one for each face which it can identify.
  • a security system might have a database of reference face models, one for each of a number of employees who are to be permitted access to the system.
  • the images used to train the average face model and reference face model can be of any suitable image format, for example JPEG (Joint Photographic Experts Group), a standard commonly used for the compression of colour digital images, or some other suitable image format.
  • the images can be obtained from an archive or generated with a camera expressly for the purpose of training.
  • the test image of the person who is to be subjected to the identification procedure can also be obtained by means of a camera or video camera. An image obtained thus can be converted as necessary into a suitable electronic data format using an appropriate conversion tool.
  • the test image is then processed to extract a matrix of local feature vectors, to derive a representation of the face in the test image that is invariant to the lighting conditions but still contains relevant information about the identity of the person.
  • the test image is evaluated against each of the reference face models.
  • the feature matrix of the test image is aligned to the average face model, which can be understood to be a type of mapping of the local facial features of the feature matrix to the states of the average model.
  • an optimal path or alignment through the state sequences of the average face model is calculated for the feature matrix of the test image.
  • This optimal path is commonly referred to as the 'best path'.
  • the Viterbi algorithm is applied to find the best path efficiently.
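A standard Viterbi pass over log-probabilities illustrates this best-path search. The function below is a generic one-dimensional Viterbi decoder (for a P2DHMM the same idea is applied in a nested, pseudo-two-dimensional fashion); its name and interface are invented for the sketch.

```python
import numpy as np

def viterbi(log_emis, log_trans, log_start):
    """Find the most likely state sequence (the 'best path') given
    per-frame log emission scores (T, S), a log transition matrix
    (S, S) and log initial-state probabilities (S,)."""
    T, S = log_emis.shape
    delta = log_start + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + log_trans     # (from_state, to_state)
        back[t] = cand.argmax(axis=0)         # best predecessor per state
        delta = cand.max(axis=0) + log_emis[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):             # backtrack along the pointers
        path[t - 1] = back[t, path[t]]
    return path, delta.max()
```

The dynamic-programming table makes the search linear in the number of frames, which is why Viterbi alignment is efficient; in the patented method this cost is paid only once, on the average model.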
  • the best path is then applied to each of the reference face models of the face recognition system, and a 'degree of similarity' is efficiently computed for each reference model.
  • the degree of similarity is a score, which is calculated for a reference model when evaluating the test image against the reference face model.
  • the score is an indication of how well the test image can be applied to the reference face model, e.g. the score might denote the production probability of the image given the reference model.
  • an approximate score is computed using the best path through the average model.
  • a high degree of similarity for a reference face model indicates a relatively close match between the reference face model and the test image, whereas a low degree of similarity indicates only a poor match.
  • the most evident advantage of the method of performing face recognition according to the present invention is its successful exploitation of the similarity between face images to speed up the recognition process.
  • the calculation of the best path, a cost-intensive process which accounts for the greater part of the entire computational effort, need only be performed once for the average face model and can then be used to evaluate an image against each reference face model of a face recognition system. Therefore, using the method according to the present invention, it is not necessary to perform the cost-intensive best-path computations for each reference face model.
  • the quickest way to compute a degree of similarity is to apply the best path directly to a reference face model, so that it only remains to calculate the score.
  • the best path of the average face model can first be modified or optimised for a particular reference face model, resulting in a somewhat greater computational effort, but a correspondingly more accurate score, thereby improving even further the accuracy of the face recognition system.
  • a relatively high score for a reference face model need not necessarily mean that that reference face model is an unequivocal match for the test image, since common lighting conditions also lead to higher scores because the features are usually not totally invariant to lighting conditions.
  • the score on the average model will, in such a case, also generally be higher.
  • the degree of similarity is preferably taken to be the ratio of the score for the reference face model to the score of the average face model. Therefore, in a preferred embodiment, a score is also calculated for the average face model, and the ratio of the highest reference face model score to the average face model score is computed. This ratio might then be compared to a threshold value. If the ratio is greater than the threshold value, the system may accept the corresponding reference face model, otherwise it should reject that reference face model.
  • the fact that the reference model is derived from the average model using MAP parameter estimation supports the use of the ratio since the sensitivity of both models to the lighting conditions is similar.
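Assuming the scores are production probabilities, so that a ratio is meaningful, the accept/reject decision described here reduces to a few lines. The function name and score representation are assumptions made for the sketch.

```python
def accept_identity(ref_scores, avg_score, threshold):
    """Pick the best-scoring reference model and accept it only if its
    score ratio against the average (background) model clears the
    threshold; otherwise report no match."""
    best = max(ref_scores, key=ref_scores.get)
    ratio = ref_scores[best] / avg_score
    return (best if ratio > threshold else None), ratio
```

Dividing by the background score cancels out much of the common sensitivity to lighting, which is exactly why the ratio is preferred over the raw score.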
  • the accuracy of state-of-the-art face recognition systems depends to some extent on a threshold level, used to decide whether to accept or reject a face model identified as most closely resembling the probe image.
  • Face recognition systems to date use a single threshold value for all face models. If this threshold level is too high, a face model might be rejected, even if it is indeed the correct face model corresponding to the probe image. On the other hand, if the threshold level is too low, a face model unrelated to the probe image might incorrectly be accepted as the "correct" face model.
  • a unique similarity threshold value is assigned to each reference face model, improving the accuracy of the system's decision to accept or reject a reference face model.
  • a preferred method of calculating a similarity threshold value for a reference face model for use in a face recognition system comprises the steps of acquiring a reference face model based on a number of distinct images of the same face and acquiring a control group of unrelated face images.
  • the reference face model is evaluated against each of the unrelated face images in the control group and an evaluation score is calculated for each of the unrelated face images.
  • the evaluation scores are used to determine a similarity threshold value for this reference face model, which would cause a predefined majority of these unrelated face images to be rejected, were they to be evaluated against this reference face model.
  • each reference face model is evaluated against a control group of images.
  • Each image is of a face different to that modelled by the reference face model, and the control group of images is preferably a representative selection of faces of varying similarity to the face modelled by the reference face model.
  • An evaluation score is computed for each image of the control group, by finding the best path through an average face model and applying this best path to each of the images in the control group in order to evaluate each of them against the reference face model. The best path can also be applied to the reference face model to calculate its score.
  • the scores of each of the images in the control group and the score of the reference face model can then be used to choose a threshold value that would ensure that, in a later face recognition procedure, a predefined majority - for example 99% - of these images would be rejected when evaluated against the reference face model.
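Choosing a per-model threshold that rejects a predefined majority of the control group amounts to taking an order statistic of the control similarities. The sketch below assumes the decision rule "accept only if similarity is strictly greater than the threshold"; the function name is invented.

```python
import numpy as np

def calibrate_threshold(control_sims, reject_fraction=0.99):
    """Return a threshold such that at least `reject_fraction` of the
    control-group similarity values would be rejected (i.e. fall at or
    below the threshold) when evaluated against this reference model."""
    sims = np.sort(np.asarray(control_sims, dtype=float))
    k = int(np.ceil(reject_fraction * len(sims)))
    return sims[k - 1]   # accept only similarities strictly above this
</n```

Because every reference model gets its own control-group distribution, each model ends up with its own threshold, rather than one global value for the whole database.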
  • Such a unique similarity threshold value may be used not only in the particular method of performing face recognition described above, but in any method of performing face recognition in which, during an identification procedure, a test image is evaluated against each of the reference face models, the reference face model most closely resembling the test image is identified, and that reference face model is subsequently accepted or rejected on the basis of its similarity threshold value. It therefore offers an independent contribution in addressing the underlying object of the invention.
  • An appropriate system for calculating a similarity threshold value for a reference face model for use in a face recognition system comprises a means for acquiring a reference face model based on a number of distinct images of the same face, and a means of acquiring a control group of unrelated face images. Furthermore, the system comprises an evaluation unit for evaluating the reference face model against each of the unrelated face images of the control group, and an evaluation score calculation unit for calculating an evaluation score for each of the unrelated face images. The system additionally comprises a similarity threshold value determination unit for determining a similarity threshold value for the reference face model on the basis of the evaluation scores, which would cause a predefined majority of these unrelated face images to be rejected were they to be evaluated against this reference face model.
  • a method of training a reference face model is used in the face recognition system, which method comprises the steps of acquiring an average face model based on a number of face images of different faces and acquiring a training image of the face for which the reference face model is to be trained.
  • a training algorithm is applied to the average face model with the information obtained from the training image to give the reference face model.
  • the training image of the person which is to be used to train the reference face model for that person can be obtained, for example, by using a camera or video camera, or by scanning from a photograph, etc.
  • the image can be converted as necessary into a suitable digital format such as those described above.
  • a number of training images are used to train the reference face model for the person, and all training images are of that person.
  • a two-dimensional model, preferably a P2DHMM, is computed for each image using the method described above.
  • the training algorithm preferably an algorithm using maximum a posteriori (MAP) techniques, uses a clone or copy of the average face model and adapts this to suit the face of the person by using a feature matrix generated for the training image.
  • a further training image of the person's face is used to refine or improve the reference face model.
  • the training algorithm is applied to the old reference face model, the average face model, and the new training image to adapt the old reference model using any new image data.
  • the new image data is thereby cumulatively added to the old reference face model.
  • the reference face model will have reached a level which cannot perceptibly be improved upon, so that it is not necessary to refine it further.
  • this level is generally attained after using about ten images of the person. Since new image data is cumulatively added, without having to train the reference face model using all the known images for this person, the training process is considerably faster than existing methods of training reference face models.
  • the average face model trained using a selection of face images of different faces as described above, is preferably the same average face model used in the face recognition system. Therefore, the application of this training method together with the face recognition method according to the invention requires very little additional computational effort and is extremely advantageous. Furthermore, the training method can also be used independently with any other face recognition process, so that it offers an independent contribution in addressing the underlying object of the invention.
  • the average face model can be trained expressly for this system, or can be purchased from a supplier.
  • An appropriate system for training a reference face model comprises a means for acquiring an average face model and a means for acquiring a number of test images of the same face. Furthermore, the system comprises a reference face model generator for generating a reference face model from the training images, whereby the reference face model is based on the average face model.
  • images of faces which are to be subjected to an identification procedure are often not taken under ideal conditions. More often than not, the lighting is less than perfect, with, for example, back lighting, strong lighting from the side, or poor lighting. This results in a face image which might be subject to strong fluctuations in local intensity; for example, one side of the face might be in relative shadow while the other side is strongly illuminated. More importantly, different images of the same face can exhibit significant discrepancies in appearance, depending on the variation of the lighting conditions. Thus, a model trained from one image of a person may fail to achieve a high score on another image of the same person taken under different lighting conditions. Therefore, it is very important to transform the features into a form that is independent of the lighting conditions; otherwise, a test image of a person's face taken under less than ideal lighting conditions could result in a false rejection, or, perhaps even worse, a false acceptance.
  • a method of optimizing images is used in the face recognition process and/or training process, wherein the illumination intensity of an image is equalised by sub-dividing the image into smaller, preferably overlapping, sub-images, calculating a feature vector for each sub-image, and modifying the feature vector of a sub-image by dividing each coefficient of that feature vector by a value representing the overall intensity of that sub-image. Normally, this value corresponds to the first coefficient of the feature vector. This first coefficient is then no longer required and can subsequently be discarded.
  • the feature vector can be converted to a normalised vector.
  • the feature vectors for each sub-image of the entire image are modified, or decorrelated, in order to remove the dependence on the local illumination intensity. Both techniques significantly improve the recognition performance.
  • An appropriate system for optimizing an image for use in face recognition comprises a subdivision unit for subdividing the image into a number of sub-images, a feature vector determination unit for determining a local feature vector associated with each sub-image, and a feature vector modification unit for modifying the local feature vector associated with a sub-image by dividing each coefficient of that feature vector by a value representing the overall intensity of that sub-image, and/or by discarding a coefficient of the feature vector, and/or by converting that feature vector to a normalised vector.
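The compensation steps described above (dividing by the intensity coefficient, discarding it, then normalising the vector length) might look as follows. The feature layout, with the first coefficient representing overall sub-image intensity, e.g. the DC term of a DCT, is taken from the description; the function name is invented.

```python
import numpy as np

def normalise_block_features(F, eps=1e-8):
    """Illumination compensation for a matrix of local feature vectors,
    one row per sub-image: divide each vector by its first coefficient
    (the sub-image's overall intensity), drop that coefficient, then
    scale each vector to unit length."""
    F = np.asarray(F, dtype=float)
    F = F / (F[:, :1] + eps)   # remove dependence on local intensity
    F = F[:, 1:]               # first coefficient is now constant; drop it
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    return F / np.maximum(norms, eps)
```

After this transform, two sub-images that differ only by a brightness gain map to (nearly) identical feature vectors, which is precisely the invariance the recognition process needs.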
  • Fig. 1 is a block diagram of a system for performing face recognition
  • Fig. 2 is a block diagram of a system for training an average face model for use in a face recognition system
  • Fig. 3a is a block diagram of a system for training a reference face model for use in a face recognition system according to a first embodiment
  • Fig. 3b is a block diagram of a system for training a reference face model for use in a face recognition system according to a second embodiment
  • Fig. 4 is a block diagram showing a system for calculating a similarity threshold level for a reference face model
  • Fig. 5 is a block diagram showing a system for optimizing images for use in face recognition.
  • Fig. 1 shows the main blocks of a system for face recognition.
  • An image acquisition unit 2 such as a camera, video camera or closed-circuit TV camera is used to capture a test image I_T of the person to be identified.
  • the image I_T is processed in an image processing block 8, in which a matrix of feature vectors, or feature matrix, is calculated for the image I_T, or simply extracted from the image I_T, according to the image type.
  • the feature vectors may be optimised to compensate for any uneven lighting effects in the image I_T by modifying the feature vectors as appropriate. This modification or compensation step is described in more detail under Fig. 5.
  • an optimal state sequence or best path 10 for the test image I_T is calculated through the average face model M_AV by applying the Viterbi algorithm, in a method of alignment explained in the description above, in a best path calculation block 3.
  • This best path 10 is then used in an evaluation unit 4 as a basis for calculating the degree of similarity, or score, for each of a number of reference face models M_1, M_2, ..., M_n retrieved from a database 6.
  • the highest score 11 is passed to a decision making unit 5, as is the score 12 for the average face model.
  • the ratio of these two scores 11, 12 is calculated and compared to a threshold value 13 read from a file.
  • the threshold value 13 is the threshold value corresponding to the reference face model which attained the highest score 11 in evaluation against the test image I_T. The manner in which such a threshold value can be obtained is described in detail under Fig. 4.
  • the output 14 of the decision-making unit 5 depends on the result of the comparison. If the ratio of the two scores 11, 12 falls below the threshold value 13, then even the closest-fitting reference face model has failed, i.e. the system must conclude that the person whose face has been captured in the test image I_T cannot be identified from among the reference face models in its database 6. In this case the output 14 might be a message to indicate identification failure. If the system is a security system, the person would be denied access. If the system is an archive searching system, it might report that the test image I_T has not been located in the archive.
  • If the comparison has been successful, i.e. the ratio of the two scores 11, 12 lies above the threshold value 13, then that reference face model can be taken to match the person whose test image I_T is undergoing the face recognition process. In this case, the person might be granted access to the system, or the system reports a successful search result, as appropriate.
  • Fig. 2 illustrates the creation of an average face model M_AV for use in the face recognition system described above.
  • a set of feature vectors 21, or feature vector matrix, is calculated for or extracted from each image F_1, F_2, ..., F_n as necessary, and forwarded to a training unit 22.
  • a method of training is applied to the processed feature vectors 21 of each image F_1, F_2, ..., F_n.
  • the training method uses the expectation maximization (EM) algorithm following a maximum likelihood (ML) criterion to find the model parameters for the average face model M_AV.
  • the average face model M_AV, as a pseudo two-dimensional hidden Markov model (P2DHMM), describes the general likelihood of each of the local features of a face. Faces with 'average' facial features will achieve a higher score than faces exhibiting more unusual facial features. A face image taken under common lighting situations will also achieve a higher score.
  • the number of face images F_1, F_2, ..., F_n in the collection is chosen to give a satisfactory average face model M_AV.
  • Fig. 3a shows a system for training a reference face model M_1, preferably for use in the above-mentioned face recognition system, for a particular person.
  • the training system is supplied with a number of training images T_1, T_2, ..., T_m, all of that person's face.
  • a feature vector matrix is derived from each training image T_1, T_2, ..., T_m.
  • the feature vectors for each training image T_1, T_2, ..., T_m can first be processed in the image processing unit 30, in a manner described in more detail under Fig. 5, to compensate for any uneven illumination effects.
  • a copy or clone of the average face model M_AV is used, along with the information obtained from the training images T_1, T_2, ..., T_m, as input to a reference face model generator 31.
  • the average face model M_AV is used as a starting point, and is modified using information extracted from the images T_1, T_2, ..., T_m under application of maximum a posteriori (MAP) parameter estimation in order to arrive at a reference face model M_1 for the face depicted in the training images T_1, T_2, ..., T_m.
  • the initial training of a person's reference face model M_1 can take effect using a minimum of one image of that person's face, but evidently a greater number of images will give a better reference face model M_1.
  • One method of MAP parameter estimation for a P2DHMM whose states are Gaussian mixtures is the following: the best path through the average model is computed for each training image. The feature vectors (also referred to as "features" in the following) are then assigned to the states of the P2DHMM according to the best path. Each feature assigned to a Gaussian mixture is then assigned to the closest Gaussian of the mixture. The mean of the Gaussian is set to a weighted average of the average model's mean and the mean of the features.
  • the reference model has thus been altered to give a better representation of the appearance of the person in the training image.
  • Other parameters of the P2DHMM can be altered in a similar manner, or can simply be copied from the average model, since the means are the most important parameters.
  • the sum of the features - which was computed to estimate the mean of the features - and the total number, or count, of the features are stored with the Gaussian to enable the incremental training described below.
  • the reference face model M_1 for a person can be further improved by refining it using additional image data T_new of that person's face.
  • a further training image T_new has been acquired for the person.
  • the new training image T_new is first processed in an image-processing unit 30 as described under Fig. 3a above.
  • Image information from the new training image T_new, along with the average face model M_AV and a copy M_1' of the reference face model for this person, is input to the reference face model generator 31, in which MAP parameter estimation is applied to the old and new data to give an improved reference face model M_1 for this person.
  • the incremental MAP training can be implemented in the following way: the features of the new training images are assigned to the Gaussians as described above, where the average model is used for the assignment.
  • the mean of the reference model's Gaussian has to be set to a weighted average of the average model's mean and the mean of all training features.
  • the mean of all training features is easily computed since the sum and the count of the old features are stored along with the Gaussian.
  • the sum and the count are updated by including the new features to enable further training sessions.
  • the same reference model will result, no matter in which order the training images arrive.
  • each reference face model M 1 , M 2 , ..., M n of a face recognition database can be supplied with its own specific similarity threshold value.
  • Fig. 4 shows a system for generating a unique similarity threshold value for a reference face model M n .
  • An existing reference face model M n for a particular person is acquired.
  • a control group of unrelated face images G 1 , G 2 , ...Gk is also acquired.
  • These images G 1 , G 2 , ...Gk are chosen as a representative selection of faces of varying degrees of similarity to the person modelled by the reference face model M n .
  • the images are first processed in an image-processing unit 42, described in more detail under Fig. 5, to extract a feature matrix 48 for each image.
  • In a best path calculation unit 40, the best path 47 through the average face model MAV is calculated for each image; the score 43 on the average model MAV is also computed.
  • The feature matrices 48, scores 43 and best paths 47 only have to be computed once, since the average model never changes, and can be saved in a file F for later use.
  • Unit 44 computes the degrees of similarity 49 from the reference model's scores and the average model's scores.
  • the similarity threshold determination unit 45 requires the degrees of similarity 49 for all control group images G 1 , G 2 , ...Gk to find a threshold value V n that will result in the rejection of the majority of the control group images G 1 , G 2 , ...Gk, when compared to the reference model M n .
  • the scores 43 for the reference model M n are supplied by unit 41 which requires the best paths 47 and the feature matrices 48 of the control group images as well as those of the reference model M n .
  • The computationally expensive part is the computation of the best path 47 through the average model MAV. However, this step can be performed offline, whereas the actual calibration is very fast and can be performed online, directly after training the reference face model M n .
  • Fig. 5 shows components of a system for image optimization, which can be used as the image processing units 8, 20, 30, 42 mentioned in the previous figure descriptions.
  • An image I is input to an image subdivision unit 50, which divides the image into smaller, overlapping sub-images. Allowing the sub-images to overlap to some extent improves the overall accuracy of a model, which will eventually be derived from the input image.
  • the sub-images 53 are forwarded to a feature vector determination unit 51, which computes a local feature vector 54 for each sub-image 53.
  • A possible method of computing the local features is to apply the discrete cosine transform to the local sub-image and to extract a subset of the frequency coefficients.
  • the illumination intensity of each sub-image 53 is then equalised by modifying its local feature vector 54 in a feature vector modification unit 52.
  • the output of the feature vector modification unit 52 is thus a matrix 55 of decorrelated local feature vectors describing the input image I.
  • This feature vector matrix 55 is used in the systems for training face models, for face recognition, and for similarity threshold value calculation, as described above.
  • The methods for face recognition, for training a reference face model, for optimizing images for use in a face recognition system, and for calculating similarity threshold values, and therefore also the corresponding systems for face recognition, for training a reference face model, for calculating a similarity threshold value for a reference face model, and for optimising an image for use in a face recognition system, can be utilised in any suitable combination, even together with state-of-the-art face recognition systems and training methods, so that these combinations also fall within the scope of the invention.
  • a “unit” may comprise a number of blocks or devices, unless explicitly described as a single entity.
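The incremental MAP mean update described in the points above, in which each Gaussian stores the running sum and count of its assigned features so that further training images can be folded in later, can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the relevance factor and all names are assumptions.

```python
# Illustrative sketch of the incremental MAP mean update described above.
# Each Gaussian stores the running sum and count of the features assigned
# to it, so new training images can be folded in without revisiting old ones.

import numpy as np

class AdaptedGaussian:
    def __init__(self, avg_mean, relevance=16.0):
        self.avg_mean = np.asarray(avg_mean, dtype=float)  # mean from the average model
        self.feat_sum = np.zeros_like(self.avg_mean)       # running sum of assigned features
        self.count = 0                                     # number of assigned features
        self.relevance = relevance                         # MAP relevance factor (assumed value)

    def add_features(self, features):
        """Fold the features of a new training image into the stored statistics."""
        for f in features:
            self.feat_sum += f
            self.count += 1

    def map_mean(self):
        """Weighted average of the average model's mean and the training-feature mean."""
        if self.count == 0:
            return self.avg_mean.copy()
        train_mean = self.feat_sum / self.count
        w = self.count / (self.count + self.relevance)
        return w * train_mean + (1.0 - w) * self.avg_mean
```

Because only the sum and the count are stored, the resulting reference model is the same no matter in which order the training images arrive, as stated above.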


Abstract

The invention describes a method of performing face recognition, which method comprises the steps of generating an average face model (MAV) - comprising a matrix of states representing regions of the face - from a number of distinct face images (I1, I2, ...Ij) and training a reference face model (M1, M2, ..., Mn) for each one of a number of known faces, where the reference face model (M1, M2, ... , Mn) is based on the average face model (MAV). A test image (IT) is acquired for a face to be identified, and a best path through the average face model (MAV) is calculated, based on the test image (IT). A degree of similarity is evaluated for each reference face model (M1, M2,..., Mn) against the test image (IT) by applying the best path of the average face model (MAV) to each reference face model (M1, M2,..., Mn) to identify the reference face model (M1, M2, ..., Mn) most similar to the test image (IT), which identified reference face model (M1, M2, ..., Mn) is subsequently accepted or rejected on the basis of its degree of similarity. Furthermore, the invention describes a system for performing face recognition. Also, the invention describes a method of and system for training a reference face model (M1) which may be used in the face recognition system, a method of and system for calculating a similarity threshold value for a reference face model (Mn) which may be used in the face recognition system, and a method of and system for optimizing images (I, IT, G1, G2, ...Gk, T1, T2, ..., Tm, Tnew) which may be used in the face recognition system.

Description

Method of performing face recognition
The invention relates to a method of performing face recognition, and to a system for performing face recognition.
Applications involving face recognition are often associated with security systems, in which face recognition technology is used to decide whether a person is to be granted or denied access to the system, or surveillance systems, which are used to identify or track a certain individual. Other applications which are becoming more widespread include those of identifying users of dialog systems, such as home dialog systems, or image searching applications for locating a specific face in a video or photo archive, or finding a certain actor in a movie or other recorded video sequence.
Any face recognition technique is based on models of faces. A database of face models is generally used, against which a probe image is compared to find the closest match. For example, a person wishing to gain entry to a system such as a building may first have to undergo a face recognition step in which it is attempted to match an image of his face to a face model in a security databank in order to determine whether the person is to be permitted or denied access. A model of a face is built or trained using information obtained from images, usually a number of images of the same face, all taken under slightly different circumstances such as different lighting or different posture.
US2004/0071338 Al suggests training a model for each person separately with respect to the Maximum Likelihood (ML) criterion. This is a well-known technique used for training models in many face recognition applications. In its approach to face recognition, US2004/0071338 determines the closest model for a given probe image, or image of a face, but fails to cover the eventuality that the probe image originates from an unknown person, leaving open the possibility that an unknown person could gain access to a system protected by this approach. Another disadvantage of this system is that the recognition process is quite time-consuming, so that a person has to wait for a relatively long time before the face recognition system has come up with an identification result. The reason for the long delay is that, in order to determine the likelihood that a model of the database represents the same face as that in the probe image, it is necessary to carry out time-intensive computations for each model in the database in order to decide which model most closely resembles the person being subjected to the identification procedure. However, in most face recognition systems, it is desirable that the face recognition be completed as quickly as possible, since any perceived time delay will annoy the user.
Furthermore, it is unfortunately often the case that the conditions under which the probe image is captured may be less than ideal. Apart from being unable to precisely control the aspect at which the user faces the camera, or the facial expression he assumes, varying illumination conditions lead to the same face appearing differently in different images. A face recognition system used in real applications has to function in such an unconstrained environment.
Overall, it remains a problem that the entire face recognition process is often too slow and too inaccurate, i.e. that many face recognition systems exhibit unsatisfactory behaviour.
Therefore, an object of the present invention is to provide a faster and more accurate way of performing face recognition.
To this end, the present invention provides a method of performing face recognition, which method comprises the steps of generating an average face model - comprising a matrix of states representing regions of the face - from a number of distinct face images, and training a reference face model for each one of a number of known faces, where the reference face model is based on the average face model. Therefore, the reference face model is compatible with the average face model. The method further comprises the steps of acquiring a test image for a face to be identified, calculating a best path through the average face model based on the test image, evaluating a degree of similarity for each reference face model against the test image by applying the best path of the average face model to each reference face model, identifying the reference face model most similar to the test image, and accepting or rejecting the identified reference face model on the basis of the degree of similarity.
An appropriate system for performing face recognition comprises a number of reference face models and an average face model where each face model comprises a matrix of states representing regions of the face, an acquisition unit for acquiring a test image, and a best path calculator for calculating a best path through the average face model. The system further comprises an evaluation unit for applying the best path of the average face model to each reference face model in order to evaluate a degree of similarity between each reference face model and the test image. To decide whether to accept or reject the reference face model with the greatest degree of similarity, the system comprises a decision-making unit.
A face model for use in the invention is specifically a statistical model composed of a matrix of states, each of which represents a region of a face, so that one particular state can be associated with a local facial feature such as an ear, an eye, an eyebrow, or a part of a facial feature. Each state comprises, for example, a Gaussian mixture model for modelling the probability of a local feature vector given the local facial region. A linear sequence of such states can be modelled using a type of statistical model known as the hidden Markov model (HMM). However, since a facial image is a two-dimensional image, in which each row can be seen as a linear state sequence, the statistical model used in the present invention is preferably a two- dimensional model, such as a pseudo two-dimensional HMM (P2DHMM) which models two-dimensional data by using an outer HMM for the vertical direction whose states are themselves HMMs, modelling the horizontal direction. The strength of HMMs and therefore also P2DHMMs is their ability to compensate for signal 'distortions' like stretches and shifts. In the case of comparing an image of a face to a face model, such a distortion can arise if the face is turned away from the camera, is foreshortened, or if the face has been inaccurately detected and localised. To compare an image of a face with a face model, regions of the face are first identified in the image and then compared to the corresponding regions of the model, in a technique known as 'alignment' or 'segmentation'. An 'average face model', also called the 'universal background model' (UBM) or 'stranger model', is 'built' or trained using many images from many different people, e.g. 400 images from 100 people. The images used for training are preferably chosen to be a representative cross-section through all suitable types of faces. For a security system, for example, the average face model might be trained using faces of adults of any appropriate nationality. 
An archive searching system used to locate images of actors in a video archive might require an average face model based on images of people over a broader age group.
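As described above, each state of such a face model holds a Gaussian mixture over local feature vectors. The following sketch shows how a single state could score a feature vector; the diagonal-covariance form and all names are our own illustrative assumptions, not details prescribed by the invention.

```python
# Sketch of one face-model state: a Gaussian mixture over local feature
# vectors (diagonal covariances assumed for simplicity). A P2DHMM can be
# viewed as a 2-D grid of such states: an outer HMM over rows of the face,
# whose states are themselves HMMs over the columns.

import numpy as np

def log_gaussian(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    x, mean, var = map(np.asarray, (x, mean, var))
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def state_log_likelihood(x, mixture):
    """Log-likelihood of feature vector x under one state's Gaussian mixture.

    mixture: list of (weight, mean, var) tuples.
    """
    logs = [np.log(w) + log_gaussian(x, m, v) for w, m, v in mixture]
    return float(np.logaddexp.reduce(logs))
```

Such a state likelihood is the building block used both for alignment (best-path search) and for scoring, as described below in the text.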
The average face model can be trained using known methods which apply an 'expectation maximization' algorithm, which is commonly used to estimate the probability density of a set of given data, in this case the facial features of an image. This method of training, also called 'maximum likelihood' (ML) training, is slow, requiring up to several hours to train the average face model, but this initial investment only needs to be carried out once. Once the average face model is trained, it can be utilised in any appropriate system for face recognition.
A 'reference face model' is used to model a particular face. For example, a reference face model might be used to model the face of a person permitted to gain access to a system. Such a reference face model is also trained using the method for training the average face model, but with much fewer images, where the images are all of that person's face. A system for face recognition preferably comprises a number of reference face models, at least one for each face, which it can identify. For example, a security system might have a database of reference face models, one for each of a number of employees who are to be permitted access to the system.
The images used to train the average face model and reference face model can be of any suitable image format, for example JPEG (Joint Photographic Experts Group), a standard commonly used for the compression of colour digital images, or some other suitable image format. The images can be obtained from an archive or generated with a camera expressly for the purpose of training. Equally, the test image of the person who is to be subjected to the identification procedure can also be obtained by means of a camera or video camera. An image obtained thus can be converted as necessary into a suitable electronic data format using an appropriate conversion tool. The test image is then processed to extract a matrix of local feature vectors, to derive a representation of the face in the test image that is invariant to the lighting conditions but still contains relevant information about the identity of the person.
To determine whether the test image can be matched to any of the reference face models, the test image is evaluated against each of the reference face models. First, the feature matrix of the test image is aligned to the average face model, which can be understood to be a type of mapping of the local facial features of the feature matrix to the states of the average model. To this end, an optimal path or alignment through the state sequences of the average face model is calculated for the feature matrix of the test image. This optimal path is commonly referred to as the 'best path'. Usually the Viterbi algorithm is applied to find the best path efficiently. According to the method of the present invention, the best path is then applied to each of the reference face models of the face recognition system, and a 'degree of similarity' is efficiently computed for each reference model. In the simplest case, the degree of similarity is a score, which is calculated for a reference model when evaluating the test image against the reference face model. The score is an indication of how well the test image can be applied to the reference face model, e.g. the score might denote the production probability of the image given the reference model. For efficiency reasons, an approximate score is computed using the best path through the average model. A high degree of similarity for a reference face model indicates a relatively close match between the reference face model and the test image, whereas a low degree of similarity indicates only a poor match.
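The alignment step described above rests on the Viterbi algorithm. The following is a generic log-domain Viterbi sketch for a linear state sequence; a P2DHMM applies the same recursion at the row and column levels. The structure and all names are illustrative assumptions, not the patented implementation.

```python
# Generic log-domain Viterbi decoding for a linear HMM state sequence.

import numpy as np

def viterbi(log_emissions, log_trans, log_init):
    """Find the best state path through a linear HMM.

    log_emissions: (T, S) log-likelihood of each observation under each state.
    log_trans:     (S, S) log transition probabilities.
    log_init:      (S,)   log initial-state probabilities.
    Returns (best_path, best_score).
    """
    T, S = log_emissions.shape
    delta = log_init + log_emissions[0]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)           # backpointers for path recovery
    for t in range(1, T):
        cand = delta[:, None] + log_trans        # candidate scores (from, to)
        back[t] = np.argmax(cand, axis=0)
        delta = cand[back[t], np.arange(S)] + log_emissions[t]
    # Backtrack from the best final state.
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    path.reverse()
    return path, float(np.max(delta))
```

The key point of the method is that this relatively expensive search is run once, against the average face model only, and the resulting path is then reused for every reference face model.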
The most evident advantage of the method of performing face recognition according to the present invention is its successful exploitation of the similarity between face images to speed up the recognition process. The calculation of the best path, a cost-intensive process requiring the greater part of the entire computational effort, need only be computed once for the average face model and can then be used to evaluate an image against each reference face model of a face recognition system. Therefore, using the method according to the present invention, it is not necessary to perform the cost-intensive best-path computations for each reference face model. The quickest way to compute a degree of similarity is to apply the best path directly to a reference face model, so that it only remains to calculate the score. In a further embodiment of the invention, the best path of the average face model can first be modified or optimised for a particular reference face model, resulting in a somewhat greater computational effort, but a correspondingly more accurate score, thereby improving even further the accuracy of the face recognition system.
A relatively high score for a reference face model need not necessarily mean that that reference face model is an unequivocal match for the test image, since common lighting conditions also lead to higher scores because the features are usually not totally invariant to lighting conditions. However, the score on the average model will, in such a case, also generally be higher. Thus, the degree of similarity is preferably taken to be the ratio of the score for the reference face model to the score of the average face model. Therefore, in a preferred embodiment, a score is also calculated for the average face model, and the ratio of the highest reference face model score to the average face model score is computed. This ratio might then be compared to a threshold value. If the ratio is greater than the threshold value, the system may accept the corresponding reference face model, otherwise it should reject that reference face model. The fact that the reference model is derived from the average model using MAP parameter estimation supports the use of the ratio since the sensitivity of both models to the lighting conditions is similar.
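The scoring shortcut described above, namely computing one best path on the average model and applying it to every reference model, can be illustrated as follows. Toy scores (negative squared distance of each feature to the mean of its aligned state) stand in for Gaussian log-likelihoods here; note that in the log domain the score ratio becomes a score difference. All names are illustrative assumptions.

```python
# Illustrative reuse of one fixed best path to score many face models.

import numpy as np

def path_score(state_means, features, path):
    """Toy score: negative squared distance of each feature vector to the
    mean of the state it is aligned to by the given (fixed) best path."""
    return float(sum(-np.sum((f - state_means[s]) ** 2)
                     for f, s in zip(features, path)))

def most_similar(reference_models, avg_means, features, path):
    """Score every reference model along the single best path computed on
    the average model, and return the best model together with its degree
    of similarity relative to the average model (a difference of log-domain
    scores, corresponding to a ratio of probabilities)."""
    avg = path_score(avg_means, features, path)
    scored = {name: path_score(m, features, path)
              for name, m in reference_models.items()}
    best = max(scored, key=scored.get)
    return best, scored[best] - avg
```

The identified model would then be accepted only if this degree of similarity exceeds the (preferably model-specific) threshold.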
The accuracy of state-of-the-art face recognition systems depends to some extent on a threshold level, used to decide whether to accept or reject a face model identified as most closely resembling the probe image. Face recognition systems to date use a single threshold value for all face models. If this threshold level is too high, a face model might be rejected, even if it is indeed the correct face model corresponding to the probe image. On the other hand, if the threshold level is too low, a face model unrelated to the probe image might incorrectly be accepted as the "correct" face model.
Therefore, in a particularly preferred embodiment of the invention, a unique similarity threshold value is assigned to each reference face model, improving the accuracy of the system's decision to accept or reject a reference face model.
A preferred method of calculating a similarity threshold value for a reference face model for use in a face recognition system comprises the steps of acquiring a reference face model based on a number of distinct images of the same face and acquiring a control group of unrelated face images. The reference face model is evaluated against each of the unrelated face images in the control group and an evaluation score is calculated for each of the unrelated face images. The evaluation scores are used to determine a similarity threshold value for this reference face model, which would cause a predefined majority of these unrelated face images to be rejected, were they to be evaluated against this reference face model.
The fixed threshold used by face recognition systems of the prior art can lead to incorrect decisions regarding the identification of a test image. The reason for this is that some faces resemble the average face model more closely than do other faces. Therefore, a test image of the face of such a person results in a high score when evaluated against the average face model. This in turn results in a low ratio of the score for the reference face model of that person's face to the average face model score. As a result, the reference face model for this person's face, and therefore this person, would be more likely to be rejected by such a system. Furthermore, a person whose face is very different from that of the average face model, but resembling to some extent one of the reference face models in a system, might erroneously be accepted.
These undesirable false rejection and false acceptance errors can be reduced to a minimum using the method described above for calculating a similarity threshold value for each reference face model in a face recognition system. To this end, each reference face model is evaluated against a control group of images. Each image is of a face different to that modelled by the reference face model, and the control group of images is preferably a representative selection of faces of varying similarity to the face modelled by the reference face model. An evaluation score is computed for each image of the control group, by finding the best path through an average face model and applying this best path to each of the images in the control group in order to evaluate each of them against the reference face model. The best path can also be applied to the reference face model to calculate its score. The scores of each of the images in the control group and the score of the reference face model can then be used to choose a threshold value that would ensure that, in a later face recognition procedure, a predefined majority - for example 99% - of these images would be rejected when evaluated against the reference face model.
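The threshold calibration just described can be sketched as a simple quantile computation over the control-group score ratios; the quantile-based formulation and all names are illustrative assumptions.

```python
# Per-model similarity threshold from a control group of unrelated faces.

import numpy as np

def similarity_threshold(ref_scores, avg_scores, reject_fraction=0.99):
    """Choose a threshold for one reference model from control-group scores.

    ref_scores: scores of the control images on the reference model.
    avg_scores: scores of the same images on the average model.
    A control image is rejected when its ratio falls below the threshold,
    so the threshold is placed at the given quantile of the control ratios,
    causing the predefined majority (e.g. 99%) to be rejected.
    """
    ratios = np.asarray(ref_scores, dtype=float) / np.asarray(avg_scores, dtype=float)
    return float(np.quantile(ratios, reject_fraction))
```

Because only the control-group scores are needed, this calibration is cheap once the best paths and scores on the average model have been precomputed offline, as described above.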
Such a unique similarity threshold value may not only be used in the particular method of performing face recognition described above, but in any method of performing face recognition where in an identification procedure, a test image is evaluated against each of the reference face models, and the reference face model most closely resembling the test image is identified, and where the reference face model is subsequently accepted or rejected on the basis of the similarity threshold value of that reference face model, and therefore offers an independent contribution in addressing the underlying object of the invention.
An appropriate system for calculating a similarity threshold value for a reference face model for use in a face recognition system comprises a means for acquiring a reference face model based on a number of distinct images of the same face, and a means of acquiring a control group of unrelated face images. Furthermore, the system comprises an evaluation unit for evaluating the reference face model against each of the unrelated face images of the control group, and an evaluation score calculation unit for calculating an evaluation score for each of the unrelated face images. The system additionally comprises a similarity threshold value determination unit for determining a similarity threshold value for the reference face model on the basis of the evaluation scores, which would cause a predefined majority of these unrelated face images to be rejected were they to be evaluated against this reference face model.
Another characteristic of current approaches, resulting in slow and problematical face recognition, is that the effort required to train a model is quite large. The time invested in training a model is proportional to the number of images, yet it is desirable to use a relatively large number of images in training a model in order to obtain as great an accuracy as possible. Whenever a new image is introduced to further improve the accuracy of the model, the model must be retrained using all of the images. The entire process is therefore very slow, and accordingly expensive.
Therefore, preferably, a method of training a reference face model is used in the face recognition system, which method comprises the steps of acquiring an average face model based on a number of face images of different faces and acquiring a training image of the face for which the reference face model is to be trained. A training algorithm is applied to the average face model with the information obtained from the training image to give the reference face model.
The training image of the person which is to be used to train the reference face model for that person can be obtained, for example, by using a camera or video camera, or by scanning from a photograph, etc. The image can be converted as necessary into a suitable digital format such as those described above. Preferably, a number of training images are used to train the reference face model for the person, and all training images are of that person. A two-dimensional model, preferably a P2DHMM, is computed for each image using the method described above.
The training algorithm, preferably an algorithm using maximum a posteriori (MAP) techniques, uses a clone or copy of the average face model and adapts this to suit the face of the person by using a feature matrix generated for the training image. The adapted average face model becomes the reference face model for the person.
In a particularly preferred embodiment of the invention, a further training image of the person's face is used to refine or improve the reference face model. To this end, the training algorithm is applied to the old reference face model, the average face model, and the new training image to adapt the old reference model using any new image data. The new image data is thereby cumulatively added to the old reference face model.
Eventually, the reference face model will have reached a level which cannot perceptibly be improved upon, so that it is not necessary to refine it further. Using the method of training a reference face model proposed herein, this level is generally attained after using about ten images of the person. Since new image data is cumulatively added, without having to retrain the reference face model using all the known images for this person, the training process is considerably faster than existing methods of training reference face models.
The average face model, trained using a selection of face images of different faces as described above, is preferably the same average face model used in the face recognition system. Therefore, the application of this training method together with the face recognition method according to the invention requires very little additional computational effort and is extremely advantageous. Furthermore, the training method can also be used independently with any other face recognition process, so that it offers an independent contribution in addressing the underlying object of the invention.
The average face model can be trained expressly for this system, or can be purchased from a supplier.
An appropriate system for training a reference face model comprises a means for acquiring an average face model and a means for acquiring a number of training images of the same face. Furthermore, the system comprises a reference face model generator for generating a reference face model from the training images, whereby the reference face model is based on the average face model.
Usually, images of faces which are to be subjected to an identification procedure are not taken under ideal conditions. More often than not, the lighting is less than perfect, with, for example, back lighting, strong lighting from the side, or poor lighting. This results in a face image which might be subject to strong fluctuations in local intensity; for example, one side of the face might be in relative shadow, while the other side is strongly illuminated. More importantly, different images of the same face can exhibit significant discrepancies in appearance, depending on the variation of the lighting conditions. Thus, a model trained from one image of a person may fail to achieve a high score on another image of the same person taken under different lighting conditions. Therefore, it is very important to transform the features into a form that is independent of the lighting conditions; otherwise, a test image of a person's face taken under less than ideal lighting conditions could result in a false rejection or, perhaps even worse, a false acceptance.
To provide a more accurate face recognition, preferably a method of optimizing images is used in the face recognition process and/or training process, wherein the illumination intensity of an image is equalised by sub-dividing the image into smaller sub-images, preferably overlapping, calculating a feature vector for each sub-image, and modifying the feature vector of a sub-image by dividing each coefficient of that feature vector by a value representing the overall intensity of that sub-image. Normally, this value corresponds to the first coefficient of the feature vector. This first coefficient is then no longer required and can subsequently be discarded. Alternatively or additionally, the feature vector can be converted to a normalised vector.
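A minimal sketch of the feature-vector modification described above, assuming (as stated) that the first coefficient of each local feature vector represents the overall intensity of the sub-image and is non-zero; the function name is illustrative.

```python
# Illumination equalisation of a local feature vector, as described above.

import numpy as np

def normalise_feature(vec, unit_length=False):
    """Divide each coefficient by the first (intensity) coefficient, then
    discard that coefficient, which is no longer required. Optionally the
    result can additionally be scaled to a unit-length vector."""
    vec = np.asarray(vec, dtype=float)
    out = vec[1:] / vec[0]          # assumes a non-zero first coefficient
    if unit_length:
        out = out / np.linalg.norm(out)
    return out
```

Applying this to every sub-image of an image yields the decorrelated feature matrix used throughout the training and recognition systems.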
In both of the methods proposed above, the feature vectors for each sub- image of the entire image are modified, or decorrelated, in order to remove the dependence on the local illumination intensity. Both techniques significantly improve the recognition performance.
These methods are not restricted for use with the method for face recognition according to the invention, but can also serve to improve face recognition accuracy in other, state of the art, face recognition systems and face model training systems, and therefore offer independent contributions in addressing the underlying object of the invention.
An appropriate system for optimizing an image for use in face recognition according to the methods proposed comprises a subdivision unit for subdividing the image into a number of sub-images, a feature vector determination unit for determining a local feature vector associated with each sub-image, and a feature vector modification unit for modifying the local feature vector associated with a sub-image by dividing each coefficient of that feature vector by a value representing the overall intensity of that sub-image, and/or by discarding a coefficient of the feature vector, and/or by converting that feature vector to a normalised vector.
Other objects and features of the present invention will become apparent from the following detailed descriptions considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention.
Fig. 1 is a block diagram of a system for performing face recognition;
Fig. 2 is a block diagram of a system for training an average face model for use in a face recognition system;
Fig. 3a is a block diagram of a system for training a reference face model for use in a face recognition system according to a first embodiment;
Fig. 3b is a block diagram of a system for training a reference face model for use in a face recognition system according to a second embodiment;
Fig. 4 is a block diagram showing a system for calculating a similarity threshold level for a reference face model;
Fig. 5 is a block diagram showing a system for optimizing images for use in face recognition.
In the drawings, like numbers refer to like objects throughout.
Fig. 1 shows the main blocks of a system for face recognition. An image acquisition unit 2, such as a camera, video camera or closed circuit TV camera is used to capture a test image IT of the person to be identified. The image IT is processed in an image processing block 8, in which a matrix of feature vectors, or feature matrix, is calculated for the image IT, or simply extracted from the image IT, according to the image type. Also in this processing block 8, the feature vectors may be optimised to compensate for any uneven lighting effects in the image IT by modifying the feature vectors as appropriate. This modification or compensation step is described in more detail under Fig. 5.
Using the feature matrix, an optimal state sequence or best path 10 for the test image IT is calculated through the average face model MAV by applying the Viterbi algorithm in a method of alignment explained in the description above, in a best path calculation block 3. This best path 10 is then used in an evaluation unit 4 as a basis for calculating the degree of similarity, or score, for each of a number of reference face models M1, M2, ..., Mn retrieved from a database 6.
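By way of illustration only (not part of the original disclosure), the alignment step can be sketched with the standard Viterbi algorithm for a plain one-dimensional HMM in the log domain; the patent's pseudo 2-dimensional HMM applies the same principle over a matrix of states, and all model parameters below are invented for the example.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Most likely state sequence (best path) through an HMM.

    log_init:  (S,)   log initial-state probabilities
    log_trans: (S, S) log transition probabilities, [from, to]
    log_emit:  (T, S) log emission probability of each observation per state
    Returns (best_path, best_log_score).
    """
    T, S = log_emit.shape
    delta = log_init + log_emit[0]       # best log score ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers (row 0 unused)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # scores[i, j]: i -> j
        back[t] = np.argmax(scores, axis=0)          # best predecessor of j
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):        # follow backpointers to the start
        path.append(int(back[t][path[-1]]))
    return path[::-1], float(np.max(delta))

# Toy 2-state model: state 0 mostly emits symbol 0, state 1 mostly symbol 1.
log_init = np.log([0.99, 0.01])
log_trans = np.log([[0.8, 0.2], [0.2, 0.8]])
log_emit = np.log([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]])
best_path, best_score = viterbi(log_init, log_trans, log_emit)
```

The best path here follows the evidence of the observations, switching state exactly once.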
The highest score 11 is passed to a decision making unit 5, as is the score 12 for the average face model. The ratio of these two scores 11, 12 is calculated and compared to a threshold value 13 read from a file. In this case, the threshold value 13 is the threshold value corresponding to the reference face model, which attained the highest score 11 in evaluation against the test image IT. The manner in which such a threshold value can be obtained is described in detail in Fig. 4.
The output 14 of the decision-making unit 5 depends on the result of the comparison. If the ratio of the two scores 11, 12 falls below the threshold value 13, then even the closest-fitting reference face model has failed, i.e. the system must conclude that the person whose face has been captured in the test image IT cannot be identified from among the reference face models in its database 6. In this case the output 14 might be a message to indicate identification failure. If the system is a security system, the person would be denied access. If the system is an archive searching system, it might report that the test image IT has not been located in the archive.
If the comparison has been successful, i.e. the ratio of the two scores 11, 12 lies above the threshold value 13, then that reference face model can be taken to match the person whose test image IT is undergoing the face recognition process. In this case, the person might be granted access to the system, or the system reports a successful search result, as appropriate.
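The accept/reject logic of the decision-making unit 5 can be sketched as follows. The sketch assumes positive likelihood scores so that the score ratio described above is well defined; the function name and example values are illustrative only and do not come from the patent.

```python
def recognise(ref_scores, avg_score, thresholds):
    """Return the best-matching identity, or None if even the best is rejected.

    ref_scores: dict identity -> score of that reference face model (assumed > 0)
    avg_score:  score of the average face model (assumed > 0)
    thresholds: dict identity -> per-model similarity threshold
    """
    best_id = max(ref_scores, key=ref_scores.get)   # highest-scoring model
    similarity = ref_scores[best_id] / avg_score    # degree of similarity
    # accept only if the ratio reaches that model's own threshold
    return best_id if similarity >= thresholds[best_id] else None

accepted = recognise({"alice": 0.9, "bob": 0.5}, 0.6, {"alice": 1.2, "bob": 1.1})
rejected = recognise({"alice": 0.9, "bob": 0.5}, 0.9, {"alice": 1.2, "bob": 1.1})
```

In the first call the ratio 0.9/0.6 = 1.5 exceeds the threshold 1.2, so the match is accepted; in the second the ratio is only 1.0 and the test image is rejected as unknown.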
Fig. 2 illustrates the creation of an average face model MAV for use in the face recognition system described above. A collection of unrelated face images F1, F2, ..., Fn from a number of different people, which should be as diverse as possible and a representative cross-section of all faces, is acquired. These images F1, F2, ..., Fn may be purchased from a supplier, or generated expressly for the training process. In an image processing unit 20, described in more detail under Fig. 5, a set of feature vectors 21, or feature vector matrix, is calculated for or extracted from each image F1, F2, ..., Fn as necessary, and forwarded to a training unit 22.
In the training unit 22, a method of training is applied to the processed feature vectors 21 of each image F1, F2, ..., Fn. In this case, the training method uses the expectation maximization (EM) algorithm following a maximum likelihood (ML) criterion to find the model parameters for the average face model MAV. The average face model MAV, as a pseudo 2-dimensional Hidden Markov Model (P2DHMM), describes the general likelihood of each of the local features of a face. Faces with 'average' facial features will achieve a higher score than faces exhibiting more unusual facial features. A face image taken under common lighting situations will also achieve a higher score. The number of face images F1, F2, ..., Fn in the collection is chosen to give a satisfactory average face model MAV.
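As a toy analogue of the ML training performed in the training unit 22 (and not taken from the patent itself), the following sketch fits a one-dimensional Gaussian mixture by expectation maximization. The real system estimates P2DHMM parameters, but the alternating E-step/M-step structure is the same; the quantile-based initialisation is an implementation choice made here for determinism.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Maximum-likelihood fit of a 1-D Gaussian mixture via EM.

    Returns (weights, means, variances) after `iters` EM iterations.
    """
    # deterministic initialisation: spread means over the data quantiles
    mu = np.quantile(x, (np.arange(k) + 1) / (k + 1))
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: weighted ML re-estimates of weights, means, variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

np.random.seed(0)
x = np.concatenate([np.random.normal(0.0, 0.5, 200),
                    np.random.normal(5.0, 0.5, 200)])
w, mu, var = em_gmm_1d(x)
```

With two well-separated clusters, the estimated means converge to the cluster centres and the mixture weights to roughly one half each.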
Fig. 3a shows a system for training a reference face model M1, preferably for use in the above-mentioned face recognition system, for a particular person. Here, the training system is supplied with a number of training images T1, T2, ..., Tm, all of that person's face. In an image processing unit 30, a feature vector matrix is derived from each training image T1, T2, ..., Tm. To improve the quality of the reference face model M1 being created, the feature vectors for each training image T1, T2, ..., Tm can first be processed in the image processing unit 30, in a manner described in more detail under Fig. 5, to compensate for any uneven illumination effects.
A copy or clone of the average face model MAV is used, along with the information obtained from the training images T1, T2, ..., Tm, as input to a reference face model generator 31. In the reference face model generator 31, the average face model MAV is used as a starting point, and is modified using information extracted from the images T1, T2, ..., Tm under application of maximum a posteriori (MAP) parameter estimation in order to arrive at a reference face model M1 for the face depicted in the training images T1, T2, ..., Tm. The initial training of a person's reference face model M1 can be performed with a minimum of one image of that person's face, but evidently a greater number of images will give a better reference face model M1. One method of MAP parameter estimation for a P2DHMM whose states are Gaussian mixtures is the following: the best path through the average model is computed for each training image. The feature vectors (also referred to as "features" in the following) are then assigned to the states of the P2DHMM according to the best path. Each feature assigned to a Gaussian mixture is then assigned to the closest Gaussian of the mixture. The mean of the Gaussian is set to a weighted average of the average model's mean and the mean of the features. The reference model has thus been altered to give a better representation of the appearance of the person in the training image. Other parameters of the P2DHMM can be altered in a similar manner, or can simply be copied from the average model, since the means are the most important parameters. The sum of the features, which was computed to estimate the mean of the features, and the total number, or count, of the features are stored with the Gaussian to enable the incremental training described below.
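The weighted-average mean update described above might be sketched as follows. The relevance weight `tau`, which balances the average model's (prior) mean against the features assigned to the Gaussian, is an assumed parameter; the patent does not specify how the weighting is chosen.

```python
import numpy as np

def map_adapt_mean(prior_mean, features, tau=10.0):
    """MAP-style update of one Gaussian's mean.

    The adapted mean is a weighted average of the average model's mean
    and the mean of the features assigned to this Gaussian. Returns the
    new mean plus the feature sum and count, which are stored with the
    Gaussian for later incremental training.
    """
    features = np.asarray(features, dtype=float)
    n = len(features)              # count of assigned features
    s = features.sum(axis=0)       # sum of assigned features
    new_mean = (tau * np.asarray(prior_mean, dtype=float) + s) / (tau + n)
    return new_mean, s, n

# prior mean at the origin, ten identical features at (2, 2), tau = 10
new_mean, s, n = map_adapt_mean([0.0, 0.0], [[2.0, 2.0]] * 10, tau=10.0)
```

With equal prior weight and feature count, the adapted mean lies halfway between the prior mean and the feature mean.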
The reference face model M1 for a person can be further improved by refining it using additional image data Tnew of that person's face. In Fig. 3b, a further training image Tnew has been acquired for the person. The new training image Tnew is first processed in an image-processing unit 30 as described under Fig. 3a above. Image information from the new training image Tnew, along with the average face model MAV and a copy M1' of the reference face model for this person, is input to the reference face model generator 31, in which MAP parameter estimation is applied to the old and new data to give an improved reference face model M1 for this person. When using a P2DHMM whose states are Gaussian mixtures, the incremental MAP training can be implemented in the following way: the features of the new training images are assigned to the Gaussians as described above, where the average model is used for the assignment. The mean of the reference model's Gaussian is then set to a weighted average of the average model's mean and the mean of all training features. The mean of all training features is easily computed, since the sum and the count of the old features are stored along with the Gaussian. The sum and the count are updated by including the new features, to enable further training sessions. Thus the same reference model will result, no matter in which order the training images arrive.
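Storing the running sum and count with each Gaussian is what makes the training both incremental and order-independent. A minimal sketch, again with an assumed prior weight `tau` (a parameter introduced here for illustration):

```python
import numpy as np

class AdaptedGaussian:
    """One Gaussian's mean under incremental MAP-style adaptation.

    The running sum and count of assigned features are stored, so later
    training sessions can update the mean without re-reading old images.
    """

    def __init__(self, prior_mean, tau=10.0):
        self.prior = np.asarray(prior_mean, dtype=float)
        self.tau = tau
        self.s = np.zeros_like(self.prior)   # running sum of features
        self.n = 0                           # running count of features

    def update(self, features):
        features = np.asarray(features, dtype=float)
        self.s += features.sum(axis=0)
        self.n += len(features)

    @property
    def mean(self):
        # weighted average of the prior mean and the mean of all features
        return (self.tau * self.prior + self.s) / (self.tau + self.n)

# two training sessions applied in either order give the same result
g1 = AdaptedGaussian([0.0]); g1.update([[4.0]] * 5); g1.update([[2.0]] * 5)
g2 = AdaptedGaussian([0.0]); g2.update([[2.0]] * 5); g2.update([[4.0]] * 5)
```

Because only the totals enter the mean, the batch order cannot affect the adapted model.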
To improve the accuracy of the decision whether to accept or reject the reference face model identified as the closest match to the test image, each reference face model M1, M2, ..., Mn of a face recognition database can be supplied with its own specific similarity threshold value. Fig. 4 shows a system for generating a unique similarity threshold value for a reference face model Mn. An existing reference face model Mn for a particular person is acquired. A control group of unrelated face images G1, G2, ...Gk is also acquired. These images G1, G2, ...Gk are chosen as a representative selection of faces of varying degrees of similarity to the person modelled by the reference face model Mn. The images are first processed in an image-processing unit 42, described in more detail under Fig. 5, to extract a feature matrix 48 for each image.
In a best path calculation unit 40, the best path 47 is calculated through the average face model MAV for each image; the score 43 on the average model MAV is also computed. The feature matrices 48, scores 43 and best paths 47 only need to be computed once, since the average model never changes, and can be saved in a file F for later use. Unit 44 computes the degrees of similarity 49 from the reference model's scores and the average model's scores. The similarity threshold determination unit 45 requires the degrees of similarity 49 for all control group images G1, G2, ...Gk to find a threshold value Vn that will result in the rejection of the majority of the control group images G1, G2, ...Gk when compared to the reference model Mn. The scores 43 for the reference model Mn are supplied by unit 41, which requires the best paths 47 and the feature matrices 48 of the control group images as well as those of the reference model Mn. The computationally expensive part is the computation of the best path 47 through the average model MAV. However, this step can be performed offline, whereas the actual calibration is very fast and can be performed online directly after training the reference face model Mn.
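The calibration performed by the similarity threshold determination unit 45 reduces to choosing a cut-off over the control group's similarity values. A minimal sketch, in which the rejection fraction is an assumed parameter (the patent only requires that a predefined majority of the control group be rejected):

```python
def calibrate_threshold(ref_scores, avg_scores, reject_fraction=0.95):
    """Choose a similarity threshold for one reference model.

    ref_scores / avg_scores: scores of the control-group images on the
    reference model and on the average model (assumed positive).
    The threshold is set so that the given fraction of control images
    would fall below it, and hence be rejected.
    """
    sims = sorted(r / a for r, a in zip(ref_scores, avg_scores))
    idx = int(reject_fraction * len(sims))
    return sims[min(idx, len(sims) - 1)]

# toy control group: 20 images with distinct similarity values 1..20
ref = [float(i) for i in range(1, 21)]
avg = [1.0] * 20
threshold = calibrate_threshold(ref, avg, reject_fraction=0.9)
```

Here the threshold lands so that 18 of the 20 control images fall below it and would be rejected, matching the requested 90% rejection rate.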
Any image used for face recognition, for training the average face model, for training a reference face model and for calculating a similarity threshold value for a reference face model can be optimised before use to transform it into a representation that is invariant to the illumination settings. Fig. 5 shows components of a system for image optimization, which can be used as the image processing units 8, 20, 30, 42 mentioned in the previous figure descriptions.
An image I is input to an image subdivision unit 50, which divides the image into smaller, overlapping sub-images. Allowing the sub-images to overlap to some extent improves the overall accuracy of a model, which will eventually be derived from the input image. The sub-images 53 are forwarded to a feature vector determination unit 51, which computes a local feature vector 54 for each sub-image 53. A possible method of computing the local features is to apply the discrete cosine transformation to the local sub-image and extract a subset of the frequency coefficients. The illumination intensity of each sub-image 53 is then equalised by modifying its local feature vector 54 in a feature vector modification unit 52. This can be done by dividing each coefficient of the local feature vector 54 by a value representing the overall intensity of that sub-image, by discarding the first coefficient of the local feature vector 54, by normalising the local feature vector 54 to give a unit vector, or by a combination of these techniques. The output of the feature vector modification unit 52 is thus a matrix 55 of decorrelated local feature vectors describing the input image I.
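The feature computation and illumination equalisation for one sub-image can be sketched as follows. The block size, the number of coefficients and the simple row-wise coefficient scan (a zig-zag scan would be more usual) are illustrative choices; the point of the sketch is that dividing by the first (DC) coefficient makes the feature invariant to a multiplicative change in illumination intensity.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block."""
    n = block.shape[0]
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)         # orthonormal scaling of the DC row
    return c @ block @ c.T

def local_feature(sub_image, n_coeffs=10):
    """Illumination-equalised feature vector of one sub-image."""
    coeffs = dct2(np.asarray(sub_image, dtype=float)).ravel()[:n_coeffs]
    vec = coeffs / coeffs[0]     # divide by the DC term (overall intensity)
    vec = vec[1:]                # DC term is now constant; discard it
    return vec / np.linalg.norm(vec)   # optional unit-length normalisation

block = np.arange(64, dtype=float).reshape(8, 8) + 1.0
f_dark = local_feature(block)
f_bright = local_feature(3.0 * block)   # same sub-image, brighter lighting
```

Scaling the sub-image intensity scales every DCT coefficient by the same factor, so after division by the DC term the feature vector is unchanged.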
This feature vector matrix 55 is then used in the systems for training face models, for face recognition, and for similarity threshold value calculation, as described above.
Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention. In particular, the methods for face recognition, for training a reference face model, for optimizing images for use in a face recognition system, and for calculating similarity threshold values, and therefore also the corresponding systems for face recognition, for training a reference face model, for calculating a similarity threshold value for a reference face model, and for optimising an image for use in a face recognition system, can be utilised in any suitable combination, even together with state-of-the-art face recognition systems and training methods and systems, so that these combinations also fall within the scope of the invention.
For the sake of clarity, it is also to be understood that the use of "a" or "an" throughout this application does not exclude a plurality, and "comprising" does not exclude other steps or elements. A "unit" may comprise a number of blocks or devices, unless explicitly described as a single entity.


CLAIMS:
1. A method of performing face recognition, which method comprises the steps of: generating an average face model (MAV), comprising a matrix of states representing regions of the face, from a number of distinct face images (I1, I2, ..., Ij); training a reference face model (M1, M2, ..., Mn) for each one of a number of known faces, where the reference face model (M1, M2, ..., Mn) is based on the average face model (MAV); acquiring a test image (IT) for a face to be identified; calculating a best path through the average face model (MAV) based on the test image (IT); evaluating a degree of similarity for each reference face model (M1, M2, ..., Mn) against the test image (IT) by applying the best path of the average face model (MAV) to each reference face model (M1, M2, ..., Mn); identifying the reference face model (M1, M2, ..., Mn) most similar to the test image (IT); accepting or rejecting the identified reference face model (M1, M2, ..., Mn) on the basis of the degree of similarity.
2. A method according to claim 1, wherein the best path through the average face model (MAV) is optimised with respect to a reference face model (M1, M2, ..., Mn) for evaluation of the degree of similarity for that reference face model (M1, M2, ..., Mn) against the test image (IT).
3. A method according to claim 1 or claim 2, wherein the step of evaluating a degree of similarity between a reference face model (M1, M2, ..., Mn) and a test image (IT) comprises applying the best path of the average face model (MAV) to the reference face model (M1, M2, ..., Mn) to calculate a reference face model score for that test image (IT), calculating the average face model score for that test image (IT), and obtaining the degree of similarity in the form of the ratio of the reference face model score to the average face model score, and wherein the step of accepting or rejecting the identified reference face model (M1, M2, ..., Mn) comprises comparing the degree of similarity to a predefined similarity threshold value.
4. A method according to claim 3, wherein a unique similarity threshold value is used for each reference face model (M1, M2, ..., Mn) in making the decision to accept or reject the identified reference model (M1, M2, ..., Mn).
5. A method of training a reference face model (M1) for use in a face recognition system, comprising the steps of: acquiring an average face model (MAV) based on a number of face images (I1, I2, ..., Ij) of different faces; acquiring a number of test images (T1, T2, ..., Tm) of the face for which the reference face model (M1) is to be trained; applying a training algorithm to the average face model and information obtained from the test images (T1, T2, ..., Tm) to give the reference face model (M1).
6. A method according to claim 5, wherein the reference face model (M1) is improved by applying the training algorithm to the average face model (MAV), information obtained from a further test image (Tnew) of the same face and a copy of the reference model (M1') to give an improved reference model (M1).
7. A method of calculating a similarity threshold value for a reference face model (Mn) for use in a face recognition system, which method comprises the steps of: acquiring a reference face model (Mn) based on a number of distinct images of the same face; acquiring a control group of unrelated face images (G1, G2, ..., Gj); evaluating the reference face model (Mn) against each of the unrelated face images (G1, G2, ..., Gj) in the control group; calculating an evaluation score for each of the unrelated face images (G1, G2, ..., Gj); using the evaluation scores to determine a similarity threshold value for this reference face model (Mn) which would cause a predefined majority of these unrelated face images (G1, G2, ..., Gj) to be rejected were they to be evaluated against this reference face model (Mn).
8. A method of performing face recognition, which method comprises the steps of: acquiring a number of reference face models (M1, M2, ..., Mn) for a number of different faces, where each reference face model (M1, M2, ..., Mn) is based on a number of distinct images of the same face; determining a similarity threshold value for each reference face model (M1, M2, ..., Mn) using the method according to claim 7; acquiring a test image (IT); identifying the reference face model (M1, M2, ..., Mn) most similar to the test image (IT); accepting or rejecting the identified reference face model (M1, M2, ..., Mn) on the basis of the similarity threshold value.
9. A method of performing face recognition according to any of claims 1 to 4 and/or claim 8, wherein the reference face models (M1, M2, ..., Mn) are trained using a method according to claim 5 or claim 6.
10. A method of optimizing an image (I) for use in face recognition, wherein the illumination intensity of the image (I) is equalised by sub-dividing the image (I) into smaller sub-images, calculating a feature vector for each sub-image, and modifying the feature vector of a sub-image by dividing each coefficient of that feature vector by a value representing the overall intensity of that sub-image, and/or by discarding a coefficient of the feature vector, and/or by converting that feature vector to a normalised vector.
11. A method of performing face recognition according to any of claims 1 to 4 or claim 8 or claim 9, wherein the images (I, IT, G1, G2, ..., Gj, T1, T2, ..., Tm, Tnew) used for training reference face models (M1, M2, ..., Mn) and/or for face recognition are first optimized according to the method of claim 10.
12. A system (1) for performing face recognition, comprising: a number of reference face models (M1, M2, ..., Mn) and an average face model (MAV), where each face model (M1, M2, ..., Mn, MAV) comprises a matrix of states representing regions of the face; an acquisition unit (2) for acquiring a test image (IT); a best path calculator (3) for calculating a best path through the average face model (MAV); an evaluation unit (4) for applying the best path of the average face model (MAV) to each reference face model (M1, M2, ..., Mn) in order to evaluate a degree of similarity between each reference face model (M1, M2, ..., Mn) and the test image (IT); a decision making unit (5) for accepting or rejecting the reference face model (M1, M2, ..., Mn) with the greatest degree of similarity.
13. A system for training a reference face model (MR), comprising: a means for acquiring an average face model (MAV); a means for acquiring a number of training images (T1, T2, ..., Tn) of the same face; and a reference face model generator (22) for generating a reference face model (M1) from the training images (T1, T2, ..., Tn), whereby the reference face model (M1) is based on the average face model (MAV).
14. A system for calculating a similarity threshold value for a reference face model (Mn) for use in a face recognition system, comprising: a means for acquiring a reference face model (Mn) based on a number of distinct images of the same face; a means of acquiring a control group of unrelated face images (G1, G2, ..., Gk); an evaluation unit (41) for evaluating the reference face model (Mn) against each of the unrelated face images (G1, G2, ..., Gk) of the control group; an evaluation score calculation unit (40) for calculating an evaluation score for each of the unrelated face images (G1, G2, ..., Gk); a similarity threshold value determination unit (45) for determining a similarity threshold value for the reference face model (Mn), on the basis of the evaluation scores, which would cause a predefined majority of these unrelated face images (G1, G2, ..., Gk) to be rejected were they to be evaluated against this reference face model (Mn).
15. A system for optimizing an image (I) for use in face recognition, comprising: a subdivision unit (50) for sub-dividing the image (I) into a number of sub-images; a feature vector determination unit (51) for determining a local feature vector associated with each sub-image; a feature vector modification unit (52) for modifying the local feature vector associated with a sub-image by dividing each coefficient of that local feature vector by a value representing the overall intensity of that sub-image, and/or by discarding a coefficient of the feature vector, and/or by converting that local feature vector to a normalised vector.
16. A system for performing face recognition, comprising a system for training a reference face model (MR) according to claim 13 and/or a system for calculating a similarity threshold value for a reference face model (MR) according to claim 14 and/or a system for optimizing an image (I) for use in a face recognition system, according to claim 15.
PCT/IB2006/050811 2005-03-18 2006-03-15 Method of performing face recognition WO2006097902A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2008501478A JP2008533606A (en) 2005-03-18 2006-03-15 How to perform face recognition
EP06711106A EP1864245A2 (en) 2005-03-18 2006-03-15 Method of performing face recognition
US11/908,443 US20080192991A1 (en) 2005-03-18 2006-03-15 Magnetic Resonance Imaging at Several Rf Frequencies
BRPI0608711-6A BRPI0608711A2 (en) 2005-03-18 2006-03-15 methods and systems for performing face recognition, for training a reference face model, for calculating a similarity threshold value for a reference face model, and for optimizing an image for use in face recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05102188 2005-03-18
EP05102188.9 2005-03-18

Publications (2)

Publication Number Publication Date
WO2006097902A2 true WO2006097902A2 (en) 2006-09-21
WO2006097902A3 WO2006097902A3 (en) 2007-03-29

Family

ID=36699079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/050811 WO2006097902A2 (en) 2005-03-18 2006-03-15 Method of performing face recognition

Country Status (7)

Country Link
US (1) US20080192991A1 (en)
EP (1) EP1864245A2 (en)
JP (1) JP2008533606A (en)
CN (1) CN101142586A (en)
BR (1) BRPI0608711A2 (en)
TW (1) TW200707313A (en)
WO (1) WO2006097902A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2091021A1 (en) * 2006-12-13 2009-08-19 Panasonic Corporation Face authentication device
EP2139225A1 (en) * 2007-04-23 2009-12-30 Sharp Kabushiki Kaisha Image picking-up device, computer readable recording medium including recorded program for control of the device, and control method
CN100589117C (en) * 2007-04-18 2010-02-10 中国科学院自动化研究所 Gender recognition method based on gait
CN102867173A (en) * 2012-08-28 2013-01-09 华南理工大学 Human face recognition method and system thereof
EP2688039A1 (en) * 2011-03-14 2014-01-22 Omron Corporation Image verification device, image processing system, image verification program, computer readable recording medium, and image verification method
US9286544B2 (en) 2010-01-29 2016-03-15 Nokia Technologies Oy Methods and apparatuses for facilitating object recognition
CN109614510A (en) * 2018-11-23 2019-04-12 腾讯科技(深圳)有限公司 A kind of image search method, device, graphics processor and storage medium

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4479756B2 (en) 2007-07-05 2010-06-09 ソニー株式会社 Image processing apparatus, image processing method, and computer program
CN101472133B (en) 2007-12-28 2010-12-08 鸿富锦精密工业(深圳)有限公司 Apparatus and method for correcting image
TWI419058B (en) * 2009-10-23 2013-12-11 Univ Nat Chiao Tung Image recognition model and the image recognition method using the image recognition model
US20110148857A1 (en) * 2009-12-23 2011-06-23 Microsoft Corporation Finding and sharing of digital images based on shared face models
US9465993B2 (en) 2010-03-01 2016-10-11 Microsoft Technology Licensing, Llc Ranking clusters based on facial image analysis
WO2011162050A1 (en) 2010-06-21 2011-12-29 ポーラ化成工業株式会社 Age estimation method and gender determination method
CN102332086B (en) * 2011-06-15 2013-04-03 湖南领创智能科技有限公司 Facial identification method based on dual threshold local binary pattern
CN102262729B (en) * 2011-08-03 2013-01-02 山东志华信息科技股份有限公司 Fused face recognition method based on integrated learning
CN102346846A (en) * 2011-09-16 2012-02-08 由田信息技术(上海)有限公司 Face snap-shooting and contour analysis system
KR101901591B1 (en) 2011-11-01 2018-09-28 삼성전자주식회사 Face recognition apparatus and control method for the same
TWI467498B (en) 2011-12-19 2015-01-01 Ind Tech Res Inst Method and system for recognizing images
US8855375B2 (en) 2012-01-12 2014-10-07 Kofax, Inc. Systems and methods for mobile image capture and processing
US11321772B2 (en) 2012-01-12 2022-05-03 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US8559684B1 (en) * 2012-08-15 2013-10-15 Google Inc. Facial recognition similarity threshold adjustment
CN103093216B (en) * 2013-02-04 2014-08-20 北京航空航天大学 Gender classification method and system thereof based on facial images
US10708545B2 (en) 2018-01-17 2020-07-07 Duelight Llc System, method, and computer program for transmitting face models based on face data points
CN103105922A (en) * 2013-02-19 2013-05-15 广东欧珀移动通信有限公司 Method and device for mobile terminal backlight control
US10127636B2 (en) 2013-09-27 2018-11-13 Kofax, Inc. Content-based detection and three dimensional geometric reconstruction of objects in image and video data
US10783615B2 (en) * 2013-03-13 2020-09-22 Kofax, Inc. Content-based object detection, 3D reconstruction, and data extraction from digital images
US9679224B2 (en) * 2013-06-28 2017-06-13 Cognex Corporation Semi-supervised method for training multiple pattern recognition and registration tool models
US20150317511A1 (en) * 2013-11-07 2015-11-05 Orbeus, Inc. System, method and apparatus for performing facial recognition
US10467465B2 (en) 2015-07-20 2019-11-05 Kofax, Inc. Range and/or polarity-based thresholding for improved data extraction
US10242285B2 (en) 2015-07-20 2019-03-26 Kofax, Inc. Iterative recognition-guided thresholding and data extraction
US20170039010A1 (en) * 2015-08-03 2017-02-09 Fuji Xerox Co., Ltd. Authentication apparatus and processing apparatus
CN105740808B (en) * 2016-01-28 2019-08-09 北京旷视科技有限公司 Face identification method and device
US9858296B2 (en) * 2016-03-31 2018-01-02 Adobe Systems Incorporated Representative image selection for image management using face recognition
CN106101771A (en) * 2016-06-27 2016-11-09 乐视控股(北京)有限公司 Method for processing video frequency, device and terminal
EP3381017B1 (en) * 2016-10-31 2019-11-06 Google LLC Face reconstruction from a learned embedding
WO2018116560A1 (en) * 2016-12-21 2018-06-28 パナソニックIpマネジメント株式会社 Comparison device and comparison method
CN109684899A (en) * 2017-10-18 2019-04-26 大猩猩科技股份有限公司 A kind of face recognition method and system based on on-line study
US10803350B2 (en) 2017-11-30 2020-10-13 Kofax, Inc. Object detection and image cropping using a multi-detector approach
CN108376434B (en) * 2018-02-22 2020-12-25 深圳市华科智能信息有限公司 Intelligent home control system based on Internet of things
CN108805048B (en) * 2018-05-25 2020-01-31 腾讯科技(深圳)有限公司 face recognition model adjusting method, device and storage medium
CN109034048A (en) * 2018-07-20 2018-12-18 苏州中德宏泰电子科技股份有限公司 Face recognition algorithms models switching method and apparatus
WO2020037681A1 (en) * 2018-08-24 2020-02-27 太平洋未来科技(深圳)有限公司 Video generation method and apparatus, and electronic device
CN109583332B (en) * 2018-11-15 2021-07-27 北京三快在线科技有限公司 Face recognition method, face recognition system, medium, and electronic device
CN114354236B (en) * 2022-03-15 2022-06-10 武汉顺源游乐设备制造有限公司 Method and system for monitoring running state of oscillating fly chair based on big data analysis

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7308133B2 (en) * 2001-09-28 2007-12-11 Koninklijke Philips Elecyronics N.V. System and method of face recognition using proportions of learned model
KR100442834B1 (en) * 2002-07-19 2004-08-02 삼성전자주식회사 Method and system for face detecting using classifier learned decision boundary with face/near-face images
US7171043B2 (en) * 2002-10-11 2007-01-30 Intel Corporation Image recognition using hidden markov models and coupled hidden markov models

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
F. CARDINAUX ET AL.: "Face Verification Using Adapted Generative Models" PROCEEDINGS OF THE SIXTH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FGR'04), 2004, XP008067255 *
KIM M-S ET AL: "Face recognition using the embedded HMM with second-order block-specific observations" PATTERN RECOGNITION, ELSEVIER, KIDLINGTON, GB, vol. 36, no. 11, November 2003 (2003-11), pages 2723-2735, XP004453573 ISSN: 0031-3203 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2091021A4 (en) * 2006-12-13 2013-09-04 Panasonic Corp Face authentication device
EP2091021A1 (en) * 2006-12-13 2009-08-19 Panasonic Corporation Face authentication device
CN100589117C (en) * 2007-04-18 2010-02-10 中国科学院自动化研究所 Gender recognition method based on gait
EP2139225A1 (en) * 2007-04-23 2009-12-30 Sharp Kabushiki Kaisha Image picking-up device, computer readable recording medium including recorded program for control of the device, and control method
EP2139225A4 (en) * 2007-04-23 2011-01-05 Sharp Kk Image picking-up device, computer readable recording medium including recorded program for control of the device, and control method
US8780227B2 (en) 2007-04-23 2014-07-15 Sharp Kabushiki Kaisha Image pick-up device, control method, recording medium, and portable terminal providing optimization of an image pick-up condition
US9286544B2 (en) 2010-01-29 2016-03-15 Nokia Technologies Oy Methods and apparatuses for facilitating object recognition
EP2529334A4 (en) * 2010-01-29 2017-07-19 Nokia Technologies Oy Methods and apparatuses for facilitating object recognition
EP2688039A1 (en) * 2011-03-14 2014-01-22 Omron Corporation Image verification device, image processing system, image verification program, computer readable recording medium, and image verification method
US9058513B2 (en) 2011-03-14 2015-06-16 Omron Corporation Image verification device, image processing system, image verification program, computer readable recording medium, and image verification method
EP2688039A4 (en) * 2011-03-14 2015-04-08 Omron Tateisi Electronics Co Image verification device, image processing system, image verification program, computer readable recording medium, and image verification method
CN102867173A (en) * 2012-08-28 2013-01-09 华南理工大学 Human face recognition method and system thereof
CN109614510A (en) * 2018-11-23 2019-04-12 腾讯科技(深圳)有限公司 A kind of image search method, device, graphics processor and storage medium
CN109614510B (en) * 2018-11-23 2021-05-07 腾讯科技(深圳)有限公司 Image retrieval method, image retrieval device, image processor and storage medium

Also Published As

Publication number Publication date
EP1864245A2 (en) 2007-12-12
JP2008533606A (en) 2008-08-21
TW200707313A (en) 2007-02-16
CN101142586A (en) 2008-03-12
US20080192991A1 (en) 2008-08-14
BRPI0608711A2 (en) 2010-12-07
WO2006097902A3 (en) 2007-03-29

Similar Documents

Publication Publication Date Title
US20080192991A1 (en) Method of performing face recognition
Erzin et al. Multimodal speaker identification using an adaptive classifier cascade based on modality reliability
Nandakumar et al. Likelihood ratio-based biometric score fusion
JP4606779B2 (en) Image recognition apparatus, image recognition method, and program causing computer to execute the method
JP4543423B2 (en) Method and apparatus for automatic object recognition and collation
US9684850B2 (en) Biological information processor
Soltane et al. Face and speech based multi-modal biometric authentication
CN111160313B (en) Face representation attack detection method based on LBP-VAE anomaly detection model
US20070286497A1 (en) System and Method for Comparing Images using an Edit Distance
JPH1055444A (en) Recognition of face using feature vector with dct as base
Thian et al. Improving face authentication using virtual samples
US20030113002A1 (en) Identification of people using video and audio eigen features
CN116977679A (en) Image acquisition method and system based on image recognition
Pigeon et al. Image-based multimodal face authentication
CN112395901A (en) Improved face detection, positioning and recognition method in complex environment
JP2006085289A (en) Facial authentication system and facial authentication method
Cetingul et al. Robust lip-motion features for speaker identification
JP4187494B2 (en) Image recognition apparatus, image recognition method, and program for causing computer to execute the method
De-la-Torre et al. Incremental update of biometric models in face-based video surveillance
Cheng et al. Multiple-sample fusion of matching scores in biometric systems
Lin et al. Person re-identification by optimally organizing multiple similarity measures
KR101213280B1 (en) Face Recognition System and Method with Pertubed Probe Images
Pigeon et al. Multiple experts for robust face authentication
TWI806030B (en) Processing circuit and processing method applied to face recognition system
Kryszczuk et al. On face image quality measures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

WWE Wipo information: entry into national phase
Ref document number: 2006711106
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 11908443
Country of ref document: US

WWE Wipo information: entry into national phase
Ref document number: 2008501478
Country of ref document: JP

WWE Wipo information: entry into national phase
Ref document number: 200680008637.9
Country of ref document: CN

WWE Wipo information: entry into national phase
Ref document number: 4108/CHENP/2007
Country of ref document: IN

NENP Non-entry into the national phase
Ref country code: DE

NENP Non-entry into the national phase
Ref country code: RU

WWW Wipo information: withdrawn in national office
Ref document number: RU

WWP Wipo information: published in national office
Ref document number: 2006711106
Country of ref document: EP

ENP Entry into the national phase
Ref document number: PI0608711
Country of ref document: BR
Kind code of ref document: A2
Effective date: 20070914