Sketch-Photo Recognition
Field of the Invention
This invention relates to the field of matching a sketch with a photo in a photo database or vice versa using an eigenface method.
Background of the Invention
Due to growing demands in such application areas as law enforcement, video surveillance, banking and security system access authentication, automatic face recognition has attracted great attention in recent years. The advantages of facial identification over alternative methods, such as fingerprint identification, lie primarily in user convenience and cost, since face recognition results can be verified in uncertain cases by people without extensive training.
An important application of face recognition is to assist law enforcement. For example, automatic retrieval of photos of suspects from police mug-shot databases can help police narrow down potential suspects quickly. However, in most cases, the photo image of a suspect is not available. The best substitute available is often an artist drawing based on the recollection of an eyewitness. Searching an image database by using a sketch drawing is potentially very useful. It will not only help the police to locate a group of potential suspects, but may also help the witness and the artist to modify the sketch drawing of the suspect interactively based on similar images retrieved.
Despite the great need for such a sketch-based photo retrieval system, little research can be found in this area [1], [2], probably due to the difficulties in building a large data set suitable for a facial sketch database.
Two traditional methods used for matching a particular photo in a photo database are described in the following section.
A. Geometrical Measures
The geometrical feature method is intuitively the most straightforward. A great amount of geometrical face recognition research focuses on extracting relative positions and other parameters of face components such as eyes, mouth, and chin. Although geometrical features are easy to understand, they do not seem to contain enough information for stable face recognition. In particular, geometrical features change with different facial expressions and scales, and thus vary greatly across different images of the same person. A recent comparison between geometric features and template features greatly favors the template features [3].
B. Eigenface method
One of the most successful methods at this time for face image recognition may be the eigenface method [9]. It has been ranked among the most effective methods by the comprehensive FERET test [6], confirming similar findings in the survey and comparison study by Zhang et al. [8]. Even though an eigenface method may be sensitive to illumination, expression, and rotation changes, these may not be as important for applications focusing on mug-shot photo identification.
The eigenface approach uses the Karhunen-Loeve Transform (KLT) for the representation and recognition of faces. Once a set of eigenvectors, also called eigenfaces, is computed from the ensemble face covariance matrix, a face image can be approximately reconstructed using a weighted combination of the eigenfaces. The weights that characterize the expansion of the given image in terms of eigenfaces constitute the feature vector. When a new test image is given, the weights are computed by projecting the image onto the eigenface vectors. The classification is then carried out by comparing the distances between the weight vectors of the test image and the images from the database.
Although the Karhunen-Loeve Transform has been illustrated in detail in many textbooks and articles, the method will again be discussed here, with particular reference to photo recognition. To compute the Karhunen-Loeve Transform, let $\vec{Q}_i$ be a column vector representation of a sample face image, with the mean face computed as

$$ \vec{m}_p = \frac{1}{M} \sum_{i=1}^{M} \vec{Q}_i, $$

where M is the number of training samples in a photo set $A_p$. Removing the mean face from each image, we have $\vec{P}_i = \vec{Q}_i - \vec{m}_p$. The photo training set then forms an N by M matrix $A_p = [\vec{P}_1, \vec{P}_2, \ldots, \vec{P}_M]$, where N is the total number of pixels in the image. The sample covariance matrix can be estimated by

$$ W = \frac{1}{M} A_p A_p^T, \qquad (1) $$

where $A_p^T$ is the transpose of $A_p$.
Given the large size of a photo image, direct computation of the eigenvectors of W is not practical with current computation capabilities. The dominant eigenvector estimation method is generally used. Because of the relatively small sample image number M, the rank of W will be at most M. So the eigenvectors of the smaller M by M matrix $A_p^T A_p$ can be computed first,

$$ (A_p^T A_p) V_p = V_p \Lambda_p, \qquad (2) $$

where $V_p$ is the unit eigenvector matrix and $\Lambda_p$ is the diagonal eigenvalue matrix. Multiplying both sides by $A_p$, we have

$$ (A_p A_p^T)(A_p V_p) = (A_p V_p) \Lambda_p. \qquad (3) $$

Therefore, the orthonormal eigenvector matrix, or the eigenspace $U_p$ of the covariance matrix W, is

$$ U_p = A_p V_p \Lambda_p^{-1/2}. \qquad (4) $$
For a new face photo $\vec{P}_k$, its projection coefficients in the eigenvector space form the vector

$$ \vec{b}_p = U_p^T \vec{P}_k, $$

which is used as a feature vector for the classification.
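The computation in Equations (2) to (4) and the projection step can be sketched in a few lines; the following is a minimal NumPy illustration using synthetic data, where the array sizes and variable names are assumptions for the example only:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 8                         # N pixels per image, M training photos
A_p = rng.standard_normal((N, M))      # stand-in for the mean-removed photo set

# Eq. (2): eigen-decomposition of the small M x M matrix A_p^T A_p
lam, V_p = np.linalg.eigh(A_p.T @ A_p)
keep = lam > 1e-10 * lam.max()         # drop numerically zero eigenvalues
lam, V_p = lam[keep], V_p[:, keep]

# Eq. (4): U_p = A_p V_p Lambda_p^(-1/2), the orthonormal eigenspace of W
U_p = A_p @ V_p / np.sqrt(lam)

# Projecting a new (mean-removed) photo gives its feature vector b_p
P_new = rng.standard_normal(N)
b_p = U_p.T @ P_new
```

The same routine, applied to the mean-removed sketch training set, yields the sketch eigenspace $U_s$ used later.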
Because of the structural similarity across all face images, strong correlations exist among face images. Through the KLT, the eigenface method takes advantage of this high correlation to produce a highly compressed representation of face images, thus greatly improving face classification efficacy.
However, because of the great difference between face photos and sketches, direct application of the eigenface method for sketch based photo identification may not perform well or may not even work. The difference between a photo and a sketch of the same person is in general much larger than the difference between two photos of two different persons.
Objects of the Invention
Therefore, it is an object of this invention to provide a method and/or system that can match sketches to photos more efficiently and/or effectively. It is also an object of this invention to resolve at least one or more of the problems as set forth in the prior art. As a minimum, it is an object of this invention to provide the public with a useful choice.
Summary of the Invention
Accordingly, this invention provides a method for generating a pseudo-sketch Sr for a photo Pk using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up is computed from Ap, said method including the steps of: a) projecting Pk onto Up to compute projection coefficients bp, such that Pk = Upbp; and b) mapping As with bp to generate Sr.
Another aspect of this invention provides a method for generating a pseudo-photo Pr for a sketch Sk using a sketch set As and a corresponding photo set Ap having M samples of Si and Pi respectively, such that As = [S1, S2, ..., SM] and Ap = [P1, P2, ..., PM], wherein a sketch eigen space Us is computed from As, said method including the steps of:
a) projecting Sk onto Us to compute projection coefficients bs, such that Sk = Usbs; and b) mapping Ap with bs to generate Pr.
It is yet another aspect of this invention to provide a method for matching a sketch Sk with a most closely matched photo Pk in a photo gallery having a plurality of photos each denoted by $G_i^p$, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up and a sketch eigen space Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-sketch Sr for each of the photos $G_i^p$ in the photo gallery by a) projecting $G_i^p$ onto Up to compute projection coefficients bp, such that $G_i^p$ = Upbp; and b) mapping As with bp to generate Sr; and
- recognizing the most closely matched Pk in the photo gallery by comparing the pseudo-sketches Sr with Sk to identify the corresponding most closely matched pseudo-sketch $S_r^k$.
A fourth aspect of this invention provides a method for matching a sketch Sk with a most closely matched photo Pk in a photo gallery having a plurality of photos each denoted by $G_i^p$, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up and a sketch eigen space Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-photo Pr for Sk by a) projecting Sk onto Us to compute projection coefficients bs, such that Sk = Usbs; and b) mapping Ap with bs to generate Pr; and
- identifying the most closely matched Pk by comparing the pseudo-photo Pr with the photos in the photo gallery.
A fifth aspect of this invention provides a method for matching a photo Pk with a most closely matched sketch Sk in a sketch gallery having a plurality of sketches each denoted by $G_i^s$, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up and a sketch eigen space Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-photo Pr for each of the sketches $G_i^s$ in the sketch gallery by a) projecting $G_i^s$ onto Us to compute projection coefficients bs, such that $G_i^s$ = Usbs; and b) mapping Ap with bs to generate Pr; and
- recognizing the most closely matched Sk in the sketch gallery by comparing the pseudo-photos Pr with the photo Pk to identify the corresponding most closely matched pseudo-photo $P_r^k$.
Yet another aspect of this invention provides a method for matching a photo Pk with a most closely matched sketch Sk in a sketch gallery having a plurality of sketches each denoted by $G_i^s$, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up and a sketch eigen space Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-sketch Sr for Pk by a) projecting Pk onto Up to compute projection coefficients bp, such that Pk = Upbp; and b) mapping As with bp to generate Sr; and
- identifying the most closely matched Sk by comparing the pseudo-sketch Sr with the sketches in the sketch gallery.
This invention further provides computer systems incorporating an algorithm as set forth in any one of the above methods.
Various options and alterations of this invention will be described in the following sections and may become understandable to one skilled in the art.
Brief Description of the Drawings
Preferred embodiments of the present invention will now be explained by way of example and with reference to the accompanying drawings, in which: Figure 1 shows sample face photos (top two rows) and sketches (bottom two rows).
Figure 2 shows the general photo to sketch transformation algorithm of this invention.
Figure 3 shows the photo to sketch/sketch to photo transformation examples. Figure 4 shows the comparison of cumulative match score between various automatic recognition methods and human performance.
Detailed Description of Preferred Embodiments
This invention is now described by way of example with reference to the figures in the following sections.
Although not specifically stated above, it shall be obvious to one skilled in the art that the photos and sketches involved shall all be digitised by suitable input equipment, such as a scanner or digital camera, at a reasonable resolution. Further, the computation processes involved shall be carried out by computer systems having sufficient processing power and memory that incorporate the appropriate algorithm.
This invention requires a photo training set and a corresponding sketch training set, denoted hereafter as Ap and As respectively, to work. Each of Ap and As has M samples of respective Pi and Si. Although M can be any value larger than 1, it is preferred to have M of about 80 or more to improve accuracy. Each of Ap and As will be used to compute the corresponding eigen space U as illustrated above.
For each training photo image $\vec{P}_i$, there is a corresponding sketch $\vec{S}_i$, where $\vec{S}_i$ is a column vector representation of a sample sketch with the mean sketch $\vec{m}_s$ removed. Similar to $A_p = [\vec{P}_1, \vec{P}_2, \ldots, \vec{P}_M]$ for the photo image training set, we have a corresponding sketch training set $A_s = [\vec{S}_1, \vec{S}_2, \ldots, \vec{S}_M]$.
Photo-to-Sketch/Sketch-to-Photo Transformation and Recognition
1. Photo to Sketch Transformation
As stated above, for the conventional eigenface method, a face image can be reconstructed from the eigenfaces by,
$$ \vec{P}_r = U_p \vec{b}_p. \qquad (5) $$

Similarly, a sketch image can be reconstructed by $\vec{S}_r = U_s \vec{b}_s$, where $U_s$ is the sketch eigen space and $\vec{b}_s$ is the vector of projection coefficients in the sketch eigen space. Even though the above reconstructions can be done with modern computation power, they make it difficult to correlate two corresponding photo and sketch sets, and reduce the accuracy of photo-to-sketch/sketch-to-photo recognition significantly.
To resolve this problem, it is realized in this invention that since $U_p = A_p V_p \Lambda_p^{-1/2}$, the reconstructed photo can be represented by

$$ \vec{P}_r = U_p \vec{b}_p = A_p V_p \Lambda_p^{-1/2} \vec{b}_p = A_p \vec{c}_p, \qquad (6) $$

where $\vec{c}_p = V_p \Lambda_p^{-1/2} \vec{b}_p = [c_{p_1}, c_{p_2}, \ldots, c_{p_M}]^T$ is found to be a column vector of dimension M. Accordingly, Equation (6) can be rewritten in summation form,

$$ \vec{P}_r = A_p \vec{c}_p = \sum_{i=1}^{M} c_{p_i} \vec{P}_i. \qquad (7) $$

This shows that the reconstructed photo is in fact the best approximation of the original image, in the least mean-square-error sense, using an optimal linear combination of the M training sample images. The coefficients in $\vec{c}_p$ describe the contribution weight of each sample image. The reconstructed photos generated by this method are shown in the column labeled "Reconstructed Photo" of Figure 3.
Mapping each sample photo image $\vec{P}_i$ in Equation (7) to its corresponding sketch $\vec{S}_i$, as illustrated in Figure 2, we get

$$ \vec{S}_r = \sum_{i=1}^{M} c_{p_i} \vec{S}_i = A_s \vec{c}_p = A_s V_p \Lambda_p^{-1/2} \vec{b}_p. \qquad (8) $$
Given the structural resemblance between photos and sketches, it is reasonable to expect the reconstructed sketch $\vec{S}_r$ to resemble the real sketch. In such a reconstruction, a sample sketch $\vec{S}_i$ contributes more weight if its corresponding photo sample $\vec{P}_i$ contributes more weight to the reconstructed face photo. As an extreme example, if a reconstructed photo $\vec{P}_r$ has a unit weight $c_{p_k} = 1$ for a particular sample photo $\vec{P}_k$ and zero weights for all other sample photos, i.e. the reconstructed photo looks exactly like the sample photo $\vec{P}_k$, then the reconstructed sketch $\vec{S}_r$ is simply obtained by replacing it with the corresponding sketch $\vec{S}_k$. Through such a mapping, a photo image may now be transformed into a pseudo-sketch.
In summary, the photo to sketch transformation is computed through the following steps:
1. Compute the photo training set eigenvector matrix $U_p$ by first computing the eigenvectors $V_p$ and eigenvalues $\Lambda_p$ of $A_p^T A_p$. This step only needs to be performed once.
2. Project $\vec{P}_k$ in the eigenspace $U_p$ to compute the eigenface weight vector $\vec{b}_p = U_p^T \vec{P}_k$. Additionally, $\vec{c}_p$ can be computed by $\vec{c}_p = V_p \Lambda_p^{-1/2} \vec{b}_p$.
3. Reconstruct the pseudo-sketch $\vec{S}_r$ by mapping $A_s$ with $\vec{c}_p$: the pseudo-sketch may be generated by $\vec{S}_r = A_s \vec{c}_p = A_s V_p \Lambda_p^{-1/2} \vec{b}_p$.
As stated earlier, the average or mean image is preferably removed from each raw photo $\vec{Q}$ before computation, by computing the average photo image $\vec{m}_p$ for the photo training set and the average sketch $\vec{m}_s$ for the sketch training set. In such a case, the following additional steps may be required:
4. Remove the photo mean $\vec{m}_p$ from the input photo image $\vec{Q}_k$ to get $\vec{P}_k = \vec{Q}_k - \vec{m}_p$.
5. Finally, add back the average sketch $\vec{m}_s$ to get the final viewable pseudo-sketch.
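The steps above can be sketched as follows; a hypothetical NumPy illustration using synthetic training data, in which all sizes and names are assumptions for the example rather than the invention's actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 6
Q = rng.standard_normal((N, M))              # raw training photos (columns)
R = rng.standard_normal((N, M))              # corresponding raw training sketches

# Step 1 (performed once): means, mean-removed sets, photo eigen-decomposition
m_p = Q.mean(axis=1, keepdims=True)
m_s = R.mean(axis=1, keepdims=True)
A_p, A_s = Q - m_p, R - m_s
lam, V_p = np.linalg.eigh(A_p.T @ A_p)
keep = lam > 1e-10 * lam.max()               # mean removal leaves a zero eigenvalue
lam, V_p = lam[keep], V_p[:, keep]
U_p = A_p @ V_p / np.sqrt(lam)               # photo eigenspace, Eq. (4)

def photo_to_sketch(Q_k):
    P_k = Q_k - m_p.ravel()                  # Step 4: remove the photo mean
    b_p = U_p.T @ P_k                        # Step 2: eigenface weight vector
    c_p = (V_p / np.sqrt(lam)) @ b_p         #         c_p = V_p Lambda_p^(-1/2) b_p
    S_r = A_s @ c_p                          # Step 3: map A_s with c_p, Eq. (8)
    return S_r + m_s.ravel()                 # Step 5: add back the mean sketch
```

Note that for a photo taken from the training set itself, this mapping reproduces the corresponding training sketch exactly, which matches the extreme example discussed above.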
Figure 3 shows the comparison between the real sketch and the reconstructed sketch. The similarity between the two can now be seen.
Although the above discussion refers mainly to photo-to-sketch transformation, it should be apparent that the reverse transformation can also be done by the same method. For instance, a pseudo-photo can be obtained from a training sketch set by the formula
$$ \vec{P}_r = A_p \vec{c}_s = A_p V_s \Lambda_s^{-1/2} \vec{b}_s. $$
Sketch Recognition
After such a photo-to-sketch transformation, sketch recognition from a plurality of photos may now become easier.
The detailed algorithm can be summarized as follows:
Use $U_p$, which is pre-determined from Ap, to compute the corresponding pseudo-sketch $\vec{S}_r^i$ for each photo $\vec{G}_i^p$ in the photo gallery by the sketch transformation algorithm described earlier. Note that the photo gallery does not need to be the same as the photo training set Ap.
Recognize the most closely matched $\vec{P}_k$ in the photo gallery by comparing the pseudo-sketches $\vec{S}_r^i$ with the probe sketch $\vec{S}_k$ to identify the corresponding most closely matched pseudo-sketch $\vec{S}_r^k$.
The pseudo-sketches and the probe sketch can be compared using the conventional eigenface method or any other suitable method. For example, the elastic graph matching method can be used for the recognition [4], [7].
As an example of one of the various comparison methods, one may first compute the eigenvectors using the sketch training samples. Then the probe sketch $\vec{S}_k$ and the generated pseudo-sketches $\vec{S}_r^i$ from the photo gallery are projected onto the eigensketch vectors. The projection coefficients are then used as feature vectors for final classification. The detailed comparison algorithm can be summarized as follows:
Compute the eigensketch weight vector $\vec{b}_s = U_s^T \vec{S}_k$ for the probe sketch $\vec{S}_k$ by projecting $\vec{S}_k$ in the sketch eigenspace $U_s$.
Compute the distance between $\vec{b}_s$ and each $\vec{b}_r$, where $\vec{S}_r = U_s \vec{b}_r$ for each pseudo-sketch generated from the photo gallery. The sketch is classified as the face with minimum distance between the two vectors.
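The two-stage recognition algorithm above can be sketched as follows; a minimal NumPy illustration with synthetic data in which, purely for demonstration, the gallery coincides with the training photos (as noted above, the gallery need not be the training set in general):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 256, 6                                    # pixels, training-pair count
A_p = rng.standard_normal((N, M))
A_p -= A_p.mean(axis=1, keepdims=True)           # mean-removed photo training set
A_s = rng.standard_normal((N, M))
A_s -= A_s.mean(axis=1, keepdims=True)           # mean-removed sketch training set

def eigenspace(A):
    """Return (V, lam, U) for a training matrix A, per Eqs. (2) and (4)."""
    lam, V = np.linalg.eigh(A.T @ A)
    keep = lam > 1e-10 * lam.max()               # drop near-zero eigenvalues
    lam, V = lam[keep], V[:, keep]
    return V, lam, A @ V / np.sqrt(lam)

V_p, lam_p, U_p = eigenspace(A_p)
V_s, lam_s, U_s = eigenspace(A_s)

def pseudo_sketch(photo):
    # b_p = U_p^T photo, c_p = V_p Lambda_p^(-1/2) b_p, S_r = A_s c_p (Eq. (8))
    c_p = (V_p / np.sqrt(lam_p)) @ (U_p.T @ photo)
    return A_s @ c_p

def recognize(probe_sketch, gallery_photos):
    """Index of the gallery photo whose pseudo-sketch best matches the probe."""
    b_probe = U_s.T @ probe_sketch
    dists = [np.linalg.norm(U_s.T @ pseudo_sketch(g) - b_probe)
             for g in gallery_photos.T]
    return int(np.argmin(dists))
```

Here `recognize` compares weight vectors in the sketch eigenspace, exactly as in the two steps above; any other suitable distance or classifier could be substituted at that point.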
In the above algorithm, the photos in the gallery are first transformed to pseudo-sketches $\vec{S}_r^i$ based on the photo eigenspace $U_p$. Then the recognition is conducted in the sketch eigenspace $U_s$. Alternatively, we can reverse the process by transforming each probe sketch $\vec{S}_k$ into a pseudo-photo $\vec{P}_r^k$ based on the sketch eigenspace, then use the photo eigenspace for recognition using the conventional eigenface method or any other suitable method.
For both approaches, we rely on two sets of reconstruction coefficients, $\vec{c}_p$ and $\vec{c}_s$, where $\vec{c}_p$ represents the weights for reconstructing a photo using the photo training set and $\vec{c}_s$ represents the weights for reconstructing a sketch using the sketch training set. In fact, to compare a photo with a sketch, we can also use their corresponding reconstruction coefficients $\vec{c}_p$ and $\vec{c}_s$ directly as feature vectors for recognition.
As shown in the previous section, for an input photo, its reconstruction coefficient vector on the photo training set is $\vec{c}_p = V_p \Lambda_p^{-1/2} \vec{b}_p$, where $\vec{b}_p$ is the projection weight vector of the photo in the photo eigenspace. Similarly, for an input sketch, its reconstruction coefficient vector on the sketch training set is $\vec{c}_s = V_s \Lambda_s^{-1/2} \vec{b}_s$, where $\vec{b}_s$ is the projection weight vector of the input sketch in the sketch eigenspace. If we compare a photo with a sketch using $\vec{c}_p$ and $\vec{c}_s$ directly, the distance is defined as

$$ d_1 = \| \vec{c}_p - \vec{c}_s \|. \qquad (9) $$
If we first generate a pseudo-sketch for a photo, then calculate the distance in the sketch eigenspace, the distance is defined as

$$ d_2 = \| \vec{b}_r - \vec{b}_s \|, \qquad (10) $$

where $\vec{b}_r$ is the weight vector of the generated pseudo-sketch projected in the sketch eigenspace, and $\vec{b}_s$ is the weight vector of the real sketch projected in the sketch eigenspace. Given $\vec{b}_r = U_s^T \vec{S}_r$, $\vec{S}_r = A_s \vec{c}_p$, and $U_s = A_s V_s \Lambda_s^{-1/2}$, we can compute $\vec{b}_r$ as

$$ \vec{b}_r = U_s^T A_s \vec{c}_p = \Lambda_s^{-1/2} V_s^T A_s^T A_s \vec{c}_p. $$

Since $A_s^T A_s V_s = V_s \Lambda_s$, we have

$$ \vec{b}_r = \Lambda_s^{1/2} V_s^T \vec{c}_p. \qquad (11) $$

To compute $\vec{b}_s$, we can use the relation $\vec{c}_s = V_s \Lambda_s^{-1/2} \vec{b}_s$ to get $\vec{b}_s = \Lambda_s^{1/2} V_s^T \vec{c}_s$. Finally, the distance $d_2$ becomes

$$ d_2 = \| \Lambda_s^{1/2} V_s^T (\vec{c}_p - \vec{c}_s) \|, \qquad (12) $$
where $\vec{c}_p$ is the vector of contribution weights of each photo for photo reconstruction by the photo training set, and $\vec{c}_s$ is the vector of contribution weights for the probe sketch $\vec{S}_k$ in the sketch eigen space $U_s$.
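The derivation of Equations (10) to (12) can be checked numerically; the following is a small NumPy sketch with synthetic data, where all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 256, 6
A_s = rng.standard_normal((N, M))
A_s -= A_s.mean(axis=1, keepdims=True)       # mean-removed sketch training set

lam, V_s = np.linalg.eigh(A_s.T @ A_s)       # Eq. (2) for the sketch set
keep = lam > 1e-10 * lam.max()
lam, V_s = lam[keep], V_s[:, keep]
U_s = A_s @ V_s / np.sqrt(lam)               # sketch eigenspace, Eq. (4)

c_p = rng.standard_normal(M)                 # photo reconstruction weights
b_s = rng.standard_normal(lam.size)          # probe-sketch weight vector
c_s = (V_s / np.sqrt(lam)) @ b_s             # c_s = V_s Lambda_s^(-1/2) b_s

b_r = U_s.T @ (A_s @ c_p)                    # project the pseudo-sketch A_s c_p
d2_direct = np.linalg.norm(b_r - b_s)                             # Eq. (10)
d2_closed = np.linalg.norm(np.sqrt(lam) * (V_s.T @ (c_p - c_s)))  # Eq. (12)
```

The two values agree, confirming that the closed form (12) equals the direct sketch-eigenspace distance (10); the analogous identity for $d_3$ follows by exchanging the roles of the photo and sketch training sets.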
Alternatively, if we first generate a pseudo-photo for a sketch, then calculate the distance in the photo eigenspace, the distance $d_3$ can be obtained by

$$ d_3 = \| \Lambda_p^{1/2} V_p^T (\vec{c}_p - \vec{c}_s) \|, \qquad (13) $$

where $\vec{c}_p$ is the vector of contribution weights of each photo for photo reconstruction by the photo training set, and $\vec{c}_s$ is the vector of contribution weights for the probe sketch $\vec{S}_k$ in the sketch eigen space $U_s$.
The distances for recognition are different for the three cases, and their performances will be compared in the experiments that will be described later.
Again, as may be obvious to one skilled in the art, the methods described can be used to match a photo Pk with a sketch in a sketch gallery. Once again, we may have the two options:
a. Converting all of the sketches in the sketch gallery into pseudo-photos, and then comparing them with the photo Pk. The comparison can be done by comparing bp and br, where bp = projection coefficients of Pk onto Up, and br = projection coefficients of each of the pseudo-photos onto Up. The "distance formula" (12) should now be rewritten as

$$ d = \| \Lambda_p^{1/2} V_p^T (\vec{c}_p - \vec{c}_s) \|, $$

where cp = contribution weights for reconstructing Pk in Up, and cs = contribution weights of each of the sketches in the sketch gallery onto Us.
b. Converting the photo Pk into a pseudo-sketch Sr, then comparing it with all of the sketches in the sketch gallery. The comparison can be done by comparing bs and br, where bs = projection coefficients of each of the sketches in the sketch gallery onto Us, and br = projection coefficients of the pseudo-sketch Sr onto Us. The "distance formula" (12) should now be rewritten as

$$ d = \| \Lambda_s^{1/2} V_s^T (\vec{c}_p - \vec{c}_s) \|, $$

where cp = contribution weights for reconstructing Pk in Up, and cs = contribution weights of each of the sketches for sketch reconstruction in Us.
Experiments
In order to demonstrate the effectiveness of the new algorithm, a set of experiments was conducted to compare it with the geometrical measures and the conventional eigenface method. A database containing 188 photo-sketch pairs from 188 different people was used for the experiment. Eighty-eight photo-sketch pairs were used as training data, and the other 100 photo-sketch pairs were used for testing.
A conventional geometrical method was used in the experiments. The recognition test protocol used in FERET [5] was followed: the gallery set used in the experiment consists of 100 face photos, and the probe set consists of 100 face sketches. The cumulative match score is used to evaluate the performance of the algorithms. It measures the percentage of tests in which "the correct answer is in the top n matches", where n is called the rank.
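The cumulative match score used throughout the experiments can be computed as in the following sketch; a minimal, illustrative implementation assuming each probe has exactly one correct identity in the gallery:

```python
def cumulative_match_score(ranked_ids, true_ids, n):
    """Percentage of probes whose correct identity is within the top-n matches.

    ranked_ids: for each probe, gallery identities sorted best match first.
    true_ids:   the correct gallery identity for each probe.
    """
    hits = sum(truth in ranked[:n] for ranked, truth in zip(ranked_ids, true_ids))
    return 100.0 * hits / len(true_ids)

# Illustrative example: three probes ranked against a three-photo gallery
ranked = [[2, 0, 1], [1, 2, 0], [0, 1, 2]]
truth = [0, 1, 0]
score_rank1 = cumulative_match_score(ranked, truth, 1)   # two of three correct
```

Plotting this score against the rank n gives curves such as those compared in Fig. 4.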
A. Comparison with Traditional Methods
Table 1 shows the cumulative match scores of the first ten ranks for the three methods.
Table 1. Cumulative match score for the three methods.
Both the geometrical method and the eigenface method perform poorly in the experiment. Only around 30% accuracy is obtained for the first match, and the accuracy for the tenth-rank match is 70%. The poor performance of the eigenface method is to be expected, given the large differences between photo and sketch. As for the geometrical measures, the results show that the reason a photo and a sketch look alike is not mainly the geometrical similarity of the facial components. Like a caricature, a sketch exaggerates the sizes of facial components. If a person has a larger than average nose, the sketch will depict an even larger nose. Conversely, if a person has a smaller than normal nose, he will be drawn with a nose of further reduced size. The results demonstrate the effect of such exaggeration.
The eigensketch transform method greatly improves the recognition accuracy to 96% for the top-ten match. The first-match accuracy is more than double that of the other two methods. This clearly shows the advantage of the new approach. The results also depend on the quality of the sketch drawings, which are preferably prepared by the same draftsman to improve accuracy. As shown in Fig. 1, not all sketches look exactly like the original photo. The first row of sketches in Fig. 1 is quite similar to the corresponding photos, yet the sketches in the second row are much less so. The significance of the results lies in the large gap between the new methods and the traditional face recognition methods.
B. Comparison of the Three Distance Measures
In this section, we conduct a set of experiments to compare the performance of the three distance measures dl, d2, and d3 as described above. The same dataset described above is used for the comparison. Experimental results are shown in Table 2.
Table 2. Cumulative match score using three different distances.
From the results one can see that $d_1 = \| \vec{c}_p - \vec{c}_s \|$ is the least effective among the three distances. This is not surprising, since both $\vec{c}_p$ and $\vec{c}_s$ represent coefficients projected in non-orthogonal spaces spanned by the training photos and sketches respectively, and therefore cannot properly reflect the distance between face images. Both $d_2$ and $d_3$ are distances computed in orthogonal eigen-spaces and thus give much better performance. An interesting observation is that $d_2$ is consistently better than $d_3$. This seems to suggest that the sketch eigenspace can characterize the differences among different people better than the photo eigenspace. This may be possible since, in the drawing process, an artist tends to capture and highlight the distinct characteristics of a face, thus making it easier to distinguish. The above experiment seems to confirm this point, since $\| \vec{c}_p - \vec{c}_s \|$ gives better recognition performance after projection to the sketch eigenspace than to the photo eigenspace.
There may be another explanation for the better performance of $d_2$. In order to compute $d_2$, a photo needs to be transformed into a pseudo-sketch, while to compute $d_3$, a sketch has to be converted into a pseudo-photo. In general, compressing more information into a compact representation is more stable than expanding a compact representation into a full representation. Since photos contain much more detailed information than sketches, it should be easier to convert a photo into a sketch. As an extreme example, suppose the sketch contains only some simple outlines of facial features; it is quite easy to draw the outlines from the face photo, but it will be very difficult to reconstruct the photo from the simple line drawings. Therefore, for the $d_2$ computation, better performance is achieved because of the more stable photo-to-sketch transformation.
C. Comparison with Human Performance
Two experiments were conducted to compare the new method with sketch recognition by human beings. Such a comparison is important since, in current law enforcement applications, the sketch of a suspect is usually widely distributed through the mass media. It is expected that a match with the real person can be found by people who have seen the sketch. If we can demonstrate that automatic recognition by computers can perform as effectively as human beings, we can then use computers to systematically conduct a large-scale search in a large photo-ID database.
In the first experiment, a sketch is shown to a human test candidate for a period of time, and the sketch is then taken away before the photo search starts. The candidate tries to memorize the sketch, then goes on to search the photo database without the sketch reference in front of him. The candidate can go through the database and is allowed to select up to 10 photos that are similar to the sketch. He can then rank the selected photos according to their similarity to the sketch. This is closer to the real application scenario, since people usually see the sketch of a criminal suspect in a newspaper or on TV briefly, and then have to rely on their memory to match the sketch with the suspect in real life.
For the second experiment, we allow the test candidates to look at the sketch while they search through the photo database. The result can be considered a benchmark for the automatic recognition system to match. Experimental results of both tests are shown in Fig. 4. The human performance in the first experiment is much lower than the computer recognition result. This is not only because of the difference between photo and sketch, but also because of memory distortion, since it is difficult to memorize the sketch precisely. In fact, people are very good at distinguishing familiar faces, such as relatives and famous public figures, but are not very good at distinguishing strangers. Without putting the sketch and photo together for detailed comparison, it is hard for a person to match the two.
When the candidate is allowed to see the sketch while searching through the database, the accuracy rate rises to 73%, which is comparable to the computer recognition rate. However, unlike the computer recognition rate, which increases to 96% by the tenth rank, the human performance does not increase much with rank. These encouraging results show that a computer can perform sketch matching with accuracy at least comparable to that obtained by a human being. Given this, we can now perform automatic searching of a large database using a sketch just as with a regular photo. This is extremely important for law enforcement applications where a photo is often not available.
A novel face sketch recognition algorithm is developed in this invention utilizing a novel photo-to-sketch/sketch-to-photo transformation. The photo-to-sketch transformation method is shown to be a more effective approach for automatic matching between a photo and a sketch. Surprisingly, the recognition performance of the new approach can be even better than that of human beings, in addition to the improvement in speed and efficiency.
Inclusion of hair portion in the photos or sketches may enhance accuracy, as the hair portion is an additional characterizing feature of a human face. However, this may not be desirable in some cases as hairstyle can be changed relatively easily. It may be a matter of design choice to be left to the system operator or designer to decide.
Although the above discussion focuses on human face sketch-photo and/or photo-sketch recognition, it may be obvious to one skilled in the art that the methods of this invention can also be used for other types of recognition, say buildings, animals, or other objects. Even though it is believed that the major application may lie in law enforcement, use in other areas may be possible.
While the preferred embodiment of the present invention has been described in detail by the examples, it is apparent that modifications and adaptations of the present invention will occur to those skilled in the art. It is to be expressly understood, however,
that such modifications and adaptations are within the scope of the present invention, as set forth in the following claims. Furthermore, the embodiments of the present invention shall not be interpreted to be restricted by the examples or figures only.
References
[1] P. J. Benson and D. I. Perrett, "Perception and recognition of photographic quality facial caricatures: implications for the recognition of natural images," European Journal of Cognitive Psychology, vol. 3, no. 1, pp. 105-135, 1991.
[2] V. Bruce, E. Hanna, N. Dench, P. Healy, and A. M. Burton, "The importance of 'mass' in line drawings of faces," Applied Cognitive Psychology, vol. 6, pp. 619-628, 1992.
[3] R. Brunelli and T. Poggio, "Face recognition: features versus templates," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, Oct. 1993.
[4] M. Lades, J. C. Vorbrüggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Würtz, and W. Konen, "Distortion invariant object recognition in the dynamic link architecture," IEEE Trans. on Computers, vol. 42, no. 3, pp. 300-311, March 1993.
[5] H. Moon and P. J. Phillips, "Analysis of PCA-based face recognition algorithms," in Empirical Evaluation Techniques in Computer Vision, K. W. Bowyer and P. J. Phillips, Eds., IEEE Computer Society Press, Los Alamitos, CA, 1998.
[6] P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, "The FERET evaluation," in Face Recognition: From Theory to Applications, H. Wechsler, P. J. Phillips, V. Bruce, F. F. Soulie, and T. S. Huang, Eds., Berlin: Springer-Verlag, 1998.
[7] L. Wiskott, J. Fellous, N. Krüger, and C. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, July 1997.
[8] J. Zhang, Y. Yan, and M. Lades, "Face recognition: eigenface, elastic matching, and neural nets," Proceedings of the IEEE, vol. 85, no. 9, pp. 1423-1435, Sept. 1997.
[9] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.