WO2004027692A1 - Sketch-photo recognition - Google Patents

Sketch-photo recognition

Info

Publication number
WO2004027692A1
WO2004027692A1
Authority
WO
WIPO (PCT)
Prior art keywords
sketch
photo
pseudo
closely matched
gallery
Prior art date
Application number
PCT/CN2003/000797
Other languages
French (fr)
Inventor
Xiaoou Sean Tang
Original Assignee
Xiaoou Sean Tang
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaoou Sean Tang filed Critical Xiaoou Sean Tang
Priority to AU2003271508A priority Critical patent/AU2003271508A1/en
Publication of WO2004027692A1 publication Critical patent/WO2004027692A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Definitions

  • The eigensketch transform method greatly improves the recognition accuracy, to 96% for the top-ten match.
  • The first-match accuracy more than doubles that of the other two methods, which clearly shows the advantage of the new approach.
  • The results also depend on the quality of the sketch drawings, which are preferably prepared by the same artist to improve accuracy. As shown in Fig. 1, not all sketches look exactly like the original photo: the first row of sketches in Fig. 1 are quite similar to their corresponding photos, yet the sketches in the second row are much less so. The significance of the results lies in the large gap between the new methods and the traditional face recognition methods.
  • In a human comparison test, a sketch is shown to a human test candidate for a period of time; the sketch is then taken away before the photo search starts.
  • The candidate tries to memorize the sketch, then goes on to search the photo database without the sketch reference in front of him.
  • The candidate can go through the database and is allowed to select up to 10 photos that are similar to the sketch. He can then rank the selected photos according to their similarity to the sketch. This is closer to a real application scenario, since people usually see the sketch of a criminal suspect in a newspaper or on TV briefly, and then have to rely on their memory to match the sketch with the suspect in real life.
  • A novel face sketch recognition algorithm is developed in this invention, utilizing a novel photo-to-sketch/sketch-to-photo transformation.
  • The photo-to-sketch transformation method is shown to be a more effective approach for automatic matching between a photo and a sketch.
  • The recognition performance of the new approach can be even better than that of human beings, not to mention the improvement in speed and efficiency.
  • Retaining the hair portion in the photos or sketches may enhance accuracy, as the hair is an additional characterizing feature of a human face. However, this may not be desirable in some cases, as hairstyle can be changed relatively easily; it may be a matter of design choice left to the system operator or designer.


Abstract

Automatic retrieval of face images from police mug-shot databases is critically important for law enforcement agencies. It can help investigators to locate or narrow down potential suspects efficiently. However, in many cases, the photo image of a suspect is not available and the best substitute is often a sketch drawing based on the recollection of an eyewitness. This invention presents a novel photo retrieval system using face sketches. By a novel method of transforming a photo image into a sketch or vice versa, the difference between photo and sketch may be reduced. This may allow matching between the photo and sketch. Experiments also demonstrate the efficacy of the algorithm.

Description

Sketch-Photo Recognition
Field of the Invention
This invention relates to the field of matching a sketch with a photo in a photo database or vice versa using an eigenface method.
Background of the Invention
Due to growing demands in such application areas as law enforcement, video surveillance, banking, and security system access authentication, automatic face recognition has attracted great attention in recent years. The advantages of facial identification over alternative methods, such as fingerprint identification, are based primarily on user convenience and cost, since face recognition results can be checked in uncertain cases by people without extensive training.
An important application of face recognition is to assist law enforcement. For example, automatic retrieval of photos of suspects from police mug-shot databases can help police narrow down potential suspects quickly. However, in most cases, the photo image of a suspect is not available. The best substitute available is often an artist drawing based on the recollection of an eyewitness. Searching an image database by using a sketch drawing is potentially very useful. It will not only help the police to locate a group of potential suspects, but may also help the witness and the artist to modify the sketch drawing of the suspect interactively based on similar images retrieved.
Despite the great need for such a sketch-based photo retrieval system, little research can be found in this area, probably due to the difficulties in building a large data set suitable for a facial sketch database.
Two traditional methods used in matching a particular photo in a photo database are described in the following section.
A. Geometrical Measures
The geometrical feature method is intuitively the most straightforward. A great amount of geometrical face recognition research focuses on extracting relative positions and other parameters of face components such as the eyes, mouth, and chin. Although geometrical features are easy to understand, they do not seem to contain enough information for stable face recognition. In particular, geometrical features change with different facial expressions and scales, and thus vary greatly across different images of the same person. A recent comparison between geometric features and template features greatly favors the template features.
B. Eigenface method
One of the most successful methods at this time for face image recognition may be the eigenface method [9]. It has been ranked among the most effective methods by the comprehensive FERET test [6], confirming similar findings in the survey and the comparison study by Zhang et al. Even though an eigenface method may be sensitive to illumination, expression, and rotation changes, these may not be as important for applications focusing on mug-shot photo identification.
The eigenface approach uses the Karhunen-Loeve Transform (KLT) for the representation and recognition of faces. Once a set of eigenvectors, also called eigenfaces, is computed from the ensemble face covariance matrix, a face image can be approximately reconstructed using a weighted combination of the eigenfaces. The weights that characterize the expansion of the given image in terms of eigenfaces constitute the feature vector. When a new test image is given, the weights are computed by projecting the image onto the eigenface vectors. The classification is then carried out by comparing the distances between the weight vectors of the test image and the images from the database.
Although the Karhunen-Loeve Transform has been illustrated in detail in many textbooks and articles, this method will again be discussed here, with particular attention to photo recognition. To compute the Karhunen-Loeve Transform, let Q_i be a column vector representation of a sample face image, with the mean face computed as

m_p = (1/M) Σ_{i=1}^{M} Q_i,

where M is the number of training samples in the photo set A_p. Removing the mean face from each image, we have P_i = Q_i − m_p. The photo training set then forms an N by M matrix A_p = [P_1, P_2, ..., P_M], where N is the total number of pixels in each image. The sample covariance matrix can be estimated by

W = (1/M) A_p A_p^T, (1)

where A_p^T is the transpose matrix of A_p.
Given the large size of a photo image, direct computation of the eigenvectors of W is not practical with current computation capabilities. The dominant eigenvector estimation method is generally used. Because of the relatively small sample image number M, the rank of W will only be M − 1, so the eigenvectors of the smaller M by M matrix A_p^T A_p can be computed first,

(A_p^T A_p) V_p = V_p Λ_p, (2)

where V_p is the unit eigenvector matrix and Λ_p is the diagonal eigenvalue matrix. Multiplying both sides by A_p, we have

(A_p A_p^T)(A_p V_p) = (A_p V_p) Λ_p. (3)

Therefore, the orthonormal eigenvector matrix, or the eigenspace U_p of the covariance matrix W, is

U_p = A_p V_p Λ_p^(-1/2). (4)

For a new face photo P_k, its projection coefficients in the eigenvector space form the vector

b_p = U_p^T P_k,

which is used as a feature vector for the classification.
Because of the structural similarity across all face images, strong correlation exists among face images. Through the KLT, the eigenface method takes advantage of such high correlation to produce a highly compressed representation of face images, thereby greatly improving the face classification efficacy.
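As a concrete illustration, the small-matrix computation of equations (2)-(4) can be sketched in a few lines of NumPy. The helper name and toy data below are assumptions for illustration only, not part of the invention:

```python
import numpy as np

def photo_eigenspace(A_p, eps=1e-10):
    """Eigenspace U_p = A_p V_p Lam_p^(-1/2) of W = (1/M) A_p A_p^T,
    computed via the small M x M matrix A_p^T A_p (the dominant
    eigenvector estimation trick). A_p is N x M, one mean-removed
    photo per column."""
    lam, V = np.linalg.eigh(A_p.T @ A_p)            # eigenvectors of the small matrix
    keep = lam > eps                                 # rank of W is only M - 1
    lam, V = lam[keep][::-1], V[:, keep][:, ::-1]    # descending eigenvalues
    U_p = (A_p @ V) / np.sqrt(lam)                   # U_p = A_p V_p Lam_p^(-1/2)
    return U_p, V, lam

# Toy example: M = 5 "photos" of N = 100 pixels each
rng = np.random.default_rng(0)
Q = rng.random((100, 5))
m_p = Q.mean(axis=1, keepdims=True)                  # mean face
A_p = Q - m_p                                        # mean-removed training matrix
U_p, V_p, lam_p = photo_eigenspace(A_p)
b_p = U_p.T @ A_p[:, 0]                              # feature vector of the first photo
```

The columns of U_p are orthonormal, and a training photo reconstructs exactly from its feature vector since it lies in the span of those columns.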
However, because of the great difference between face photos and sketches, direct application of the eigenface method for sketch based photo identification may not perform well or may not even work. The difference between a photo and a sketch of the same person is in general much larger than the difference between two photos of two different persons.
Objects of the Invention
Therefore, it is an object of this invention to provide a method and/or system that can match sketches to photos more efficiently and/or effectively. It is also an object of this invention to resolve at least one or more of the problems as set forth in the prior art. As a minimum, it is an object of this invention to provide the public with a useful choice.
Summary of the invention
Accordingly, this invention provides a method for generating a pseudo-sketch Sr for a photo Pk using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up is computed from Ap. The method includes the steps of: a) projecting Pk onto Up to compute projection coefficient bp, such that Pk = Upbp; and b) mapping As with bp to generate Sr.
Another aspect of this invention also provides a method for generating a pseudo-photo Pr for a sketch Sk using a sketch set As and a corresponding photo set Ap having M samples of Si and Pi respectively, such that As = [S1, S2, ..., SM] and Ap = [P1, P2, ..., PM], wherein a sketch eigen space Us is computed from As, said method including the steps of: a) projecting Sk onto Us to compute projection coefficient bs, such that Sk = Usbs; and b) mapping Ap with bs to generate Pr.
It is yet another aspect of this invention to provide a method for matching a sketch Sk with a most closely matched photo Pk in a photo gallery having a plurality of photos each denoted by Gi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up and a sketch eigen space Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-sketch Sr for each photo Gi in the photo gallery by a) projecting Gi onto Up to compute projection coefficient bp, such that Gi = Upbp; and b) mapping As with bp to generate Sr; and
- recognizing the most closely matched Pk in the photo gallery by comparing the pseudo-sketches Sr with Sk to identify the corresponding most closely matched pseudo-sketch Srk.
A fourth aspect of this invention also provides a method for matching a sketch Sk with a most closely matched photo Pk in a photo gallery having a plurality of photos each denoted by Gi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up and a sketch eigen space Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-photo Pr for Sk by a) projecting Sk onto Us to compute projection coefficient bs, such that Sk = Usbs; and b) mapping Ap with bs to generate Pr; and
- identifying the most closely matched Pk by comparing the pseudo-photo Pr with the photos in the photo gallery.
A fifth aspect of this invention provides a method for matching a photo Pk with a most closely matched sketch Sk in a sketch gallery having a plurality of sketches each denoted by Gi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up and a sketch eigen space Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-photo Pr for each sketch Gi in the sketch gallery by a) projecting Gi onto Us to compute projection coefficient bs, such that Gi = Usbs; and b) mapping Ap with bs to generate Pr; and
- recognizing the most closely matched Sk in the sketch gallery by comparing the pseudo-photos Pr with the photo Pk to identify the corresponding most closely matched pseudo-photo Prk.
Yet another aspect of this invention provides a method for matching a photo Pk with a most closely matched sketch Sk in a sketch gallery having a plurality of sketches each denoted by Gi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigen space Up and a sketch eigen space Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-sketch Sr for Pk by a) projecting Pk onto Up to compute projection coefficient bp, such that Pk = Upbp; and b) mapping As with bp to generate Sr; and
- identifying the most closely matched Sk by comparing the pseudo-sketch Sr with the sketches in the sketch gallery.
This invention further provides computer systems incorporating an algorithm as set forth by any one of the above methods.
Various options and alterations of this invention will be described in the following sections and may become understandable to one skilled in the art.
Brief description of the drawings
Preferred embodiments of the present invention will now be explained by way of example and with reference to the accompanying drawings, in which: Figure 1 shows sample face photos (top two rows) and sketches (bottom two rows).
Figure 2 shows the general photo to sketch transformation algorithm of this invention.
Figure 3 shows the photo to sketch/sketch to photo transformation examples. Figure 4 shows the comparison of cumulative match score between various automatic recognition methods and human performance.
Detailed Description of Preferred Embodiments
This invention is now described by way of example with reference to the figures in the following sections.
Although not specifically stated above, it shall be obvious to one skilled in the art that the photos and sketches involved shall all be digitised by suitable input equipment, such as a scanner or digital camera, at a reasonable resolution. Further, the computation processes involved shall be carried out by computer systems having sufficient processing power and memory that incorporate the appropriate algorithm.
This invention requires a photo training set and a corresponding sketch training set, denoted hereafter as AP and AS respectively, to work. Each of AP and AS has M samples, Pi and Si respectively. M can be any value larger than 1, but it is preferred to have M of around 80 or more to improve accuracy. Each of AP and AS will be used to compute the corresponding eigen space U as illustrated above.
For each training photo image P_i, there is a corresponding sketch S_i, where S_i is a column vector representation of a sample sketch with the mean sketch m_s removed. Similar to A_p = [P_1, P_2, ..., P_M] for the photo image training set, we have a corresponding sketch training set, A_s = [S_1, S_2, ..., S_M].
Photo-to-Sketch/Sketch-to-Photo Transformation and Recognition
1. Photo to Sketch Transformation
As stated above, for the conventional eigenface method, a face image can be reconstructed from the eigenfaces by

P_r = U_p b_p. (5)

Similarly, a sketch image can be reconstructed by S_r = U_s b_s, where U_s is the sketch eigenspace and b_s is the vector of projection coefficients in the sketch eigenspace. Even though the above reconstructions can be done with modern computation power, they do not by themselves correlate the two corresponding photo and sketch sets, which reduces the accuracy of photo-to-sketch/sketch-to-photo recognition significantly.
To resolve this problem, it is realized in this invention that, since U_p = A_p V_p Λ_p^(-1/2), the reconstructed photo can be represented by

P_r = U_p b_p = A_p V_p Λ_p^(-1/2) b_p = A_p c_p, (6)

where c_p = V_p Λ_p^(-1/2) b_p = [c_p1, c_p2, ..., c_pM]^T is found to be a column vector of dimension M. Equation (6) can then be rewritten in summation form,

P_r = A_p c_p = Σ_{i=1}^{M} c_pi P_i. (7)

This shows that the reconstructed photo is in fact the best approximation of the original image, with the least mean-square error, using an optimal linear combination of the M training sample images. The coefficients in c_p describe the contribution weight of each sample image. The reconstructed photos generated by this method are shown in the column labeled "Reconstructed Photo" of Figure 3.
Mapping each sample photo image P_i in equation (7) to its corresponding sketch S_i, as illustrated in Figure 2, we get

S_r = Σ_{i=1}^{M} c_pi S_i = A_s c_p = A_s V_p Λ_p^(-1/2) b_p. (8)

Given the structural resemblance between photos and sketches, it is reasonable to expect the reconstructed sketch S_r to resemble the real sketch. For such a reconstruction, a sample sketch S_i contributes more weight if its corresponding photo sample P_i contributes more weight to the reconstructed face photo. For an extreme example, if a reconstructed photo P_r has a unit weight c_pk for a particular sample photo P_k and zero weights for all other sample photos, i.e. the reconstructed photo looks exactly like the sample photo P_k, then the reconstructed sketch S_r is simply the corresponding sketch S_k. Through such a mapping, a photo image may now be transformed into a pseudo-sketch.
In summary, the photo to sketch transformation is computed through the following steps:
1. Compute the photo training set eigenvector matrix U_p by first computing the eigenvectors V_p and eigenvalues Λ_p of A_p^T A_p. This step only needs to be performed once.
2. Project P_k onto the eigenspace U_p to compute the eigenface weight vector b_p = U_p^T P_k. Additionally, c_p can be computed by c_p = V_p Λ_p^(-1/2) b_p.
3. Reconstruct the pseudo-sketch S_r by mapping A_s with c_p. The pseudo-sketch S_r may be generated by S_r = A_s c_p = A_s V_p Λ_p^(-1/2) b_p if c_p is computed.
As stated earlier, preferably, the average or mean photo image may be removed from each raw photo Q_k before computation, by computing the average photo image m_p for the photo training set and the average sketch m_s for the sketch training set. In such a case, the following additional steps may be required:
4. Remove the photo mean m_p from the input photo image Q_k to get P_k = Q_k − m_p.
5. Finally, add back the average sketch m_s to get the final viewable reconstructed sketch S_r' = A_s c_p + m_s.
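The steps above can be sketched as a single NumPy helper (a minimal illustration under the same notation; the function name and toy data are assumptions, not part of the invention):

```python
import numpy as np

def pseudo_sketch(Q_k, A_p, A_s, m_p, m_s, eps=1e-10):
    """Steps 1-5: transform one raw photo Q_k into a viewable pseudo-sketch.
    A_p / A_s are N x M mean-removed photo / sketch training matrices."""
    # Step 1: eigenvectors V_p and eigenvalues Lam_p of the small matrix A_p^T A_p
    lam, V = np.linalg.eigh(A_p.T @ A_p)
    keep = lam > eps
    lam, V = lam[keep], V[:, keep]
    U_p = (A_p @ V) / np.sqrt(lam)          # U_p = A_p V_p Lam_p^(-1/2)
    # Step 4, then step 2: remove the mean photo, project to get b_p and c_p
    b_p = U_p.T @ (Q_k - m_p)
    c_p = (V / np.sqrt(lam)) @ b_p          # c_p = V_p Lam_p^(-1/2) b_p
    # Step 3: map the sketch training set with c_p
    S_r = A_s @ c_p
    # Step 5: add back the mean sketch for a viewable result
    return S_r + m_s

# Toy example: M = 6 photo/sketch pairs of N = 64 pixels each
rng = np.random.default_rng(1)
photos, sketches = rng.random((64, 6)), rng.random((64, 6))
m_p, m_s = photos.mean(axis=1), sketches.mean(axis=1)
S = pseudo_sketch(photos[:, 2], photos - m_p[:, None],
                  sketches - m_s[:, None], m_p, m_s)
```

Step 1 is independent of the input photo, so in practice it would be computed once and reused for every photo in the gallery.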
Figure 3 shows the comparison between the real sketch and the reconstructed sketch. The similarity between the two can now be seen.
Although the above discussion refers mainly to photo-to-sketch transformation, it should be apparent that the reverse transformation can also be done by the same method. For instance, a pseudo-photo can be obtained from a training sketch set by the formula
P_r = A_p c_s = A_p V_s Λ_s^(-1/2) b_s. (9)
Sketch Recognition
After such a photo-to-sketch transformation, sketch recognition from a plurality of photos may now become easier.
The detailed algorithm can be summarized as follows:
- Use U_p, which is pre-determined from A_p, to compute the corresponding pseudo-sketch S_r for each photo G_i in the photo gallery by the sketch transformation algorithm described earlier. Note that the photo gallery does not need to be the same as the photo training set A_p.
- Recognize the most closely matched P_k in the photo gallery by comparing the pseudo-sketches S_r with the probe sketch S_k to identify the corresponding most closely matched pseudo-sketch S_rk.
The pseudo-sketches and the probe sketch can be compared using the conventional eigenface method or any other suitable method. For example, the elastic graph matching method can be used for the recognition.
As an example of one of the various comparison methods, one may first compute the eigensketch vectors using the sketch training samples. Then the probe sketch S_k and the generated pseudo-sketches S_r from the photo gallery are projected onto the eigensketch vectors. The projection coefficients are then used as feature vectors for final classification. The detailed comparison algorithm can be summarized as follows:
- Compute the eigensketch weight vector b_s = U_s^T S_k for the probe sketch S_k by projecting S_k onto the sketch eigenspace U_s.
- Compute the distance between b_s and each b_r, where S_r = U_s b_r for each pseudo-sketch generated from the photo gallery. The sketch is classified as the face with minimum distance between the two vectors.
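This comparison loop can be sketched compactly in NumPy (illustrative only; it assumes the gallery pseudo-sketches have already been generated by the photo-to-sketch transformation, and the function name is hypothetical):

```python
import numpy as np

def recognize(probe_sketch, pseudo_sketches, U_s):
    """Classify a probe sketch against gallery pseudo-sketches by nearest
    neighbour in the sketch eigenspace U_s (N x r, orthonormal columns).
    pseudo_sketches: N x G matrix, one mean-removed pseudo-sketch per
    gallery photo. Returns the index of the closest gallery entry."""
    b_s = U_s.T @ probe_sketch                   # weight vector of the probe
    B_r = U_s.T @ pseudo_sketches                # weight vectors of the gallery (r x G)
    d = np.linalg.norm(B_r - b_s[:, None], axis=0)
    return int(np.argmin(d))
```

As a sanity check, a probe identical to one of the pseudo-sketches has zero distance to it and is therefore matched to that gallery entry.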
In the above algorithm, the photos in the gallery are first transformed to pseudo-sketches S_r based on the photo eigenspace U_p, and the recognition is then conducted in the sketch eigenspace U_s. Alternatively, we can reverse the process by transforming each probe sketch S_k into a pseudo-photo P_r based on the sketch eigenspace, then use the photo eigenspace for recognition using the conventional eigenface method or any other suitable method.
For both approaches, we rely on two sets of reconstruction coefficients c_p and c_s, where c_p represents the weights for reconstructing a photo using the photo training set, and c_s represents the weights for reconstructing a sketch using the sketch training set. In fact, to compare a photo with a sketch, we can also use their corresponding reconstruction coefficients c_p and c_s directly as feature vectors for recognition.
As shown in the previous section, for an input photo, its reconstruction coefficient vector on the photo training set is c_p = V_p Λ_p^(-1/2) b_p, where b_p is the projection weight vector of the photo in the photo eigenspace. Similarly, for an input sketch, its reconstruction coefficient vector on the sketch training set is c_s = V_s Λ_s^(-1/2) b_s, where b_s is the projection weight vector of the input sketch in the sketch eigenspace. If we compare a photo with a sketch using c_p and c_s directly, the distance is defined as

d_1 = ||c_p − c_s||. (10)
If we first generate a pseudo-sketch for a photo, then calculate the distance in the sketch eigenspace, the distance is defined as d_2 = ||b_r − b_s||, where b_r is the weight vector of the generated pseudo-sketch projected in the sketch eigenspace, and b_s is the weight vector of the real sketch projected in the sketch eigenspace. Given b_r = U_s^T S_r, U_s = A_s V_s Λ_s^(-1/2), and S_r = A_s c_p, we can compute b_r as

b_r = Λ_s^(-1/2) V_s^T A_s^T A_s c_p.

Since A_s^T A_s V_s = V_s Λ_s, we have

b_r = Λ_s^(1/2) V_s^T c_p. (11)

To compute b_s, we can use the relation c_s = V_s Λ_s^(-1/2) b_s to get b_s = Λ_s^(1/2) V_s^T c_s. Finally, the distance d_2 becomes

d_2 = ||Λ_s^(1/2) V_s^T (c_p − c_s)||, (12)
where cp = contribution weight for each photo for photo reconstruction by the photo-training set, and cs = contribution weight for the probe sketch Sk in the sketch eigen space Us.
Alternatively, if we first generate a pseudo-photo for a sketch, then calculate the distance in the photo eigenspace, the distance d_3 can be obtained by

d_3 = ||Λ_p^(1/2) V_p^T (c_p − c_s)||, (13)

where cp = contribution weight of each photo for photo reconstruction by the photo-training set, and cs = contribution weight for the probe sketch Sk in the eigen space Us.
The distances for recognition are different for the three cases, and their performances will be compared in the experiments that will be described later.
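For illustration, the three distances d_1, d_2, and d_3 are all norms of c_p − c_s, differing only in the weighting matrix applied. A minimal NumPy sketch (function and variable names are assumptions; V and lam denote the eigenvector matrix and eigenvalues of the corresponding training set):

```python
import numpy as np

def d1(c_p, c_s):
    """Direct distance between reconstruction coefficients."""
    return np.linalg.norm(c_p - c_s)

def d2(c_p, c_s, V_s, lam_s):
    """Distance in the sketch eigenspace: ||Lam_s^(1/2) V_s^T (c_p - c_s)||."""
    W = np.sqrt(lam_s)[:, None] * V_s.T      # Lam_s^(1/2) V_s^T
    return np.linalg.norm(W @ (c_p - c_s))

def d3(c_p, c_s, V_p, lam_p):
    """Distance in the photo eigenspace: ||Lam_p^(1/2) V_p^T (c_p - c_s)||."""
    W = np.sqrt(lam_p)[:, None] * V_p.T      # Lam_p^(1/2) V_p^T
    return np.linalg.norm(W @ (c_p - c_s))
```

When the eigenvector matrix is the identity and all eigenvalues are 1, d_2 and d_3 reduce to d_1, which makes the role of the eigenvalue weighting explicit.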
Again, as may be obvious to one skilled in the art, the methods described can be used to match a photo Pk against a sketch gallery. Once again, we may have the two options:
a. Converting all of the sketches in the sketch gallery into pseudo-photos, and then comparing them with the photo Pk. The comparison can be done by comparing bp and br, where bp = projection coefficients of Pk onto Up, and br = projection coefficients of each of the pseudo-photos onto Up. The distance formula (12) should now be rewritten to

d4 = ||Λp^(1/2) Vp^T (cp − cs)||,

where cp = contribution weight vector for Pk in Up, and cs = contribution weight vector for each of the sketches in the sketch gallery in Us.

b. Converting the photo Pk into a pseudo-sketch Sr, then comparing it with all of the sketches in the sketch gallery. The comparison can be done by comparing bs and br, where bs = projection coefficients of each of the sketches in the sketch gallery onto Us, and br = projection coefficients of the pseudo-sketch Sr onto Us. The distance formula (12) should now be rewritten to

d5 = ||Λs^(1/2) Vs^T (cs − cp)||,

where cp = contribution weight vector for reconstructing Pk in Up, and cs = contribution weight vector of each of the sketches for sketch reconstruction in Us.
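Option (b) above can be sketched as a gallery-ranking loop using the d5 distance. This hedged NumPy illustration assumes images are stored as mean-centred columns; the function name and the small eigenvalue floor are assumptions for the example, not part of the disclosure.

```python
import numpy as np

def match_photo_to_sketch_gallery(Ap, As, photo, gallery_sketches):
    """Rank gallery sketches against a probe photo with
    d5 = ||Lambda_s^(1/2) Vs^T (cs - cp)||; smaller means a closer match."""
    lam_p, Vp = np.linalg.eigh(Ap.T @ Ap)          # photo training eigensystem
    lam_s, Vs = np.linalg.eigh(As.T @ As)          # sketch training eigensystem
    lam_p = np.clip(lam_p, 1e-10, None)            # guard degenerate modes
    lam_s = np.clip(lam_s, 1e-10, None)
    Up = Ap @ Vp / np.sqrt(lam_p)                  # photo eigenspace
    Us = As @ Vs / np.sqrt(lam_s)                  # sketch eigenspace
    cp = Vp @ ((Up.T @ photo) / np.sqrt(lam_p))    # photo contribution weights
    d5 = [np.linalg.norm(np.sqrt(lam_s) *
                         (Vs.T @ (Vs @ ((Us.T @ S) / np.sqrt(lam_s)) - cp)))
          for S in gallery_sketches]               # one distance per sketch
    return np.argsort(d5)                          # gallery indices, best first
```

A sketch synthesized from the same contribution weights as the probe photo yields d5 = 0 and therefore ranks first, which mirrors the intended matching behavior.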
Experiments
In order to demonstrate the effectiveness of the new algorithm, a set of experiments was conducted to compare it with the geometrical measures and the conventional eigenface method. A database containing 188 photo-sketch pairs from 188 different people was used for the experiment. Eighty-eight photo-sketch pairs were used as training data, and the other 100 photo-sketch pairs were used for testing.
A conventional geometrical method is used in the experiments. The recognition test protocol used in FERET [5] is adopted: the gallery set used in the experiment consists of 100 face photos, and the probe set consists of 100 face sketches. The cumulative match score is used to evaluate the performance of the algorithms. It measures the percentage of probes for which the correct answer is in the top n matches, where n is called the rank.
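The cumulative match score described above can be computed from the ranked gallery orderings as follows. This is a small illustrative helper (the function name and list-based input format are assumptions), not part of the patented method.

```python
import numpy as np

def cumulative_match_scores(rank_lists, true_ids, max_rank=10):
    """rank_lists[i] is the gallery ordering returned for probe i (best
    first); true_ids[i] is the index of the correct gallery entry.
    Returns, for each rank n = 1..max_rank, the fraction of probes whose
    correct answer appears in the top n matches."""
    hits = np.zeros(max_rank)
    for order, truth in zip(rank_lists, true_ids):
        pos = list(order).index(truth)        # 0-based position of the truth
        if pos < max_rank:
            hits[pos:] += 1                   # counts for every rank >= pos+1
    return hits / len(true_ids)
```

By construction the scores are non-decreasing in the rank, which is why the tables later report steadily rising percentages from rank 1 to rank 10.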
A. Comparison with Traditional Methods

Table 1 shows the cumulative match scores of the first ten ranks for the three methods.
Table 1. Cumulative match score for the three methods.
Both the geometrical method and the eigenface method perform poorly in the experiment. Only around 30% accuracy is obtained for the first match, and the accuracy for the tenth-rank match is 70%. The poor performance of the eigenface method is to be expected given the large differences between photo and sketch. As for the geometrical measure, the results show that the reason a photo and a sketch look alike is not mainly the geometrical similarity of the facial components. Like a caricature, a sketch exaggerates the sizes of facial components. If a person has a larger than average nose, the sketch will depict an even larger nose. Conversely, if a person has a smaller than normal nose, he will be drawn with a nose of further reduced size. The results demonstrate the effect of such exaggeration.
The eigensketch transform method greatly improves the recognition accuracy to 96% for the top ten matches. The first-match accuracy more than doubles that of the other two methods. This clearly shows the advantage of the new approach. The results also depend on the quality of the sketch drawings, which are preferably prepared by the same draftsman to improve accuracy. As shown in Fig. 1, not all sketches look exactly like the original photo. The first row of sketches in Fig. 1 is quite similar to the corresponding photos, yet the sketches in the second row are much less so. The significance of the results lies in the large gap between the new methods and the traditional face recognition methods.
B. Comparison of the Three Distance Measures
In this section, we conduct a set of experiments to compare the performance of the three distance measures d1, d2, and d3 as described above. The same dataset described above is used for the comparison. Experimental results are shown in Table 2.

Table 2. Cumulative match score using three different distances.
From the results one can see that d1 = ||cp − cs|| is the least effective among the three distances. This is not surprising, since both cp and cs represent coefficients projected into non-orthogonal spaces spanned by the training photos and sketches respectively, and therefore cannot properly reflect the distance between face images. Both d2 and d3 are distances computed in orthogonal eigenspaces and thus give much better performance. An interesting observation is that d2 is consistently better than d3. This seems to suggest that the sketch eigenspace can characterize the differences among different people better than the photo eigenspace. This may be possible since, in the drawing process, an artist tends to capture and highlight the distinct characteristics of a face, thus making it easier to distinguish. The above experiment seems to confirm this point, since ||cp − cs|| gives better recognition performance after projection into the sketch eigenspace than into the photo eigenspace.
There may be another explanation for the better performance of d2. In order to compute d2, a photo needs to be transformed into a pseudo-sketch, while to compute d3, a sketch has to be converted into a pseudo-photo. In general, compressing more information into a smaller compact representation is more stable than enlarging a compact representation into a full representation. Since photos contain much more detailed information than sketches, it should be easier to convert a photo into a sketch. For an extreme example, suppose the sketch contains only some simple outlines of facial features: it is quite easy to draw the outlines from the face photo, but it will be very difficult to reconstruct the photo from the simple line drawings. Therefore, for the d2 computation, better performance is achieved because of the more stable photo-to-sketch transformation.
C. Comparison with Human Performance

Two experiments were conducted to compare the new method with sketch recognition by human beings. Such a comparison is important since, in current law enforcement applications, the sketch of a suspect is usually widely distributed through the mass media. It is expected that a match with the real person can be found by people who have seen the sketch. If we can demonstrate that automatic recognition by computers can perform as effectively as human beings, we can then use computers to systematically conduct large-scale searches in a large photo-ID database.
In the first experiment, a sketch is shown to a human test candidate for a period of time, then the sketch is taken away before the photo search starts. The candidate tries to memorize the sketch, then goes on to search the photo database without the sketch reference in front of him. The candidate can go through the database and is allowed to select up to 10 photos that are similar to the sketch. He can then rank the selected photos according to their similarity to the sketch. This is closer to the real application scenario, since people usually see the sketch of a criminal suspect in a newspaper or on TV briefly, and then have to rely on their memory to match the sketch with the suspect in real life.
For the second experiment, we allow the test candidates to look at the sketch while they search through the photo database. The result can be considered a benchmark for the automatic recognition system to match. Experimental results of both tests are shown in Fig. 4. The human performance in the first experiment is much lower than the computer recognition result. This is not only because of the difference between photo and sketch, but also because of memory distortion, since it is difficult to memorize the sketch precisely. In fact, people are very good at distinguishing familiar faces, such as relatives and famous public figures, but are not very good at distinguishing strangers. Without putting the sketch and photo together for detailed comparison, it is hard for a person to match the two. When the candidate is allowed to see the sketch while searching through the database, the accuracy rate rises to 73%, which is comparable to the computer recognition rate. However, unlike the computer recognition rate, which increases to 96% for the tenth rank, the human performance does not increase much with the rank. These encouraging results show that a computer can perform sketch matching with accuracy at least comparable to that obtained by a human being. Given this, we can now perform automatic searching of a large database using a sketch just as with a regular photo. This is extremely important for law enforcement applications where a photo is often not available.
A novel face sketch recognition algorithm is developed in this invention, utilizing a novel photo-to-sketch/sketch-to-photo transformation. The photo-to-sketch transformation method is shown to be a more effective approach for automatic matching between a photo and a sketch. Surprisingly, the recognition performance of the new approach can be even better than that of human beings, not to mention the improvement in speed and efficiency.
Inclusion of hair portion in the photos or sketches may enhance accuracy, as the hair portion is an additional characterizing feature of a human face. However, this may not be desirable in some cases as hairstyle can be changed relatively easily. It may be a matter of design choice to be left to the system operator or designer to decide.
Although the above discussion focuses on human face sketch-photo and/or photo-sketch recognition, it may be obvious to one skilled in the art that the methods of this invention can also be used for other types of recognition, say buildings, animals, or other objects. Even though it is believed that the major application may lie in law enforcement, use in other areas may be possible.
While the preferred embodiment of the present invention has been described in detail by the examples, it is apparent that modifications and adaptations of the present invention will occur to those skilled in the art. It is to be expressly understood, however, that such modifications and adaptations are within the scope of the present invention, as set forth in the following claims. Furthermore, the embodiments of the present invention shall not be interpreted to be restricted by the examples or figures only.
References

P. J. Benson and D. I. Perrett, "Perception and recognition of photographic quality facial caricatures: implications for the recognition of natural images," European Journal of Cognitive Psychology, vol. 3, no. 1, pp. 105-135, 1991.

V. Bruce, E. Hanna, N. Dench, P. Healy, and A. M. Burton, "The importance of 'mass' in line drawings of faces," Applied Cognitive Psychology, vol. 6, pp. 619-628, 1992.

R. Brunelli and T. Poggio, "Face recognition: features versus templates," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, Oct. 1993.

M. Lades, J. C. Vorbrüggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Würtz, and W. Konen, "Distortion invariant object recognition in the dynamic link architecture," IEEE Trans. on Computers, vol. 42, no. 3, pp. 300-311, March 1993.

H. Moon and P. J. Phillips, "Analysis of PCA-based face recognition algorithms," in Empirical Evaluation Techniques in Computer Vision, K. W. Bowyer and P. J. Phillips, Eds., IEEE Computer Society Press, Los Alamitos, CA, 1998.

P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, "The FERET evaluation," in Face Recognition: From Theory to Applications, H. Wechsler, P. J. Phillips, V. Bruce, F. F. Soulie, and T. S. Huang, Eds., Berlin: Springer-Verlag, 1998.

L. Wiskott, J. Fellous, N. Krüger, and C. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, July 1997.

J. Zhang, Y. Yan, and M. Lades, "Face recognition: Eigenface, elastic matching, and neural nets," Proceedings of the IEEE, vol. 85, no. 9, pp. 1423-1435, Sept. 1997.

M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.

Claims

1. A method for generating a pseudo-sketch Sr for a photo Pk using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigenspace Up is computed from Ap, said method including the steps of: a) projecting Pk onto Up to compute the projection coefficient vector bp, such that Pk = Up bp; and b) mapping As with bp to generate Sr.
2. The method of Claim 1 further including the steps of:
a) calculating cp = Vp Λp^(-1/2) bp = [cp1, cp2, ..., cpM]^T, cpi = contribution weight of each photo Pi, to reconstruct Pk by Pk = Ap cp = Σ_{i=1}^{M} cpi Pi, where
Vp = unit eigenvector matrix of Ap^T Ap
Λp = eigenvalue matrix of Ap^T Ap; and
b) generating Sr by Sr = As Vp Λp^(-1/2) bp = As cp = Σ_{i=1}^{M} cpi Si.
3. The method of Claim 1, wherein M ≥ 80.
4. The method of Claim 1, wherein all of the sketches of As are prepared by a single draftsman.
5. The method of Claim 1, wherein
Pi = Qi − mp, where Qi = raw photo of Pi and mp = mean photo, mp = (1/M) Σ_{i=1}^{M} Qi; and
Si = Ti − ms, where Ti = raw sketch of Si and ms = mean sketch, ms = (1/M) Σ_{i=1}^{M} Ti.
6. The method of Claim 5 further including the step of generating a viewable pseudo-sketch Tr by Tr = Sr + ms.
7. A method for generating a pseudo-photo Pr for a sketch Sk using a sketch set As and a corresponding photo set Ap having M samples of Si and Pi respectively, such that As = [S1, S2, ..., SM] and Ap = [P1, P2, ..., PM], wherein a sketch eigenspace Us is computed from As, said method including the steps of: a) projecting Sk onto Us to compute the projection coefficient vector bs, such that Sk = Us bs; and b) mapping Ap with bs to generate Pr.
8. The method of Claim 7 further including the steps of:
a) calculating cs = Vs Λs^(-1/2) bs = [cs1, cs2, ..., csM]^T, csi = contribution weight of each sketch Si, to reconstruct Sk by Sk = As cs = Σ_{i=1}^{M} csi Si, where
Vs = unit eigenvector matrix of As^T As
Λs = eigenvalue matrix of As^T As; and
b) generating Pr by Pr = Ap Vs Λs^(-1/2) bs = Ap cs = Σ_{i=1}^{M} csi Pi.
9. The method of Claim 7, wherein M ≥ 80.
10. The method of Claim 7, wherein all of the sketches of As are prepared by a single draftsman.
11. The method of Claim 7, wherein
Pi = Qi − mp, where Qi = raw photo of Pi and mp = mean photo, mp = (1/M) Σ_{i=1}^{M} Qi; and
Si = Ti − ms, where Ti = raw sketch of Si and ms = mean sketch, ms = (1/M) Σ_{i=1}^{M} Ti.
12. The method of Claim 11 further including the step of generating a viewable pseudo-photo Qr by Qr = Pr + mp.
13. A method for matching a sketch Sk with a most closely matched photo Pk in a photo gallery having a plurality of photos each denoted by PGi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigenspace Up and a sketch eigenspace Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-sketch Sr for each of the photos PGi in the photo gallery by
a) projecting PGi onto Up to compute the projection coefficient vector bp, such that PGi = Up bp; and
b) mapping As with bp to generate Sr; and
- recognizing the most closely matched Pk in the photo gallery by comparing the pseudo-sketches Sr with Sk to identify the corresponding most closely matched pseudo-sketch SrK.
14. The method of Claim 13, wherein M ≥ 80.
15. The method of Claim 13, wherein all of the sketches of As are prepared by a single draftsman.
16. The method of Claim 13, wherein
Pi = Qi − mp, where Qi = raw photo of Pi and mp = mean photo, mp = (1/M) Σ_{i=1}^{M} Qi;
Si = Ti − ms, where Ti = raw sketch of Si and ms = mean sketch, ms = (1/M) Σ_{i=1}^{M} Ti; and
PGi = QGi − mp, where QGi = raw photo of PGi.
17. The method of Claim 13, wherein the most closely matched pseudo-sketch SrK is recognized by
- for each pseudo-sketch Sr, projecting Sr onto Us to compute the corresponding projection coefficient vector br by Sr = Us br;
- projecting Sk onto Us to compute the projection coefficient vector bs by Sk = Us bs; and
- recognizing the most closely matched Pk by identifying the pseudo-sketch SrK with the least difference between the two coefficient vectors bs and br.
18. The method of Claim 13, wherein the pseudo-sketch Sr is generated for each photo PGi in the photo gallery by further including the steps of:
a) calculating cp = Vp Λp^(-1/2) bp = [cp1, cp2, ..., cpM]^T, cpi = contribution weight of each photo Pi in the photo set Ap, to reconstruct PGi by PGi = Ap cp = Σ_{i=1}^{M} cpi Pi, where
Vp = unit eigenvector matrix of Ap^T Ap
Λp = eigenvalue matrix of Ap^T Ap; and
b) generating Sr by Sr = As Vp Λp^(-1/2) bp = As cp = Σ_{i=1}^{M} cpi Si.
19. The method of Claim 18, wherein the most closely matched pseudo-sketch SrK is recognized by
- projecting Sk onto Us to compute the projection coefficient vector bs by Sk = Us bs;
- calculating cs = Vs Λs^(-1/2) bs = [cs1, cs2, ..., csM]^T, csi = contribution weight of each sketch Si in the sketch set As, such that Sk = As cs = Σ_{i=1}^{M} csi Si, where
Vs = unit eigenvector matrix of As^T As
Λs = eigenvalue matrix of As^T As; and
- recognizing the most closely matched Pk by identifying the pseudo-sketch SrK with the least value of d2 according to the formula
d2 = ||Λs^(1/2) Vs^T (cp − cs)||.
20. A method for matching a sketch Sk with a most closely matched photo Pk in a photo gallery having a plurality of photos each denoted by PGi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigenspace Up and a sketch eigenspace Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-photo Pr for Sk by
a) projecting Sk onto Us to compute the projection coefficient vector bs, such that Sk = Us bs; and
b) mapping Ap with bs to generate Pr; and
- identifying the most closely matched Pk by comparing the pseudo-photo Pr with the photos in the photo gallery.
21. The method of Claim 20, wherein M ≥ 80.
22. The method of Claim 20, wherein all of the sketches of As are prepared by a single draftsman.
23. The method of Claim 20, wherein
Pi = Qi − mp, where Qi = raw photo of Pi and mp = mean photo, mp = (1/M) Σ_{i=1}^{M} Qi;
Si = Ti − ms, where Ti = raw sketch of Si and ms = mean sketch, ms = (1/M) Σ_{i=1}^{M} Ti; and
PGi = QGi − mp, where QGi = raw photo of PGi.
24. The method of Claim 20, wherein the most closely matched photo Pk is recognized by
- for each photo PGi in the photo gallery, projecting PGi onto Up to compute the corresponding projection coefficient vector bp by PGi = Up bp;
- projecting the pseudo-photo Pr onto Up to compute the corresponding projection coefficient vector br by Pr = Up br; and
- recognizing the most closely matched Pk with the least difference between the two coefficient vectors br and bp.
25. The method of Claim 24 further including the steps of:
- calculating cs = Vs Λs^(-1/2) bs = [cs1, cs2, ..., csM]^T, csi = contribution weight of each sketch Si in the sketch set As, to reconstruct Pr by Pr = Ap cs = Σ_{i=1}^{M} csi Pi, where
Vs = unit eigenvector matrix of As^T As
Λs = eigenvalue matrix of As^T As;
- for each photo PGi in the photo gallery, calculating cp = Vp Λp^(-1/2) bp = [cp1, cp2, ..., cpM]^T, cpi = contribution weight of each photo Pi in the photo set Ap, to reconstruct PGi by PGi = Ap cp = Σ_{i=1}^{M} cpi Pi, where
Vp = unit eigenvector matrix of Ap^T Ap
Λp = eigenvalue matrix of Ap^T Ap; and
- recognizing the most closely matched Pk by the least value of d3 according to the formula
d3 = ||Λp^(1/2) Vp^T (cp − cs)||.
26. A method for matching a photo Pk with a most closely matched sketch Sk in a sketch gallery having a plurality of sketches each denoted by SGi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigenspace Up and a sketch eigenspace Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-photo Pr for each of the sketches SGi in the sketch gallery by
a) projecting SGi onto Us to compute the projection coefficient vector bs, such that SGi = Us bs; and
b) mapping Ap with bs to generate Pr; and
- recognizing the most closely matched Sk in the sketch gallery by comparing the pseudo-photos Pr with the photo Pk to identify the corresponding most closely matched pseudo-photo PrK.
27. The method of Claim 26, wherein M ≥ 80.
28. The method of Claim 26, wherein all of the sketches of As are prepared by a single draftsman.
29. The method of Claim 26, wherein
Pi = Qi − mp, where Qi = raw photo of Pi and mp = mean photo, mp = (1/M) Σ_{i=1}^{M} Qi;
Si = Ti − ms, where Ti = raw sketch of Si and ms = mean sketch, ms = (1/M) Σ_{i=1}^{M} Ti; and
SGi = TGi − ms, where TGi = raw sketch of SGi.
30. The method of Claim 26, wherein the most closely matched pseudo-photo PrK is recognized by
- for each pseudo-photo Pr, projecting Pr onto Up to compute the corresponding projection coefficient vector br by Pr = Up br;
- projecting Pk onto Up to compute the projection coefficient vector bp by Pk = Up bp; and
- recognizing the most closely matched Sk by identifying the pseudo-photo PrK with the least difference between the two coefficient vectors bp and br.
31. The method of Claim 26, wherein the pseudo-photo Pr is generated for each sketch SGi in the sketch gallery by further including the steps of:
a) calculating cs = Vs Λs^(-1/2) bs = [cs1, cs2, ..., csM]^T, csi = contribution weight of each sketch Si in the sketch set As, to reconstruct SGi by SGi = As cs = Σ_{i=1}^{M} csi Si, where
Vs = unit eigenvector matrix of As^T As
Λs = eigenvalue matrix of As^T As; and
b) generating Pr by Pr = Ap Vs Λs^(-1/2) bs = Ap cs = Σ_{i=1}^{M} csi Pi.
32. The method of Claim 31, wherein the most closely matched pseudo-photo PrK is recognized by
- projecting Pk onto Up to compute the projection coefficient vector bp by Pk = Up bp;
- calculating cp = Vp Λp^(-1/2) bp = [cp1, cp2, ..., cpM]^T, cpi = contribution weight of each photo Pi in the photo set Ap, such that Pk = Ap cp = Σ_{i=1}^{M} cpi Pi, where
Vp = unit eigenvector matrix of Ap^T Ap
Λp = eigenvalue matrix of Ap^T Ap; and
- recognizing the most closely matched Sk by identifying the pseudo-photo PrK with the least value of d4 according to the formula
d4 = ||Λp^(1/2) Vp^T (cp − cs)||.
33. A method for matching a photo Pk with a most closely matched sketch Sk in a sketch gallery having a plurality of sketches each denoted by SGi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigenspace Up and a sketch eigenspace Us are computed from Ap and As respectively, said method including the steps of:
- generating a pseudo-sketch Sr for Pk by
a) projecting Pk onto Up to compute the projection coefficient vector bp, such that Pk = Up bp; and
b) mapping As with bp to generate Sr; and
- identifying the most closely matched Sk by comparing the pseudo-sketch Sr with the sketches in the sketch gallery.
34. The method of Claim 33, wherein M ≥ 80.
35. The method of Claim 33, wherein all of the sketches of As are prepared by a single draftsman.
36. The method of Claim 33, wherein
Pi = Qi − mp, where Qi = raw photo of Pi and mp = mean photo, mp = (1/M) Σ_{i=1}^{M} Qi;
Si = Ti − ms, where Ti = raw sketch of Si and ms = mean sketch, ms = (1/M) Σ_{i=1}^{M} Ti; and
SGi = TGi − ms, where TGi = raw sketch of SGi.
37. The method of Claim 33, wherein the most closely matched sketch Sk is recognized by
- for each sketch SGi, projecting SGi onto Us to compute the corresponding projection coefficient vector bs by SGi = Us bs;
- projecting the pseudo-sketch Sr onto Us to compute the corresponding projection coefficient vector br by Sr = Us br; and
- recognizing the most closely matched Sk with the least difference between the two coefficient vectors br and bs.
38. The method of Claim 37 further including the steps of:
a) calculating cp = Vp Λp^(-1/2) bp = [cp1, cp2, ..., cpM]^T, cpi = contribution weight of each photo Pi in the photo set Ap, to reconstruct Sr by Sr = As cp = Σ_{i=1}^{M} cpi Si, where
Vp = unit eigenvector matrix of Ap^T Ap
Λp = eigenvalue matrix of Ap^T Ap;
b) for each sketch SGi in the sketch gallery, calculating cs = Vs Λs^(-1/2) bs = [cs1, cs2, ..., csM]^T, csi = contribution weight of each sketch Si in the sketch set As, to reconstruct SGi by SGi = As cs = Σ_{i=1}^{M} csi Si, where
Vs = unit eigenvector matrix of As^T As
Λs = eigenvalue matrix of As^T As; and
- recognizing the most closely matched Sk by the least value of d5 according to the formula
d5 = ||Λs^(1/2) Vs^T (cs − cp)||.
39. A computer system for generating a pseudo-sketch Sr for a photo Pk using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigenspace Up is computed from Ap, incorporating the algorithm set forth in the method of Claim 1.
40. A computer system for generating a pseudo-photo Pr for a sketch Sk using a sketch set As and a corresponding photo set Ap having M samples of Si and Pi respectively, such that As = [S1, S2, ..., SM] and Ap = [P1, P2, ..., PM], wherein a sketch eigenspace Us is computed from As, incorporating the algorithm set forth in the method of Claim 7.
41. A computer system for matching a sketch Sk with a most closely matched photo Pk in a photo gallery having a plurality of photos each denoted by PGi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigenspace Up and a sketch eigenspace Us are computed from Ap and As respectively, incorporating the algorithm set forth in the method of Claim 13 or 20.
42. A computer system for matching a photo Pk with a most closely matched sketch Sk in a sketch gallery having a plurality of sketches each denoted by SGi, using a photo set Ap and a corresponding sketch set As having M samples of Pi and Si respectively, such that Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], wherein a photo eigenspace Up and a sketch eigenspace Us are computed from Ap and As respectively, incorporating the algorithm set forth in the method of Claim 26 or 33.