CN1701339A - Portrait-photo recognition - Google Patents
- Publication number
- CN1701339A
- Authority
- CN
- China
- Legal status
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Abstract
The invention provides a new portrait-based photo retrieval system. The new method greatly reduces the difference between photos and portraits and matches the two effectively. Experimental data confirm the effectiveness of the algorithm.
Description
Technical field
For the judicial system, automatic retrieval and recognition of face photos in police photo databases is of crucial importance: it can help investigators confirm a suspect or effectively narrow the range of suspects. In most cases, however, a photo of the suspect is not available. The best substitute is a portrait of the suspect drawn from an eyewitness's description.
The invention relates to a method of using eigenfaces to find, in a photo database, the photo matching a given portrait, or, in a portrait database, the portrait matching a given photo.
Background technology
In recent years, driven by growing application demand in fields such as law enforcement, video surveillance, banking, and security systems, automatic face recognition has attracted wide attention. Compared with other technologies (such as fingerprint recognition), the advantages of face recognition are that it is easy to use and low in cost: operators can correct the recognition errors of a face recognition system without special training.
An important application of face recognition technology is assisting judicial departments in solving cases. For example, automatic retrieval of photos from a police photo database can help quickly narrow the range of suspects. In most cases, however, the judicial department cannot obtain a photo of the suspect; the best substitute is a portrait of the suspect drawn from an eyewitness's description. Using a portrait to retrieve the corresponding photo from a database has great potential value, because it can not only help the police find a suspect, but also help the eyewitness and the artist refine the portrait using photos retrieved from the database.
Despite the important practical demand for portrait-photo retrieval systems, research in this field is scarce [1][2]. This may be because building a large database of face portraits is very difficult.
Two traditional methods have been used for matching between photos in a database. They are described below.
A. Geometric feature method
The geometric feature method is the most direct approach. Research on face recognition based on geometric features mostly concentrates on extracting facial features, such as the relative positions of the eyes, mouth, and chin, and other parameters. Although the geometric feature method is easy to understand, it cannot capture enough information for stable face recognition. In particular, geometric features change with facial expression and scale; even different photos of the same person may differ greatly in their geometric features. A recent article comparing the geometric feature method with template matching reached a conclusion favoring template matching [3].
B. Eigenface method
At present, one of the most successful face recognition methods may be the eigenface method [9]. The FERET test report, after a comprehensive comparison of various methods, classified it as one of the most effective [6]; a similar conclusion can be found in Zhang et al. [8]. Although the eigenface method is affected by illumination, expression, and pose, these factors matter little for the recognition of standard identification photographs.
The eigenface method characterizes faces with the Karhunen-Loeve Transform (KLT) for recognition. Once the eigenvectors of the covariance matrix of a face data set, also called eigenfaces, have been obtained, a face image can be reconstructed as a suitably weighted linear combination of the eigenfaces. For a given image, its weighting coefficients on the eigenfaces form a feature vector. For a new test image, the coefficients are computed by projecting the image onto each eigenface. Classification is then performed according to the distance between the coefficient vector of the test image and the coefficient vectors of the images in the database.
Although the Karhunen-Loeve Transform is described in many textbooks and papers, we briefly review it here in the context of photo recognition. To compute the transform, each sample face image is represented as a column vector Pi. The mean face is mp = (1/M)·ΣPi, where M is the number of training samples in the photo set Ap. The mean face is subtracted from each image, and the centered photo training set forms an N × M matrix Ap = [P1, P2, …, PM], where N is the number of pixels in an image. The sample covariance matrix is estimated as

W = Ap·Ap^T        (1)

where Ap^T is the transpose of Ap.
Since photo images contain a very large amount of data, directly computing the eigenvectors of W is impractical with presently available computing capacity, so the principal eigenvectors are usually estimated instead. Because the number of sample images M is relatively small, the rank of W is M−1. We therefore first compute the eigenvectors of the smaller matrix Ap^T·Ap:

(Ap^T·Ap)·Vp = Vp·Λp        (2)

where Vp is the unit eigenvector matrix and Λp is the diagonal eigenvalue matrix. Multiplying both sides by Ap, we obtain

(Ap·Ap^T)·Ap·Vp = Ap·Vp·Λp        (3)

so the orthonormal eigenvector matrix of the covariance matrix W, i.e. the photo feature space, is Up = Ap·Vp·Λp^(-1/2). For a new face photo P, the coefficients of its projection onto the eigenvector space form the vector bp = Up^T·(P − mp), which is used as the feature vector for classification.
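As a concrete illustration, the small-matrix computation of equations (1)-(3) can be sketched in a few lines of NumPy. This is only an illustrative sketch on random stand-in data; the variable names follow the notation of the text, not any reference implementation.

```python
import numpy as np

def eigenface_space(A_p):
    """Eigenface space Up from an N x M matrix of mean-subtracted
    training photos (one column per image), via eq. (2)-(3)."""
    # Eigen-decompose the small M x M matrix Ap^T Ap  (eq. 2)
    eigvals, V_p = np.linalg.eigh(A_p.T @ A_p)
    # Keep strictly positive eigenvalues (the rank is at most M-1)
    keep = eigvals > eigvals.max() * 1e-10
    eigvals, V_p = eigvals[keep], V_p[:, keep]
    # Lift to eigenvectors of W = Ap Ap^T and normalise:
    # Up = Ap Vp Lambda_p^(-1/2)
    return A_p @ V_p / np.sqrt(eigvals)

# Toy data standing in for a photo training set: N = 1024 pixels, M = 8
rng = np.random.default_rng(0)
photos = rng.standard_normal((1024, 8))
A_p = photos - photos.mean(axis=1, keepdims=True)  # subtract the mean face
U_p = eigenface_space(A_p)
# The columns of Up are orthonormal eigenvectors of W = Ap Ap^T
print(np.allclose(U_p.T @ U_p, np.eye(U_p.shape[1])))  # True
```

Projecting a new photo P then amounts to `b_p = U_p.T @ (P - mean_face)`, and those coefficients serve as the feature vector for classification.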
However, because of the great difference between face photos and portraits, directly applying the eigenface method to portrait-based photo recognition may not give good results. In general, the difference between a photo and a portrait of the same person is greater than that between two photos of different people.
Summary of the invention
One object of the present invention is to provide a method or system that matches portraits and photos more effectively.
Another object of the present invention is to overcome the problems of one or more earlier methods, or at least to offer the public a useful alternative.
To achieve the above objects, the invention provides a method of generating a pseudo-portrait Sr for a photo Pk using a photo training set Ap and a corresponding portrait training set As. Ap and As each contain M samples Pi and Si, written Ap = [P1, P2, …, PM] and As = [S1, S2, …, SM], and the photo feature space Up is computed from Ap. The method comprises the following steps:
a) project Pk onto Up to compute the projection coefficient vector bp, so that Pk = Up·bp;
b) generate Sr from As and bp.
Another aspect of the present invention provides a method of generating a pseudo-photo Pr for a portrait Sk using a portrait training set As and a corresponding photo training set Ap. As and Ap each contain M samples Si and Pi, written As = [S1, S2, …, SM] and Ap = [P1, P2, …, PM], and the portrait feature space Us is computed from As. The method comprises the following steps:
a) project Sk onto Us to compute the projection coefficient vector bs, so that Sk = Us·bs;
b) generate Pr from Ap and bs.
A further aspect of the present invention uses a photo training set Ap and a corresponding portrait training set As to select, from a photo gallery containing a large number of photos each denoted PGi, the photo Pk that best matches a portrait Sk. Ap and As each contain M samples Pi and Si, written Ap = [P1, P2, …, PM] and As = [S1, S2, …, SM], and the photo feature space Up and portrait feature space Us are computed from Ap and As respectively. The method comprises the following steps:
- for every photo PGi in the gallery, generate a pseudo-portrait Sr by
a) projecting PGi onto Up to compute the projection coefficient vector bp, so that PGi = Up·bp;
b) generating Sr from As and bp;
- identify the best-matching pseudo-portrait Srk by comparing the pseudo-portraits Sr with Sk; the photo corresponding to Srk in the gallery is the photo Pk that best matches the portrait Sk.
The fourth aspect of the present invention uses a photo training set Ap and a corresponding portrait training set As to select, from a photo gallery containing a large number of photos each denoted PGi, the photo Pk that best matches a portrait Sk. Ap and As each contain M samples Pi and Si, written Ap = [P1, P2, …, PM] and As = [S1, S2, …, SM], and the feature spaces Up and Us are computed from Ap and As respectively. The method comprises the following steps:
- generate a pseudo-photo Pr for the portrait Sk by
a) projecting Sk onto Us to compute the projection coefficient vector bs, so that Sk = Us·bs;
b) generating Pr from Ap and bs;
- find the photo Pk that best matches the pseudo-photo Pr by comparing Pr with the photos in the gallery.
The fifth aspect of the present invention uses a photo training set Ap and a corresponding portrait training set As to select, from a portrait gallery containing a large number of portraits each denoted SGi, the portrait Sk that best matches a photo Pk. Ap and As each contain M samples Pi and Si, written Ap = [P1, P2, …, PM] and As = [S1, S2, …, SM], and the feature spaces Up and Us are computed from Ap and As respectively. The method comprises the following steps:
- for every portrait SGi in the gallery, generate a pseudo-photo Pr by
a) projecting SGi onto Us to compute the projection coefficient vector bs, so that SGi = Us·bs;
b) generating Pr from Ap and bs;
- identify the best-matching pseudo-photo Prk by comparing the pseudo-photos Pr with Pk; the portrait corresponding to Prk in the gallery is the portrait Sk that best matches the photo Pk.
The sixth aspect of the present invention uses a photo training set Ap and a corresponding portrait training set As to select, from a portrait gallery containing a large number of portraits each denoted SGi, the portrait Sk that best matches a photo Pk. Ap and As each contain M samples Pi and Si, written Ap = [P1, P2, …, PM] and As = [S1, S2, …, SM], and the feature spaces Up and Us are computed from Ap and As respectively. The method comprises the following steps:
- generate a pseudo-portrait Sr for the photo Pk by
a) projecting Pk onto Up to compute the projection coefficient vector bp, so that Pk = Up·bp;
b) generating Sr from As and bp;
- find the portrait Sk that best matches the pseudo-portrait Sr by comparing Sr with the portraits in the gallery.
The invention also covers implementing any of the above algorithms on a computer system.
Various options and variations of the invention are described in the following sections so that those familiar with the art can understand them.
The technical scheme of the invention is described in further detail below with reference to the drawings and embodiments.
Description of drawings
Fig. 1 shows examples of the face photos (top two rows) and portraits (bottom two rows) used in the invention.
Fig. 2 shows the algorithm of the invention for converting a photo into a portrait.
Fig. 3 shows examples of photo-to-portrait and portrait-to-photo transformation according to the invention.
Fig. 4 compares the cumulative match rates of different automatic recognition methods of the invention and of human recognition.
Embodiment
The method of the invention is described below through preferred embodiments and the accompanying diagrams.
Although not elaborated above, those of ordinary skill in the art will understand that the portraits and photos are digitized at a suitable image resolution with input devices such as scanners or digital cameras, and that the computer system used to execute the programs has sufficient processing capability and storage space.
The invention requires a photo training set and a corresponding portrait training set, denoted Ap and As respectively. Ap and As each contain M samples Pi and Si. Although M can be any value greater than 1, preferably M ≥ 80 to improve accuracy. Ap and As are used to compute the corresponding feature spaces mentioned above.
For each training photo image Pi there is a corresponding portrait Si, a column vector obtained by subtracting the mean portrait ms from the sample portrait. Analogous to the photo training set Ap, we have the corresponding portrait training set As = [S1, S2, …, SM].
Photo-to-portrait / portrait-to-photo conversion and recognition
1. Converting a photo into a portrait
As mentioned above, with the traditional eigenface method a face image can be reconstructed from the eigenfaces as Pr = Up·bp, where Up is the photo feature space and bp is the projection coefficient vector of the photo in that space. Similarly, a portrait can be reconstructed as Sr = Us·bs, where Us is the portrait feature space and bs is the projection coefficient vector of the portrait in the portrait feature space. It is difficult, however, to relate the projection coefficients of corresponding photos and portraits across the two feature spaces, which greatly reduces photo-to-portrait and portrait-to-photo recognition ability.

To address this problem, the present invention exploits the relation Up = Ap·Vp·Λp^(-1/2), so that the reconstructed photo can be expressed as

Pr = Up·bp = Ap·Vp·Λp^(-1/2)·bp = Ap·cp        (6)

where cp = Vp·Λp^(-1/2)·bp is an M-dimensional column vector. Formula (6) can therefore be expanded as

Pr = Σi cpi·Pi        (7)

This shows that the reconstructed photo is in fact the optimal linear combination of the M training sample images approximating the original image with minimum mean-square error; the coefficients in cp describe the contribution of each sample image. Reconstructed photos generated by this method are shown in the row of Fig. 3 labeled "reconstructed photo".
Replacing each sample photo Pi in formula (7) with its corresponding portrait Si, as shown in Fig. 2, we obtain

Sr = Σi cpi·Si = As·cp

Because of the structural similarity between photos and portraits, the reconstructed portrait Sr should be similar to the real portrait. If a sample photo Pi contributes much to the reconstruction of a face photo, its corresponding sample portrait Si will contribute much to the reconstructed portrait. As an extreme example, suppose a particular sample photo Pk has unit weight cpk = 1 in the reconstruction and all other sample photos have weight zero, i.e. the reconstructed photo is identical to that sample photo; then the reconstructed portrait is obtained simply by substituting the corresponding portrait Sk. By such replacement, a photo image can be converted into a pseudo-portrait.
In brief, converting a photo into a portrait proceeds by the following steps:
1. First compute the eigenvectors Vp and eigenvalues Λp of Ap^T·Ap, from which the eigenvector matrix Up of the photo training set is computed.
2. Project the photo onto the feature space Up to compute its eigenface weight vector bp; in addition, compute cp = Vp·Λp^(-1/2)·bp.
3. Reconstruct the portrait Sr from As and cp: once cp has been computed, the pseudo-portrait is obtained as Sr = As·cp.
As mentioned earlier, before the computation begins the mean photo image mp, generated from the original photos, is subtracted from the photo training set, and the mean portrait ms is subtracted from the portrait training set. A final step therefore adds the mean portrait back: Sr = As·cp + ms.
Fig. 3 shows a comparative example of real portraits and reconstructed portraits; the similarity between the two is obvious.
Although the above discussion concerns mainly photo-to-portrait conversion, the opposite conversion can clearly be done in the same way. For example, a pseudo-photo can be obtained as Pr = Ap·cs, where cs = Vs·Λs^(-1/2)·bs.
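The three conversion steps, including the mean-portrait handling just described, can be sketched as follows. This is a hedged illustration on synthetic data in which the "portraits" are, by construction, a fixed linear transform of the photos, so a training photo maps exactly onto its own portrait; with real data the correspondence would only be approximate.

```python
import numpy as np

def photo_to_pseudo_portrait(P, A_p, A_s, m_p, m_s):
    """Steps 1-3 above: convert one photo P into a pseudo-portrait.
    A_p / A_s are N x M matrices of mean-subtracted training photos and
    portraits; m_p / m_s are the mean photo and mean portrait."""
    # Step 1: eigen-decompose the small matrix Ap^T Ap
    eigvals, V_p = np.linalg.eigh(A_p.T @ A_p)
    keep = eigvals > eigvals.max() * 1e-10
    eigvals, V_p = eigvals[keep], V_p[:, keep]
    U_p = A_p @ V_p / np.sqrt(eigvals)      # Up = Ap Vp Lambda^(-1/2)
    # Step 2: project the mean-subtracted photo, then form cp
    b_p = U_p.T @ (P - m_p)
    c_p = V_p @ (b_p / np.sqrt(eigvals))    # cp = Vp Lambda^(-1/2) bp
    # Step 3: blend the training portraits with the photo's weights
    return A_s @ c_p + m_s                  # Sr = As cp + ms

# Synthetic training pairs: portrait = fixed linear "style" of the photo
rng = np.random.default_rng(1)
photos = rng.standard_normal((64, 6))
portraits = (rng.standard_normal((64, 64)) * 0.1) @ photos
m_p, m_s = photos.mean(axis=1), portraits.mean(axis=1)
A_p = photos - m_p[:, None]
A_s = portraits - m_s[:, None]
S_r = photo_to_pseudo_portrait(photos[:, 0], A_p, A_s, m_p, m_s)
print(np.allclose(S_r, portraits[:, 0]))  # True
```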
Portrait recognition
Once photos have been converted into portraits, recognizing a portrait among a large number of photos becomes easy.
The specific algorithm is summarized as follows:
1. Using the photo-to-portrait conversion algorithm described above, compute a pseudo-portrait for every photo in the photo gallery, with Up computed from Ap. The photo gallery need not be the same as the photo training set Ap, although using the same set can improve accuracy.
2. Compare the query portrait with each pseudo-portrait to identify the best-matching pseudo-portrait, and thereby find the best-matching photo in the gallery. The comparison of the pseudo-portraits with the query portrait can be carried out with the traditional eigenface method or any other suitable method, for example elastic graph matching [4][7].
Taking a traditional face comparison method as an example, the eigenvectors are first computed from the portrait training samples. The query portrait and the pseudo-portraits generated from the photo gallery are then projected onto the portrait eigenvectors, and the projection coefficients are used as the feature vectors for the final classification. The specific comparison algorithm is summarized as follows:
1. Project the query portrait onto the portrait feature space Us to compute its weight vector bs.
2. Compute the distance between bs and each bri, where bri is the weight vector computed for the pseudo-portrait generated from each photo in the gallery; the portrait is identified as the face with the minimum distance between the two vectors.
In the above algorithm, the photos in the gallery are first converted into pseudo-portraits based on the photo feature space Up, and recognition is then performed in the portrait feature space Us. Conversely, each query portrait can be converted into a pseudo-photo based on the portrait feature space and then recognized in the photo feature space by the eigenface method or any other suitable method.
Both approaches use two sets of reconstruction coefficients, cp and cs, where cp represents the weights for reconstructing a photo from the photo training set, and cs the weights for reconstructing a portrait from the portrait training set. In fact, to compare a photo with a portrait, their corresponding reconstruction coefficient vectors cp and cs can also be used directly as feature vectors for recognition.
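Combining the conversion and comparison steps above gives the following retrieval sketch: every gallery photo is converted into a (mean-subtracted) pseudo-portrait, the query portrait and the pseudo-portraits are projected onto the portrait feature space Us, and the nearest projection wins. Synthetic linearly-related data as in the earlier sketch; all names are illustrative, not from the patent.

```python
import numpy as np

def small_decomp(A):
    """Vp and positive eigenvalues of A^T A (small-matrix trick)."""
    eigvals, V = np.linalg.eigh(A.T @ A)
    keep = eigvals > eigvals.max() * 1e-10
    return V[:, keep], eigvals[keep]

def retrieve(query_portrait, gallery_photos, A_p, A_s, m_p, m_s):
    """Index of the gallery photo whose pseudo-portrait lies nearest to
    the query portrait in the portrait feature space Us."""
    V_p, lam_p = small_decomp(A_p)
    V_s, lam_s = small_decomp(A_s)
    U_p = A_p @ V_p / np.sqrt(lam_p)        # photo feature space Up
    U_s = A_s @ V_s / np.sqrt(lam_s)        # portrait feature space Us
    b_query = U_s.T @ (query_portrait - m_s)
    dists = []
    for i in range(gallery_photos.shape[1]):
        b_p = U_p.T @ (gallery_photos[:, i] - m_p)
        c_p = V_p @ (b_p / np.sqrt(lam_p))
        pseudo = A_s @ c_p                  # mean-subtracted pseudo-portrait
        dists.append(np.linalg.norm(U_s.T @ pseudo - b_query))
    return int(np.argmin(dists))

# Synthetic photo/portrait pairs; gallery = the training photos themselves
rng = np.random.default_rng(2)
photos = rng.standard_normal((64, 6))
portraits = (rng.standard_normal((64, 64)) * 0.1) @ photos
m_p, m_s = photos.mean(axis=1), portraits.mean(axis=1)
A_p, A_s = photos - m_p[:, None], portraits - m_s[:, None]
best = retrieve(portraits[:, 2], photos, A_p, A_s, m_p, m_s)
print(best)  # 2  (the query portrait's own photo)
```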
As stated before, for an input photo the reconstruction coefficient vector over the photo training set is cp = Vp·Λp^(-1/2)·bp, where bp is the projection weight vector of the photo in the photo feature space. Similarly, for an input portrait the reconstruction coefficient vector over the portrait training set is cs = Vs·Λs^(-1/2)·bs, where bs is the projection weight vector of the input portrait in the portrait feature space. If we use cp and cs to compare the photo and the portrait directly, the recognition distance is defined as

d1 = || cp − cs ||

If instead we first generate a pseudo-portrait from the photo and then compute the distance in the portrait feature space, the distance is

d2 = || brs − bs ||

where brs is the weight vector of the pseudo-portrait projected onto the portrait feature space and bs is the weight vector of the real portrait projected onto the portrait feature space. Since the pseudo-portrait is Sr = As·cp, we compute brs as

brs = Us^T·Sr = Λs^(1/2)·Vs^T·cp

using Us = As·Vs·Λs^(-1/2) and As^T·As = Vs·Λs·Vs^T, and from cs = Vs·Λs^(-1/2)·bs we likewise obtain bs = Λs^(1/2)·Vs^T·cs. Finally, the distance d2 is

d2 = || Λs^(1/2)·Vs^T·(cp − cs) ||

Conversely, if we first generate a pseudo-photo from the portrait and then compute the distance in the photo feature space, the distance d3 is computed as

d3 = || Λp^(1/2)·Vp^T·(cp − cs) ||

The recognition distances in the three cases are different; their performance is compared in the tests below.
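The three distances d1, d2, d3 can be written compactly in terms of cp and cs, using d2 = ||Λs^(1/2)·Vs^T·(cp − cs)|| and d3 = ||Λp^(1/2)·Vp^T·(cp − cs)|| as our reading of the derivation. A hedged sketch on synthetic linearly-related data, where a true photo-portrait pair yields cp = cs and hence all three distances vanish:

```python
import numpy as np

def small_decomp(A):
    """V and positive eigenvalues of A^T A."""
    eigvals, V = np.linalg.eigh(A.T @ A)
    keep = eigvals > eigvals.max() * 1e-10
    return V[:, keep], eigvals[keep]

def three_distances(photo, portrait, A_p, A_s, m_p, m_s):
    V_p, lam_p = small_decomp(A_p)
    V_s, lam_s = small_decomp(A_s)
    U_p = A_p @ V_p / np.sqrt(lam_p)
    U_s = A_s @ V_s / np.sqrt(lam_s)
    # Reconstruction-coefficient vectors cp = Vp Lam^(-1/2) bp, etc.
    c_p = V_p @ (U_p.T @ (photo - m_p) / np.sqrt(lam_p))
    c_s = V_s @ (U_s.T @ (portrait - m_s) / np.sqrt(lam_s))
    diff = c_p - c_s
    d1 = np.linalg.norm(diff)                             # raw coefficients
    d2 = np.linalg.norm(np.sqrt(lam_s) * (V_s.T @ diff))  # portrait space
    d3 = np.linalg.norm(np.sqrt(lam_p) * (V_p.T @ diff))  # photo space
    return d1, d2, d3

rng = np.random.default_rng(3)
photos = rng.standard_normal((64, 6))
portraits = (rng.standard_normal((64, 64)) * 0.1) @ photos
m_p, m_s = photos.mean(axis=1), portraits.mean(axis=1)
A_p, A_s = photos - m_p[:, None], portraits - m_s[:, None]
match = three_distances(photos[:, 0], portraits[:, 0], A_p, A_s, m_p, m_s)
wrong = three_distances(photos[:, 0], portraits[:, 1], A_p, A_s, m_p, m_s)
print(np.allclose(match, 0), all(d > 1e-6 for d in wrong))  # True True
```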
As those skilled in the art will appreciate, this method can also be used to select, from a portrait set, the portrait matching a photo Pk. Here there are two further options:
A. Convert all portraits in the portrait set into pseudo-photos and compare them with the photo Pk. The comparison is carried out by comparing bp and br, where bp is the projection coefficient vector of Pk in Up and br is the projection coefficient vector of each generated pseudo-photo in Up. The distance formula can now be rewritten as d = || bp − br ||.
B. Convert the photo Pk into a pseudo-portrait Sk and compare it with the portraits in the portrait gallery. The comparison is carried out by comparing bs and br, where bs is the projection coefficient vector of each gallery portrait in Us and br is the projection coefficient vector of the pseudo-portrait Sk in Us. The distance formula can now be written as d = || bs − br ||.
Verification
To demonstrate the validity of the new algorithm, we ran a set of experiments comparing it with the traditional geometric feature method and the eigenface method. We built a database of 188 photo-portrait pairs drawn from 188 different people, of which 88 pairs were used as training data and the other 100 pairs for testing.
The experiments adopt the recognition protocol of FERET [6]. The gallery set used for testing consists of 100 face photos; the query set consists of 100 face portraits. The cumulative match rate is used to evaluate the results: it measures the percentage of queries whose correct answer appears among the top n matches, where n is called the rank.
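The cumulative match rate just described can be computed directly from each query's ranked candidate list. A minimal sketch (function and variable names are illustrative, not from the patent):

```python
def cumulative_match_rate(ranked_lists, correct, max_rank=10):
    """ranked_lists[i] is the gallery ordering returned for query i
    (best match first); correct[i] is the true gallery identity.
    Returns, for each rank n = 1..max_rank, the percentage of queries
    whose correct answer appears among the top n matches."""
    num_queries = len(ranked_lists)
    rates = []
    for n in range(1, max_rank + 1):
        hits = sum(1 for ranks, truth in zip(ranked_lists, correct)
                   if truth in ranks[:n])
        rates.append(100.0 * hits / num_queries)
    return rates

# Tiny example: 4 queries; correct answers found at ranks 1, 1, 3, and not found
ranked = [[0, 9, 8], [1, 7, 6], [5, 4, 2], [9, 8, 7]]
truth = [0, 1, 2, 3]
print(cumulative_match_rate(ranked, truth, max_rank=3))
# [50.0, 50.0, 75.0]
```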
A. Comparison with traditional methods
Table 1 shows the cumulative match rates, up to rank ten, obtained with the three methods.
Table 1. Cumulative match rates (%) of the three methods.
| Rank | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Geometric method | 30 | 37 | 45 | 48 | 53 | 59 | 62 | 66 | 67 | 70 |
| Eigenface method | 31 | 43 | 48 | 55 | 61 | 63 | 65 | 65 | 67 | 67 |
| Portrait transform method | 71 | 78 | 81 | 84 | 88 | 90 | 94 | 94 | 95 | 96 |
The experimental results of the geometric method and the eigenface method are unsatisfactory: the rank-1 match rate is only about 30%, and the rank-10 cumulative match rate is 70%. Given the great difference between photos and portraits, the poor result of the eigenface method is to be expected. From the result of the geometric feature method we can conclude that a photo resembles its portrait not merely through geometric similarity of the face. As with caricatures, portraits often exaggerate the size of facial features: if someone's nose is larger than average, the artist draws it even larger; conversely, if the nose is smaller than average, it is shrunk further to achieve the exaggeration.
The feature-portrait transform method improves recognition accuracy greatly: the rank-10 cumulative match rate reaches 96%, and the rank-1 accuracy is more than twice that of the other two methods, clearly demonstrating the superiority of the new method. The result also depends on the quality of the portraits; portraits from the hand of an equally skilled artist can improve accuracy. As Fig. 1 shows, not every portrait closely resembles its photo: the portraits in the first row of Fig. 1 are very similar to their corresponding photos, but those in the second row differ considerably. The significance of this result is that it shows the new method to be far better than traditional face recognition methods.
B. Comparison of the three distance measures
In this part, a set of experiments compares the three distance measures d1, d2, and d3 described earlier, using the same data set as above. The experimental results are shown in Table 2.
Table 2. Cumulative match rates (%) obtained with the three distances.
| Rank | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| d1 | 20 | 49 | 59 | 65 | 69 | 73 | 75 | 76 | 81 | 82 |
| d2 | 71 | 78 | 81 | 84 | 88 | 90 | 94 | 94 | 95 | 96 |
| d3 | 57 | 70 | 77 | 79 | 83 | 84 | 85 | 86 | 87 | 88 |
The test results show that among the three distances, d1 performs worst. This is no surprise: cp and cs represent coefficients over the non-orthogonal spaces spanned by the training photos and portraits respectively, so they cannot correctly reflect the distance between face images. d2 and d3 are distances computed in orthogonal feature spaces and therefore give better results. An interesting result is that d2 is consistently better than d3. It appears that the portrait feature space can distinguish different faces better than the photo feature space. This may be because artists tend to capture and emphasize the distinctive features of a face when drawing, making portraits easier to distinguish. The tests above seem to confirm this, since mapping to the portrait feature space gives better recognition results than mapping to the photo feature space.
The better result of d2 has another possible explanation. To compute d2, a photo is converted into a pseudo-portrait, whereas to compute d3, a portrait must be converted into a pseudo-photo. In general, compressing information is more stable than amplifying it. Since a photo contains richer information than a portrait, converting a photo into a portrait is easier. As an extreme example, suppose a portrait contains only a simple outline of the facial features: it is easy to derive this outline from a face photo, but very difficult to reconstruct the photo from simple lines. Hence the computation of d2 gives better results because a photo can be converted into a portrait more stably.
C. Comparison with human recognition
The following two experiments compare the new method of the invention with the human ability to recognize portraits. This comparison matters because in police and judicial work a suspect's portrait is usually disseminated widely through the mass media, in the expectation that people will recognize the real person after seeing the portrait. If the automatic recognition ability of a computer can be shown to equal the human ability to recognize portraits, computers can be used to carry out large-scale, systematic portrait retrieval over large photo databases.
In the first experiment, a subject views a portrait for a period of time; the portrait is then taken away before the subject begins viewing photos. The subject memorizes the portrait as well as possible and searches the photo database without it, selecting from the whole set the 10 photos most similar to the portrait and ranking them by similarity. This procedure is close to reality: people see a suspect's portrait only briefly on television or in a newspaper, and must then find a similar-looking person in real life from memory.
In the second experiment, the subject is allowed to view the portrait while searching the photo gallery; this result serves as a benchmark for comparison with the automatic recognition system. The results of the two tests are shown in Fig. 4. The human recognition result of the first experiment is much lower than the computer's. This is not only because photos and portraits differ, but also because a portrait is hard to remember accurately, so the memory becomes distorted. In fact, people easily distinguish familiar faces, such as relatives or famous public figures, but not strangers; when a portrait and a photo are not placed side by side, people have difficulty matching the two.
When the subject is allowed to consult the portrait while searching the database, the accuracy rises to 73%, similar to the computer's recognition rate. Human recognition ability, however, does not increase with rank, whereas the computer's rank-10 cumulative match rate rises to 96%. This shows that the computer's portrait recognition ability is at least comparable to a human's. We can therefore now search large databases automatically with a portrait just as with a photo, which is extremely important for judicial departments in cases where no photo is available.
The present invention uses photo-to-portrait/portrait-to-photo conversion to propose a new face portrait recognition algorithm. Converting photos into portraits makes the automatic matching of photos and portraits more effective. Besides improving recognition speed and efficiency, the recognition capability of the new method even exceeds that of the human eye.
Although the discussion above concentrates only on face photo-portrait/portrait-photo recognition, a person skilled in the art will readily see that the present invention can also be used to recognize other kinds of objects, such as buildings. Although it is intended mainly for legal departments, its use in other fields is also possible.
In portrait recognition, using hair information can sometimes improve the recognition rate, but because hair is highly variable it should not be used in many situations; whether to use it can be decided according to the actual circumstances.
The use of the present invention has been set forth in detail by way of example. Clearly, those skilled in the art may make modifications and adaptations to the invention; it should be noted that such modifications and adaptations also fall within the scope of the invention as defined in the claims that follow, and that the use of the invention should not be limited only by the examples and figures given herein.
Claims (42)
1. A portrait-photo conversion method that uses a photo set A_p and its corresponding portrait set A_s to generate a pseudo-portrait S_r for a photo P_k, characterized in that A_p and A_s respectively contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo eigenspace U_p is computed from A_p, the method comprising the steps of:
A) projecting P_k onto U_p to compute the projection coefficients b_p, so that P_k = U_p b_p;
B) generating S_r from A_s and b_p.
2. The portrait-photo conversion method of claim 1, further comprising the steps of:
A) computing c_p = V_p Λ_p^(-1/2) b_p, where each entry c_pi is the weighting coefficient of photo P_i for reconstructing P_k, so that P_k = A_p c_p = Σ_i c_pi P_i, V_p is the unit eigenvector matrix of A_p^T A_p, and Λ_p is the eigenvalue matrix of A_p^T A_p;
B) obtaining S_r by the formula S_r = A_s c_p = Σ_i c_pi S_i.
3. The portrait-photo conversion method of claim 1, wherein M ≥ 80.
4. The portrait-photo conversion method of claim 1, wherein all portraits in A_s are prepared by the same artist.
5. The portrait-photo conversion method of claim 1, wherein P_i = Q_i − m_p, where Q_i is the original photo of P_i and m_p is the mean photo, and S_i = T_i − m_s, where T_i is the original portrait of S_i and m_s is the mean portrait.
6. The portrait-photo conversion method of claim 5, further comprising a step of generating a viewable pseudo-portrait T_r: T_r = S_r + m_s.
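The conversion of claims 1-6 can be sketched in NumPy. This is one illustrative reading, not the patented implementation: images are assumed to be flattened column vectors, the function and variable names are invented, and the reconstruction weights c_p = V_p Λ_p^(-1/2) b_p follow the definitions of V_p and Λ_p given in claim 2.

```python
import numpy as np

def photo_to_pseudo_portrait(Q_k, photos, portraits):
    """Eigentransformation sketch of claims 1-6 (illustrative only).

    photos, portraits: (d, M) arrays of M paired training images,
    flattened into column vectors.  Q_k: a (d,) original photo.
    """
    m_p = photos.mean(axis=1, keepdims=True)       # mean photo m_p
    m_s = portraits.mean(axis=1, keepdims=True)    # mean portrait m_s
    A_p = photos - m_p                             # centered photo set A_p
    A_s = portraits - m_s                          # centered portrait set A_s

    # Eigen-decomposition of the small M x M matrix A_p^T A_p (claim 2)
    eigvals, V_p = np.linalg.eigh(A_p.T @ A_p)
    keep = eigvals > 1e-10                         # drop near-null directions
    eigvals, V_p = eigvals[keep], V_p[:, keep]

    U_p = A_p @ (V_p * eigvals ** -0.5)            # photo eigenspace U_p
    b_p = U_p.T @ (Q_k - m_p.ravel())              # projection coefficients b_p
    c_p = (V_p * eigvals ** -0.5) @ b_p            # reconstruction weights c_p
    S_r = A_s @ c_p                                # pseudo-portrait S_r = A_s c_p
    return S_r + m_s.ravel()                       # viewable T_r = S_r + m_s (claim 6)
```

If the training portraits depend linearly on the training photos, a photo lying in the span of A_p maps exactly to its paired portrait; for real faces the mapping is only approximate.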
7. A portrait-photo conversion method that uses a portrait set A_s and the corresponding photo set A_p to generate a pseudo-photo P_r for a portrait S_k, characterized in that A_s and A_p respectively contain M samples S_i and P_i, i.e. A_s = [S_1, S_2, ..., S_M] and A_p = [P_1, P_2, ..., P_M], and the portrait eigenspace U_s is computed from A_s, the method comprising the steps of:
A) projecting S_k onto U_s to compute the projection coefficients b_s, so that S_k = U_s b_s;
B) generating P_r from A_p and b_s.
8. The portrait-photo conversion method of claim 7, comprising the steps of:
A) computing c_s = V_s Λ_s^(-1/2) b_s, where each entry c_si is the weighting coefficient of portrait S_i for reconstructing S_k, so that S_k = A_s c_s = Σ_i c_si S_i, V_s is the unit eigenvector matrix of A_s^T A_s, and Λ_s is the eigenvalue matrix of A_s^T A_s;
B) generating P_r by the formula P_r = A_p c_s = Σ_i c_si P_i.
9. The portrait-photo conversion method of claim 7, wherein M ≥ 80.
10. The portrait-photo conversion method of claim 7, wherein all portraits in A_s are prepared by the same artist.
11. The portrait-photo conversion method of claim 7, wherein P_i = Q_i − m_p, where Q_i is the original photo of P_i and m_p is the mean photo, and S_i = T_i − m_s, where T_i is the original portrait of S_i and m_s is the mean portrait.
12. The portrait-photo conversion method of claim 11, further comprising a step of generating a viewable pseudo-photo Q_r: Q_r = P_r + m_p.
13. A portrait-photo recognition method that uses a photo set A_p and the corresponding portrait set A_s to find, in a photo gallery, the photo P_k that best matches a portrait S_k, characterized in that each photo in the gallery is denoted P_Gi, A_p and A_s respectively contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo eigenspace U_p and the portrait eigenspace U_s are computed from A_p and A_s respectively, the method comprising the steps of:
-- generating a pseudo-portrait S_r for each photo P_Gi in the gallery, by
A) projecting P_Gi onto U_p to compute the projection coefficients b_p, so that P_Gi = U_p b_p;
B) generating S_r from A_s and b_p;
-- comparing the pseudo-portraits S_r with S_k to identify the best-matching pseudo-portrait S_rk, whose corresponding photo in the gallery is the sought P_k.
14. The portrait-photo recognition method of claim 13, wherein M ≥ 80.
15. The portrait-photo recognition method of claim 13, wherein all portraits in A_s are prepared by the same artist.
17. The portrait-photo recognition method of claim 13, wherein the best-matching pseudo-portrait S_rk is identified by:
-- for each pseudo-portrait S_r, projecting S_r onto U_s to compute the corresponding projection coefficients b_r, S_r = U_s b_r;
-- projecting S_k onto U_s to compute the corresponding projection coefficients b_s, S_k = U_s b_s;
-- finding the pseudo-portrait S_rk whose projection coefficients b_r differ least from b_s; the photo corresponding to S_rk in the gallery is then the photo P_k that best matches S_k.
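The coefficient comparison of claim 17 reduces to projecting everything onto U_s and taking the nearest coefficient vector. A minimal sketch, assuming U_s has already been computed with orthonormal columns and the centered pseudo-portraits have already been generated; all names here are illustrative:

```python
import numpy as np

def best_match_by_projection(pseudo_portraits, S_k, U_s):
    """Claim 17 sketch: pick the pseudo-portrait whose projection
    coefficients in U_s are closest to those of the query portrait S_k.

    pseudo_portraits: (d, N) centered pseudo-portraits, one per gallery photo.
    U_s: (d, k) portrait eigenspace with orthonormal columns.
    """
    b_s = U_s.T @ S_k                        # coefficients b_s of the query
    B_r = U_s.T @ pseudo_portraits           # coefficients b_r, one column each
    d = np.linalg.norm(B_r - b_s[:, None], axis=0)
    return int(np.argmin(d))                 # gallery index of the best match P_k
```

Because U_s is shared by the query and all pseudo-portraits, the whole comparison is two matrix products and one norm per gallery entry.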
18. The portrait-photo recognition method of claim 13, wherein the pseudo-portrait S_r is generated for each photo P_Gi in the gallery by the steps of:
A) computing c_p = V_p Λ_p^(-1/2) b_p, the weighting coefficients of each photo P_i in the photo set A_p for reconstructing P_Gi, so that P_Gi = A_p c_p, where V_p is the unit eigenvector matrix of A_p^T A_p and Λ_p is the eigenvalue matrix of A_p^T A_p;
B) obtaining S_r by S_r = A_s c_p.
19. The portrait-photo recognition method of claim 18, wherein the best-matching pseudo-portrait S_rk is identified by:
-- projecting S_k onto U_s to compute the projection coefficients b_s, S_k = U_s b_s;
-- computing c_s = V_s Λ_s^(-1/2) b_s, the weighting coefficients of each portrait S_i in the portrait set A_s for reconstructing S_k, where V_s is the unit eigenvector matrix of A_s^T A_s and Λ_s is the eigenvalue matrix of A_s^T A_s;
-- finding the pseudo-portrait S_rk having the minimum value of d_2 with respect to S_k, thereby identifying the best-matching P_k, by the formula d_2 = ||c_p − c_s||.
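Claims 13 and 18-19 combined can be read as the following sketch. The distance d_2 is assumed here to be the Euclidean distance between the reconstruction weight vectors c_p and c_s, and the function names and data layout (images as column vectors) are illustrative:

```python
import numpy as np

def _weights(A):
    """Return a function mapping a centered image x to its reconstruction
    weight vector c = V @ Lambda^(-1/2) @ (U.T @ x), so that A @ c
    reconstructs x (one weight per training sample)."""
    eigvals, V = np.linalg.eigh(A.T @ A)
    keep = eigvals > 1e-10                   # drop near-null directions
    eigvals, V = eigvals[keep], V[:, keep]
    W = V * eigvals ** -0.5                  # V @ Lambda^(-1/2)
    U = A @ W                                # orthonormal eigenspace
    return lambda x: W @ (U.T @ x)

def match_portrait(S_k, gallery, photos, portraits):
    """Rank gallery photos against a query portrait S_k (claims 13, 18-19
    style).  All images are column vectors; gallery is (d, N)."""
    m_p = photos.mean(axis=1, keepdims=True)
    m_s = portraits.mean(axis=1, keepdims=True)
    w_p = _weights(photos - m_p)             # c_p for centered photos
    w_s = _weights(portraits - m_s)          # c_s for centered portraits
    c_s = w_s(S_k - m_s.ravel())             # weights of the query portrait
    d2 = [np.linalg.norm(w_p(g - m_p.ravel()) - c_s) for g in gallery.T]
    return int(np.argmin(d2))                # index of the best-matching P_k
```

Both weight vectors have one entry per training pair, so they are directly comparable even though one comes from the photo set and the other from the portrait set.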
20. A portrait-photo recognition method that uses a photo set A_p and the corresponding portrait set A_s to find, in a photo gallery, the photo P_k that best matches a portrait S_k, characterized in that each photo in the gallery is denoted P_Gi, A_p and A_s respectively contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo eigenspace U_p and the portrait eigenspace U_s are computed from A_p and A_s respectively, the method comprising the steps of:
-- generating a pseudo-photo P_r for S_k, by
A) projecting S_k onto U_s to compute the projection coefficients b_s, S_k = U_s b_s;
B) generating P_r from A_p and b_s;
-- identifying the best-matching P_k by comparing the pseudo-photo P_r with the photos in the gallery.
21. The portrait-photo recognition method of claim 20, wherein M ≥ 80.
22. The portrait-photo recognition method of claim 20, wherein all portraits in A_s are prepared by the same artist.
24. The portrait-photo recognition method of claim 20, wherein the best-matching photo P_k is identified by:
A) for each photo P_Gi in the gallery, projecting P_Gi onto U_p to compute the corresponding projection coefficients b_p, so that P_Gi = U_p b_p;
B) projecting the pseudo-photo P_r onto U_p to compute the corresponding projection coefficients b_r, so that P_r = U_p b_r;
C) identifying the best-matching P_k as the photo whose coefficients b_p differ least from b_r.
25. The portrait-photo recognition method of claim 24, further comprising the steps of:
A) computing c_s = V_s Λ_s^(-1/2) b_s, the weighting coefficients of each portrait S_i in the portrait set A_s, used to reconstruct P_r, where V_s is the unit eigenvector matrix of A_s^T A_s and Λ_s is the eigenvalue matrix of A_s^T A_s;
B) for each photo P_Gi in the gallery, computing c_p = V_p Λ_p^(-1/2) b_p, the weight vector of each photo P_i in the photo set A_p for reconstructing P_Gi, where V_p is the unit eigenvector matrix of A_p^T A_p and Λ_p is the eigenvalue matrix of A_p^T A_p;
-- identifying the best-matching P_k by the minimum value of d_3, by the formula d_3 = ||c_p − c_s||.
26. A portrait-photo recognition method that uses a photo set A_p and the corresponding portrait set A_s to find, in a portrait gallery, the portrait S_k that best matches a photo P_k, characterized in that each portrait in the gallery is denoted S_Gi, A_p and A_s respectively contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo eigenspace U_p and the portrait eigenspace U_s are computed from A_p and A_s respectively, the method comprising the steps of:
-- generating a pseudo-photo P_r for each portrait S_Gi in the portrait gallery, by
A) projecting S_Gi onto U_s to compute the projection coefficients b_s, S_Gi = U_s b_s;
B) generating P_r from A_p and b_s;
-- comparing the pseudo-photos P_r with P_k to identify the best-matching pseudo-photo P_rk, whose corresponding portrait in the portrait gallery is the sought S_k.
27. The portrait-photo recognition method of claim 26, wherein M ≥ 80.
28. The portrait-photo recognition method of claim 26, wherein all portraits in A_s are prepared by the same artist.
30. The portrait-photo recognition method of claim 26, wherein the best-matching pseudo-photo P_rk is identified by:
-- for each pseudo-photo P_r, projecting P_r onto U_p to compute the corresponding projection coefficients b_r, so that P_r = U_p b_r;
-- projecting P_k onto U_p to compute the corresponding projection coefficients b_p, so that P_k = U_p b_p;
-- finding the pseudo-photo P_rk whose projection coefficients b_r differ least from b_p; the portrait corresponding to P_rk in the portrait gallery is then the sought S_k.
31. The portrait-photo recognition method of claim 26, wherein generating the pseudo-photo P_r for each portrait S_Gi in the portrait gallery comprises the steps of:
A) computing c_s = V_s Λ_s^(-1/2) b_s, the weighting coefficients of each portrait S_i in the portrait set A_s for reconstructing S_Gi, so that S_Gi = A_s c_s, where V_s is the unit eigenvector matrix of A_s^T A_s and Λ_s is the eigenvalue matrix of A_s^T A_s;
B) generating the pseudo-photo P_r by P_r = A_p c_s.
32. The portrait-photo recognition method of claim 31, wherein the best-matching pseudo-photo P_rk is identified by:
-- projecting P_k onto U_p to compute the projection coefficients b_p, P_k = U_p b_p;
-- computing c_p = V_p Λ_p^(-1/2) b_p, the weighting coefficients of each photo P_i in the photo set A_p, where V_p is the unit eigenvector matrix of A_p^T A_p and Λ_p is the eigenvalue matrix of A_p^T A_p;
-- identifying the best-matching S_k from the pseudo-photo P_r having the minimum value of d_4, by the formula d_4 = ||c_s − c_p||.
33. A portrait-photo recognition method that uses a photo set A_p and the corresponding portrait set A_s to find, in a portrait gallery, the portrait S_k that best matches a photo P_k, characterized in that each portrait in the gallery is denoted S_Gi, A_p and A_s respectively contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo eigenspace U_p and the portrait eigenspace U_s are computed from A_p and A_s respectively, the method comprising the steps of:
-- generating a pseudo-portrait S_r for P_k, by
A) projecting P_k onto U_p to compute the projection coefficients b_p, so that P_k = U_p b_p;
B) generating S_r from A_s and b_p;
-- identifying the best-matching S_k by comparing the pseudo-portrait S_r with the portraits in the portrait gallery.
34. The portrait-photo recognition method of claim 33, wherein M ≥ 80.
35. The portrait-photo recognition method of claim 33, wherein all portraits in A_s are prepared by the same artist.
37. The portrait-photo recognition method of claim 33, wherein the best-matching portrait S_k is identified by:
-- for each portrait S_Gi, projecting S_Gi onto U_s to compute the corresponding projection coefficients b_s, so that S_Gi = U_s b_s;
-- projecting the pseudo-portrait S_r onto U_s to compute the corresponding projection coefficients b_r, so that S_r = U_s b_r;
-- identifying the best-matching S_k as the portrait whose coefficients b_s differ least from b_r.
38. The portrait-photo recognition method of claim 37, further comprising the steps of:
A) computing c_p = V_p Λ_p^(-1/2) b_p, the weighting coefficients of each photo P_i in the photo set A_p, used to reconstruct S_r, where V_p is the unit eigenvector matrix of A_p^T A_p and Λ_p is the eigenvalue matrix of A_p^T A_p;
B) for each portrait S_Gi in the portrait gallery, computing c_s = V_s Λ_s^(-1/2) b_s, the weighting coefficients of each portrait S_i in the portrait set A_s for reconstructing S_Gi, where V_s is the unit eigenvector matrix of A_s^T A_s and Λ_s is the eigenvalue matrix of A_s^T A_s;
-- identifying the best-matching S_k by the minimum value of d_5, by the formula d_5 = ||c_p − c_s||.
39. A portrait-photo conversion system that uses a photo set A_p and the corresponding portrait set A_s to generate a pseudo-portrait S_r for a photo P_k, wherein A_p and A_s respectively contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo eigenspace U_p is computed from A_p, using the algorithm set forth in claim 1.
40. A portrait-photo conversion computer system that uses a portrait set A_s and its corresponding photo set A_p to generate a pseudo-photo P_r for a portrait S_k, wherein A_p and A_s respectively contain M samples P_i and S_i, i.e. A_s = [S_1, S_2, ..., S_M] and A_p = [P_1, P_2, ..., P_M], and the portrait eigenspace U_s is computed from A_s, using the algorithm set forth in claim 7.
41. A portrait-photo recognition computer system that uses a photo set A_p and the corresponding portrait set A_s to find, in a photo gallery containing a large number of photos, the photo P_k that best matches a portrait S_k, wherein each photo in the gallery is denoted P_Gi, A_p and A_s respectively contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo eigenspace U_p and the portrait eigenspace U_s are computed from A_p and A_s respectively, using the algorithms set forth in claims 13 and 20.
42. A portrait-photo recognition computer system that uses a photo set A_p and the corresponding portrait set A_s to find, in a portrait gallery containing a large number of portraits, the portrait S_k that best matches a photo P_k, wherein each portrait in the gallery is denoted S_Gi, A_p and A_s respectively contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo eigenspace U_p and the portrait eigenspace U_s are computed from A_p and A_s respectively, using the algorithms set forth in claims 26 and 33.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
HK02106852A HK1052831A2 (en) | 2002-09-19 | 2002-09-19 | Sketch-photo recognition |
HK02106852.2 | 2002-09-19 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1701339A true CN1701339A (en) | 2005-11-23 |
CN1327386C CN1327386C (en) | 2007-07-18 |
Family
ID=30130369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB038252570A Expired - Lifetime CN1327386C (en) | 2002-09-19 | 2003-09-19 | Portrait-photo recognition |
Country Status (4)
Country | Link |
---|---|
CN (1) | CN1327386C (en) |
AU (1) | AU2003271508A1 (en) |
HK (1) | HK1052831A2 (en) |
WO (1) | WO2004027692A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103034849B (en) * | 2012-12-19 | 2016-01-13 | 香港应用科技研究院有限公司 | Estimate for the perception variance level of cartographical sketching in sketch mates with photo |
WO2017020140A1 (en) * | 2015-08-03 | 2017-02-09 | Orand S.A. | System for searching for images by sketches using histograms of cell orientations and extraction of contours based on mid-level features |
CN108805951B (en) * | 2018-05-30 | 2022-07-19 | 重庆辉烨物联科技有限公司 | Projection image processing method, device, terminal and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5835616A (en) * | 1994-02-18 | 1998-11-10 | University Of Central Florida | Face detection using templates |
KR19980703120A (en) * | 1995-03-20 | 1998-10-15 | 조안나 티. 라우 | Image Identification System and Method |
ATE258322T1 (en) * | 1998-12-02 | 2004-02-15 | Univ Manchester | DETERMINATION OF FACIAL UNDERSPACES |
2002
- 2002-09-19 HK HK02106852A patent/HK1052831A2/en not_active IP Right Cessation
2003
- 2003-09-19 CN CNB038252570A patent/CN1327386C/en not_active Expired - Lifetime
- 2003-09-19 WO PCT/CN2003/000797 patent/WO2004027692A1/en active Application Filing
- 2003-09-19 AU AU2003271508A patent/AU2003271508A1/en not_active Abandoned
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101159064B (en) * | 2007-11-29 | 2010-09-01 | 腾讯科技(深圳)有限公司 | Image generation system and method for generating image |
WO2015143580A1 (en) * | 2014-03-28 | 2015-10-01 | Huawei Technologies Co., Ltd | Method and system for verifying facial data |
CN106663184A (en) * | 2014-03-28 | 2017-05-10 | 华为技术有限公司 | Method and system for verifying facial data |
US10339177B2 (en) | 2014-03-28 | 2019-07-02 | Huawei Technologies Co., Ltd. | Method and a system for verifying facial data |
WO2016026064A1 (en) * | 2014-08-20 | 2016-02-25 | Xiaoou Tang | A method and a system for estimating facial landmarks for face image |
CN107004136A (en) * | 2014-08-20 | 2017-08-01 | 北京市商汤科技开发有限公司 | For the method and system for the face key point for estimating facial image |
CN107004136B (en) * | 2014-08-20 | 2018-04-17 | 北京市商汤科技开发有限公司 | Method and system for the face key point for estimating facial image |
CN106412590A (en) * | 2016-11-21 | 2017-02-15 | 西安电子科技大学 | Image processing method and device |
CN106412590B (en) * | 2016-11-21 | 2019-05-14 | 西安电子科技大学 | A kind of image processing method and device |
CN112368708A (en) * | 2018-07-02 | 2021-02-12 | 斯托瓦斯医学研究所 | Facial image recognition using pseudo-images |
CN112368708B (en) * | 2018-07-02 | 2024-04-30 | 斯托瓦斯医学研究所 | Facial image recognition using pseudo-images |
Also Published As
Publication number | Publication date |
---|---|
WO2004027692A1 (en) | 2004-04-01 |
CN1327386C (en) | 2007-07-18 |
AU2003271508A1 (en) | 2004-04-08 |
HK1052831A2 (en) | 2003-09-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CX01 | Expiry of patent term | ||
CX01 | Expiry of patent term |
Granted publication date: 20070718 |