CN1327386C - Portrait-photo recognition - Google Patents

Portrait-photo recognition

Info

Publication number
CN1327386C
Authority
CN
China
Prior art date
Legal status
Expired - Lifetime
Application number
CNB038252570A
Other languages
Chinese (zh)
Other versions
CN1701339A (en)
Inventor
汤晓鸥
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Publication of CN1701339A
Application granted
Publication of CN1327386C

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions


Abstract

The invention provides a novel portrait-based photo retrieval system. The new method greatly reduces the difference between photos and portraits and matches them effectively. Experimental data also confirm the effectiveness of the algorithm.

Description

Portrait-photo recognition
Technical field
For judicial departments, the automatic retrieval and recognition of face photos in police photo databases is of crucial importance. It can help investigators confirm a suspect or effectively narrow the range of suspects. In most cases, however, a photo of the suspect is not available. The best substitute is a portrait of the suspect drawn from an eyewitness's description.
The invention relates to a method of using eigenfaces to find the photo matching a given portrait in a photo database, or to find the portrait matching a given photo in a portrait database.
Background art
In recent years, automatic face recognition technology has attracted wide attention owing to growing application demand in fields such as justice, video surveillance, banking and security systems. Compared with other technologies (such as fingerprint recognition), face recognition has the advantages of being easy to use and low in cost. An operator can correct the recognition errors of a face recognition system without special machine training.
An important application of face recognition technology is assisting judicial departments in solving cases. For example, automatic retrieval of photos in a police photo database can help narrow the range of suspects quickly. Yet in most cases the judicial department cannot obtain a photo of the suspect. The best substitute is a portrait of the suspect drawn according to an eyewitness's description. Using a portrait to search the database for the corresponding photo has great potential value: it can not only help the police find the suspect, but also help the eyewitness and the artist revise the portrait using photos retrieved from the database.
Despite the important practical demand for a portrait-photo retrieval system, research in this field is scarce [1] [2]. This may be because building a large-scale face portrait database is very difficult.
Two traditional methods are used to match photos against photos in a database. They are described below.
A. Geometric feature method
The geometric feature method is the most direct one. Research on face recognition based on geometric features mostly concentrates on extracting facial features, such as the relative positions of the eyes, mouth and chin and other parameters. Although the geometric feature method is very easy to understand, it cannot capture enough information for stable face recognition. In particular, geometric features change with facial expression and scale; even different photos of the same person may show large variations in geometric features. A recent article compares the geometric feature method with template matching, and its conclusion favors template matching [3].
B. Eigenface method
At present, one of the most successful face recognition methods may be the eigenface method [9]. After a comprehensive comparison of various methods, the FERET test report classified it as one of the most effective methods [6]. A similar conclusion can be seen in Zhang et al. [8]. Although the eigenface method is affected by illumination, expression and pose, these factors matter little for the recognition of standard identification photographs.
The eigenface method characterizes a face with the Karhunen-Loeve Transform (KLT) for recognition. Once the eigenvectors of the covariance matrix of the face data set, also called eigenfaces, are obtained, a face image can be reconstructed by a suitably weighted linear combination of eigenfaces. For a given image, its weighting coefficients on the eigenfaces constitute a feature vector. For a new test image, the weighting coefficients are computed by projecting the image onto each eigenface. Classification then proceeds by comparing the distance between the weighting-coefficient vector of the test image and those of the images in the database.
Although the Karhunen-Loeve Transform is set forth in many textbooks and papers, we briefly review it here, particularly for photo recognition. When computing the Karhunen-Loeve Transform, a sample face image is represented by a column vector $\vec{P}_i'$, and the average face is given by

$\bar{P} = \frac{1}{M} \sum_{i=1}^{M} \vec{P}_i'$

where M is the number of training samples in the photo set $A_p$. Subtracting the average face from each image gives $\vec{P}_i = \vec{P}_i' - \bar{P}$. The photo training set forms the N x M matrix $A_p = [\vec{P}_1, \vec{P}_2, \ldots, \vec{P}_M]$, where N is the number of pixels in an image. The sample covariance matrix is estimated as

$W = A_p A_p^T$    (1)

where $A_p^T$ is the transpose of $A_p$.
Considering that the image data are very large, directly computing the eigenvectors of W is unrealistic with presently available computing capacity, so the principal eigenvectors are usually estimated instead. Because the number of sample images M is relatively small, the rank of W is M - 1. Therefore the eigenvectors of the smaller matrix $A_p^T A_p$ are computed first:

$(A_p^T A_p) V_p = V_p \Lambda_p$    (2)

where $V_p$ is the matrix of unit eigenvectors and $\Lambda_p$ is the diagonal eigenvalue matrix. Multiplying both sides by $A_p$, we obtain

$(A_p A_p^T) A_p V_p = A_p V_p \Lambda_p$    (3)

So the orthonormal eigenvector matrix of the covariance matrix W, i.e. the feature space $U_p$, is

$U_p = A_p V_p \Lambda_p^{-1/2}$    (4)

For a new face photo $\vec{P}$, the coefficients of its projection onto the eigenvector space form the vector $\vec{b}_p = U_p^T \vec{P}$, which is used as a feature vector for classification.
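Equations (1)-(4) can be illustrated numerically. The snippet below is a minimal NumPy sketch (the function and variable names are ours, and random toy data stands in for real face photos); it uses the small M x M matrix of formula (2) instead of the huge N x N covariance of formula (1):

```python
import numpy as np

def eigenface_space(A):
    """Compute the feature space U_p of formula (4) from a centered sample
    matrix A (N pixels x M samples), via the small M x M matrix A^T A."""
    lam, V = np.linalg.eigh(A.T @ A)          # (A^T A) V = V Lambda, formula (2)
    keep = lam > 1e-10                        # rank is M-1 after centering
    lam, V = lam[keep], V[:, keep]
    U = A @ V @ np.diag(lam ** -0.5)          # U = A V Lambda^{-1/2}, formula (4)
    return U, V, lam

rng = np.random.default_rng(0)
photos = rng.random((40, 5))                  # toy set: M=5 photos of N=40 pixels
A = photos - photos.mean(axis=1, keepdims=True)   # subtract the average face
U, V, lam = eigenface_space(A)
b = U.T @ A[:, 0]                             # projection coefficients of one photo
assert np.allclose(U.T @ U, np.eye(U.shape[1]))   # columns of U are orthonormal
```

Note that the columns of U are orthonormal eigenvectors of the covariance W = A A^T, even though only an M x M eigendecomposition was computed.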
However, because of the great difference between face photos and portraits, directly applying the eigenface method to portrait-based photo recognition may not give good results. In general, the difference between a photo and a portrait of the same person is greater than that between two photos of different people.
Summary of the invention
One object of the present invention is to provide a method or system for solving the problem of matching portraits and photos more effectively.
Another object of the present invention is to solve problems raised by one or more previous methods, or at least to provide a useful alternative to the public.
To achieve the above objects, the invention provides a method of generating a pseudo-portrait Sr for a photo Pk using a photo set Ap and a corresponding portrait set As. Ap and As each have M samples Pi and Si, written Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], and the photo feature space Up is computed from Ap. The method comprises the following steps:
a) project Pk onto Up and compute the projection coefficients bp, giving Pk = Up bp;
b) generate Sr using As and bp.
Another aspect of the present invention provides a method of generating a pseudo-photo Pr for a portrait Sk using a portrait set As and a corresponding photo set Ap. As and Ap each have M samples Si and Pi, written As = [S1, S2, ..., SM] and Ap = [P1, P2, ..., PM], and the portrait feature space Us is computed from As. The method comprises the following steps:
a) project Sk onto Us and compute the projection coefficients bs, giving Sk = Us bs;
b) generate Pr using Ap and bs.
A further aspect of the present invention uses a photo set Ap and a corresponding portrait set As to select, from a photo gallery containing a large number of photos each denoted PGi, the photo Pk that best matches a portrait Sk. Ap and As each have M samples Pi and Si, written Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], and the photo feature space Up and the portrait feature space Us are computed from Ap and As respectively. The method comprises the following steps:
- for each photo PGi in the gallery, generate a pseudo-portrait Sr by
a) projecting PGi onto Up and computing the projection coefficients bp, giving PGi = Up bp;
b) generating Sr using As and bp;
- by comparing the pseudo-portraits Sr with Sk, identify the best matching pseudo-portrait Srk; its corresponding photo in the gallery is the photo Pk that best matches the portrait Sk.
The fourth aspect of the present invention uses a photo set Ap and a corresponding portrait set As to select, from a photo gallery containing a large number of photos each denoted PGi, the photo Pk that best matches a portrait Sk. Ap and As each have M samples Pi and Si, written Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], and the photo feature space Up and the portrait feature space Us are computed from Ap and As respectively. The method comprises the following steps:
- generate a pseudo-photo Pr for the portrait Sk by
a) projecting Sk onto Us and computing the projection coefficients bs, giving Sk = Us bs;
b) generating Pr using Ap and bs;
- by comparison with the photos in the gallery, find the photo Pk that best matches the pseudo-photo Pr.
The fifth aspect of the present invention uses a photo set Ap and a corresponding portrait set As to select, from a portrait gallery containing a large number of portraits each denoted SGi, the portrait Sk that best matches a photo Pk. Ap and As each have M samples Pi and Si, written Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], and the photo feature space Up and the portrait feature space Us are computed from Ap and As respectively. The method comprises the following steps:
- for each portrait SGi in the gallery, generate a pseudo-photo Pr by
a) projecting SGi onto Us and computing the projection coefficients bs, giving SGi = Us bs;
b) generating Pr using Ap and bs;
- by comparing the pseudo-photos Pr with the photo Pk, find the best matching pseudo-photo Prk; its corresponding portrait in the gallery is the portrait Sk that best matches the photo Pk.
The sixth aspect of the present invention uses a photo set Ap and a corresponding portrait set As to select, from a portrait gallery containing a large number of portraits each denoted SGi, the portrait Sk that best matches a photo Pk. Ap and As each have M samples Pi and Si, written Ap = [P1, P2, ..., PM] and As = [S1, S2, ..., SM], and the photo feature space Up and the portrait feature space Us are computed from Ap and As respectively. The method comprises the following steps:
- generate a pseudo-portrait Sr for the photo Pk by
a) projecting Pk onto Up and computing the projection coefficients bp, giving Pk = Up bp;
b) generating Sr using As and bp;
- by comparison with the portraits in the gallery, find the portrait Sk that best matches the pseudo-portrait Sr.
The invention also covers implementing any of the above algorithms with a computer system.
Various alternatives and variations of the invention are described in later sections so that those familiar with the art can understand them.
The technical scheme of the invention is described in further detail below in conjunction with the drawings and embodiments.
Description of drawings
Fig. 1 shows examples of the face photos (top two rows) and portraits (bottom two rows) used in the invention.
Fig. 2 shows the algorithm of the invention for converting a photo into a portrait.
Fig. 3 shows examples of photo-to-portrait and portrait-to-photo transformations according to the invention.
Fig. 4 compares the cumulative match rates of different automatic recognition methods of the invention with that of human-eye recognition.
Embodiment
The method adopted by the invention is specified below with preferred embodiments and schematic diagrams.
Although not elaborated above, those of ordinary skill in the art will know that portraits and photos are digitized at a certain image resolution with input devices such as scanners or digital cameras, and that the computer system used to execute the program should have sufficient processing capacity and storage space.
The present invention requires a photo training set and a corresponding portrait training set, denoted Ap and As respectively. Ap and As each have M samples Pi and Si; although M can be any value greater than 1, preferably M >= 80 to improve accuracy. Ap and As are used to compute the corresponding feature spaces U mentioned above.
For each training photo image $\vec{P}_i$ there is a corresponding portrait. Each sample portrait, after subtracting the average portrait $\bar{S}$, gives a column vector $\vec{S}_i$. Analogous to the photo training set $A_p = [\vec{P}_1, \vec{P}_2, \ldots, \vec{P}_M]$, we have the corresponding portrait training set $A_s = [\vec{S}_1, \vec{S}_2, \ldots, \vec{S}_M]$.
Photo-to-portrait / portrait-to-photo conversion and recognition
1. Converting a photo into a portrait
As stated above, with the traditional eigenface method a face image can be reconstructed from the eigenfaces by the formula

$\vec{P}_r = U_p \vec{b}_p$    (5)

where $U_p$ is the photo feature space and $\vec{b}_p$ is the projection coefficient vector of $\vec{P}_r$ in the photo feature space. Similarly, a portrait can be reconstructed by the formula $\vec{S}_r = U_s \vec{b}_s$, where $U_s$ is the portrait feature space and $\vec{b}_s$ is the projection coefficient vector of $\vec{S}_r$ in the portrait feature space. But it is difficult to relate the projection coefficients of a corresponding photo and portrait across the two feature spaces, which greatly reduces photo-to-portrait and portrait-to-photo recognition capability.
To address this problem, the invention exploits the relation $U_p = A_p V_p \Lambda_p^{-1/2}$, so the reconstructed photo can be expressed as

$\vec{P}_r = U_p \vec{b}_p = A_p V_p \Lambda_p^{-1/2} \vec{b}_p = A_p \vec{c}_p$    (6)

where $\vec{c}_p = V_p \Lambda_p^{-1/2} \vec{b}_p$ is an M-dimensional column vector. Therefore, formula (6) can be rewritten as

$\vec{P}_r = A_p \vec{c}_p = \sum_{i=1}^{M} c_{p_i} \vec{P}_i$    (7)

This shows that the reconstructed photo is in fact the optimal linear combination of the M training sample images, the best approximation of the original image in the minimum mean-square-error sense. The coefficients in $\vec{c}_p$ describe the contribution of each sample image. Reconstructed photos generated by this method are shown in the column labeled "reconstructed photo" in Fig. 3.
Replacing each sample photo image $\vec{P}_i$ in formula (7) with its corresponding portrait $\vec{S}_i$, as shown in Fig. 2, we obtain the formula

$\vec{S}_r = A_s \vec{c}_p = \sum_{i=1}^{M} c_{p_i} \vec{S}_i$    (8)

Because of the structural similarity between photos and portraits, the reconstructed portrait $\vec{S}_r$ should be similar to the real portrait. If a sample photo $\vec{P}_i$ contributes much to the reconstruction of a face photo, its corresponding sample portrait $\vec{S}_i$ will contribute much to the reconstructed portrait. As an extreme example, suppose a particular sample photo $\vec{P}_k$ has a unit weight $c_{p_k} = 1$ in the reconstructed photo and all other sample photos have zero weight, i.e. the reconstructed photo is identical to that sample photo; then the reconstructed portrait $\vec{S}_r$ is obtained simply by substituting the corresponding portrait $\vec{S}_k$. Through such substitution, a photo image can be converted into a pseudo-portrait.
In brief, a photo is converted into a portrait by the following steps:
1. First compute the eigenvectors $V_p$ and eigenvalues $\Lambda_p$ of $A_p^T A_p$, in order to compute the eigenvector matrix $U_p$ of the photo training set;
2. Project the input photo onto the feature space $U_p$ and compute its eigenface weight vector $\vec{b}_p$; in addition, $\vec{c}_p$ is obtained by computing $\vec{c}_p = V_p \Lambda_p^{-1/2} \vec{b}_p$;
3. Reconstruct the portrait Sr using $A_s$ and $\vec{c}_p$: once $\vec{c}_p$ is computed, the pseudo-portrait Sr is given by

$\vec{S}_r = A_s \vec{c}_p = A_s V_p \Lambda_p^{-1/2} \vec{b}_p$

4. As mentioned earlier, before computation begins the average photo image $\bar{P}$, generated from the photo training set, is subtracted, and for the portrait training set the average portrait $\bar{S}$ is subtracted; so the input photo image $\vec{Q}$ must first have the average photo $\bar{P}$ subtracted to obtain $\vec{P}$;
5. Finally, the average portrait $\bar{S}$ is added back to obtain the final visible reconstructed portrait.
Fig. 3 shows a comparison of real portraits and reconstructed portraits. The similarity between the two is obvious.
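The photo-to-portrait conversion steps above can be sketched end to end. The snippet below is an illustrative NumPy sketch under our own naming, with random toy data standing in for real photo-portrait pairs; it also exhibits a useful property of the method: a training photo maps back exactly to its own training portrait.

```python
import numpy as np

def photo_to_pseudo_portrait(Q, photos, portraits):
    """Steps 1-5: convert an input photo Q (flattened, length N) into a
    pseudo-portrait using paired training sets (each N x M, columns = samples)."""
    mean_p = photos.mean(axis=1, keepdims=True)        # average photo
    mean_s = portraits.mean(axis=1, keepdims=True)     # average portrait
    Ap, As = photos - mean_p, portraits - mean_s
    lam, V = np.linalg.eigh(Ap.T @ Ap)                 # step 1: V_p, Lambda_p
    keep = lam > 1e-10
    lam, V = lam[keep], V[:, keep]
    Up = Ap @ V @ np.diag(lam ** -0.5)                 # U_p = A_p V_p Lambda_p^{-1/2}
    b = Up.T @ (Q - mean_p.ravel())                    # steps 2 and 4: project centered photo
    c = V @ np.diag(lam ** -0.5) @ b                   # c_p = V_p Lambda_p^{-1/2} b_p
    return As @ c + mean_s.ravel()                     # steps 3 and 5: S_r = A_s c_p + mean

rng = np.random.default_rng(1)
photos = rng.random((60, 8))                           # 8 training photos of 60 pixels
portraits = 0.5 * photos + 0.1 * rng.random((60, 8))   # loosely correlated "portraits"
S_r = photo_to_pseudo_portrait(photos[:, 3], photos, portraits)
# a training photo is converted exactly into its own training portrait
assert np.allclose(S_r, portraits[:, 3], atol=1e-6)
```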
Although the above discussion mainly concerns photo-to-portrait conversion, the opposite conversion can clearly be accomplished the same way. For example, a pseudo-photo can be obtained by the formula $\vec{P}_r = A_p \vec{c}_s = A_p V_s \Lambda_s^{-1/2} \vec{b}_s$.
Portrait recognition
After photos have been converted into portraits, identifying a portrait among a large number of photos becomes easy.
The specific algorithm is summarized as follows:
1. Using the photo-to-portrait conversion algorithm described above, compute a mapped pseudo-portrait $\vec{S}_{r_i}$ for every photo $\vec{P}_{G_i}$ in the photo gallery with $U_p$, where $U_p$ is computed from Ap. The gallery need not be the same as the photo training set Ap, though using the same set can improve accuracy;
2. Compare the query portrait $\vec{S}$ with the pseudo-portraits to identify the best matching pseudo-portrait $\vec{S}_{r_k}$, and thus find the best matching photo in the gallery.
The comparison of pseudo-portraits with the query portrait can be realized with the traditional eigenface method or any other suitable method, for example the elastic graph matching method [4] [7].
Taking a traditional face comparison method as an example, the eigenvectors can first be computed from the portrait training samples. Then the query portrait $\vec{S}$ and the pseudo-portraits $\vec{S}_{r_i}$ generated from the photo set are projected onto the portrait eigenvectors, and the projection coefficients are used as the feature vectors for final classification. The comparison algorithm is summarized as follows:
1. Project the query portrait $\vec{S}$ onto the portrait feature space $U_s$ and compute its weight vector $\vec{b}_s$;
2. Compute the distance between $\vec{b}_s$ and each $\vec{b}_{r_i}$, where $\vec{b}_{r_i}$ is computed for the pseudo-portrait generated from each photo in the gallery; the portrait is identified as the face with the minimum distance between the two vectors.
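The two-step recognition algorithm above (gallery photos to pseudo-portraits, then nearest neighbour in the portrait eigenspace) can be sketched as follows. This is an illustrative NumPy sketch with our own naming and random toy data, not the patent's implementation:

```python
import numpy as np

def eigenspace(A):
    """Feature space U = A V Lambda^{-1/2} from a centered sample matrix A."""
    lam, V = np.linalg.eigh(A.T @ A)
    keep = lam > 1e-10
    lam, V = lam[keep], V[:, keep]
    return A @ V @ np.diag(lam ** -0.5), V, lam

def recognize_portrait(query, gallery, photos, portraits):
    """Return gallery column indices ranked by similarity to the query portrait."""
    mean_p = photos.mean(axis=1, keepdims=True)
    mean_s = portraits.mean(axis=1, keepdims=True)
    Ap, As = photos - mean_p, portraits - mean_s
    Up, Vp, lam_p = eigenspace(Ap)
    Us, _, _ = eigenspace(As)
    # step 1: convert every gallery photo into a (centered) pseudo-portrait
    C = Vp @ np.diag(lam_p ** -0.5) @ (Up.T @ (gallery - mean_p))  # one c_p per column
    pseudo = As @ C
    # step 2: compare weight vectors in the portrait eigenspace
    b_query = Us.T @ (query - mean_s.ravel())
    B = Us.T @ pseudo
    dists = np.linalg.norm(B - b_query[:, None], axis=0)
    return np.argsort(dists)

rng = np.random.default_rng(2)
photos = rng.random((60, 8))
portraits = 0.5 * photos + 0.1 * rng.random((60, 8))
# gallery = the training photos themselves; query = the true portrait of photo 5
ranking = recognize_portrait(portraits[:, 5], photos, photos, portraits)
assert ranking[0] == 5     # the rank-1 match is the correct photo
```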
In the above algorithm, based on the photo feature space Up, the photos in the gallery are first converted into pseudo-portraits and then recognized in the portrait feature space Us. Conversely, based on the portrait feature space, each query portrait can be converted into a pseudo-photo and then recognized in the photo feature space by the eigenface method or any other suitable method.
Both methods use the two sets of reconstruction coefficients $\vec{c}_p$ and $\vec{c}_s$, where $\vec{c}_p$ represents the weights with which the photo training set reconstructs a photo, and $\vec{c}_s$ the weights with which the portrait training set reconstructs a portrait. In fact, to compare a photo and a portrait, their corresponding reconstruction coefficients $\vec{c}_p$ and $\vec{c}_s$ can also be used directly as feature vectors for recognition.
As stated before, for an input photo, the reconstruction coefficient vector over the photo training set is $\vec{c}_p = V_p \Lambda_p^{-1/2} \vec{b}_p$, where $\vec{b}_p$ is the projection weight vector of the photo in the photo feature space. Similarly, for an input portrait, the reconstruction coefficient vector over the portrait training set is $\vec{c}_s = V_s \Lambda_s^{-1/2} \vec{b}_s$, where $\vec{b}_s$ is the projection weight vector of the input portrait in the portrait feature space. If we use $\vec{c}_p$ and $\vec{c}_s$ to compare the photo and the portrait directly, the recognition distance is defined as

$d_1 = \| \vec{c}_p - \vec{c}_s \|$    (11)
If instead we first generate a pseudo-portrait for the photo and compute the distance in the portrait feature space, this distance is $d_2 = \| \vec{b}_r - \vec{b}_s \|$, where $\vec{b}_r$ is the weight vector of the pseudo-portrait projected into the portrait feature space and $\vec{b}_s$ is the weight vector of the real portrait projected into the portrait feature space. Since $U_s = A_s V_s \Lambda_s^{-1/2}$, we compute $\vec{b}_r$ as

$\vec{b}_r = U_s^T \vec{S}_r = \Lambda_s^{-1/2} V_s^T A_s^T A_s \vec{c}_p$

Since $V_s^T (A_s^T A_s) V_s = \Lambda_s$, we obtain

$\vec{b}_r = \Lambda_s^{1/2} V_s^T \vec{c}_p$

Using the formula $\vec{b}_s = \Lambda_s^{1/2} V_s^T \vec{c}_s$, we finally obtain the distance $d_2$ as

$d_2 = \| \Lambda_s^{1/2} V_s^T ( \vec{c}_p - \vec{c}_s ) \|$    (12)
Conversely, if we first generate a pseudo-photo for the portrait and then compute the distance in the photo feature space, this distance $d_3$ is given by

$d_3 = \| \Lambda_p^{1/2} V_p^T ( \vec{c}_p - \vec{c}_s ) \|$    (13)
The recognition distances in the three cases differ; their performance is compared in the tests below.
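The three distances can be computed side by side. The sketch below (our own naming, random toy data) also checks numerically the projection identity behind formula (12), namely that $\Lambda_s^{1/2} V_s^T \vec{c}$ equals the direct projection $U_s^T (A_s \vec{c})$; for an exact training photo-portrait pair, all three distances vanish.

```python
import numpy as np

def coeffs(x, A):
    """Reconstruction coefficients c = V Lambda^{-1/2} b of a centered image x
    over the centered training matrix A; also returns U, V and the eigenvalues."""
    lam, V = np.linalg.eigh(A.T @ A)
    keep = lam > 1e-10
    lam, V = lam[keep], V[:, keep]
    U = A @ V @ np.diag(lam ** -0.5)
    c = V @ np.diag(lam ** -0.5) @ (U.T @ x)
    return c, U, V, lam

rng = np.random.default_rng(3)
photos = rng.random((60, 8))
portraits = 0.5 * photos + 0.1 * rng.random((60, 8))
Ap = photos - photos.mean(axis=1, keepdims=True)
As = portraits - portraits.mean(axis=1, keepdims=True)

cp, Up, Vp, lam_p = coeffs(Ap[:, 4], Ap)   # input photo = training photo 4
cs, Us, Vs, lam_s = coeffs(As[:, 4], As)   # input portrait = its true portrait

d1 = np.linalg.norm(cp - cs)                                     # formula (11)
d2 = np.linalg.norm(np.diag(lam_s ** 0.5) @ Vs.T @ (cp - cs))    # formula (12)
d3 = np.linalg.norm(np.diag(lam_p ** 0.5) @ Vp.T @ (cp - cs))    # formula (13)
# identity behind (12): Lambda_s^{1/2} V_s^T c is the direct projection U_s^T (A_s c)
assert np.allclose(np.diag(lam_s ** 0.5) @ Vs.T @ cp, Us.T @ (As @ cp))
assert max(d1, d2, d3) < 1e-6   # an exact training pair is its own perfect match
```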
For those skilled in the art, this method can also be used to select, from a portrait set, the portrait matching a photo Pk. Here there are two further options:
A. Convert all the portraits in the portrait set into pseudo-photos and then compare with the photo Pk. The comparison is done by comparing bp and br, where bp is the projection coefficient vector of Pk in Up and br is the projection coefficient vector of each generated pseudo-photo in Up. The distance formula can now be rewritten as $d_4 = \| \Lambda_p^{1/2} V_p^T ( \vec{c}_p - \vec{c}_s ) \|$;
B. Convert the photo Pk into a pseudo-portrait Sk and then compare with the portraits in the portrait gallery. The comparison is done by comparing bs and br, where bs is the projection coefficient vector of each portrait in the gallery in Us and br is the projection coefficient vector of the pseudo-portrait Sk in Us. The distance formula can now be written as $d_5 = \| \Lambda_s^{1/2} V_s^T ( \vec{c}_s - \vec{c}_p ) \|$.
Verification
To prove the validity of the new algorithm, we ran a group of experiments comparing it with the traditional geometric feature method and the eigenface method. We built a database of 188 photos and corresponding portraits from 188 different people, of which 88 photo-portrait pairs were used as training data and the other 100 pairs for testing.
The experiments adopt the recognition protocol of FERET [6]. The photo gallery used for testing consists of 100 face photos, and the query set consists of 100 face portraits. The cumulative match rate is used to evaluate the results: it measures the percentage of queries whose correct answer is among the first n matches, where n is called the rank.
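The cumulative match rate can be computed directly from per-query rankings. A small illustrative helper (our own, not from the patent):

```python
import numpy as np

def cumulative_match_rate(rankings, true_ids, max_rank=10):
    """For each rank n = 1..max_rank, the fraction of queries whose correct
    answer appears among the first n matches of its ranking."""
    hits = np.zeros(max_rank)
    for ranked, truth in zip(rankings, true_ids):
        pos = list(ranked).index(truth)      # 0-based position of the right answer
        if pos < max_rank:
            hits[pos:] += 1                  # a hit at rank n counts for all ranks >= n
    return hits / len(true_ids)

# three queries over a 3-item gallery: correct item found at ranks 1, 2 and 3
rates = cumulative_match_rate([[0, 1, 2], [1, 0, 2], [1, 2, 0]], [0, 0, 0], max_rank=3)
assert np.allclose(rates, [1/3, 2/3, 1.0])
```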
A. Comparison with traditional methods
Table 1 shows the first ten cumulative match rates obtained with the three methods.
Table 1. Cumulative match rates (%) of the three methods

  Rank                  1    2    3    4    5    6    7    8    9   10
  Geometric method     30   37   45   48   53   59   62   66   67   70
  Eigenface method     31   43   48   55   61   63   65   65   67   67
  Portrait transform   71   78   81   84   88   90   94   94   95   96
The experimental results of the geometric method and the eigenface method are unsatisfactory: the rank-1 match rate is only about 30%, and the rank-10 cumulative match rate is 70%. Given the great difference between photos and portraits, the poor results of the eigenface method are to be expected. From the results of the geometric feature method we can conclude that a photo resembles its portrait not merely because of the geometric similarity of the face. As with caricature, a portrait usually exaggerates the dimensions of the face: if someone's nose is larger than average, the artist draws it even larger; conversely, if it is smaller than normal, it is shrunk further, achieving exaggeration.
The eigen-portrait transformation method improves recognition accuracy greatly: the rank-10 cumulative match rate reaches 96%, and the rank-1 accuracy is more than double that of the other two methods, clearly demonstrating the superiority of the new method. The result also depends on the quality of the portraits; portraits from the hand of an equally skilled artist can further improve accuracy. As shown in Fig. 1, not every portrait closely resembles its photo: the portraits in the first row of Fig. 1 are very like their corresponding photos, but those in the second row differ considerably. The significance of this result is that it shows the new method to be far better than traditional face recognition methods.
B. Comparison of the three distance measures
In this part we compare, with a group of experiments, the three distance measures d1, d2 and d3 described earlier, using the same data set as above. The experimental results are shown in Table 2.
Table 2. Cumulative match rates (%) obtained with the three different distances

  Rank    1    2    3    4    5    6    7    8    9   10
  d1     20   49   59   65   69   73   75   76   81   82
  d2     71   78   81   84   88   90   94   94   95   96
  d3     57   70   77   79   83   84   85   86   87   88
The test results show that d1 performs worst of the three distances. This is no surprise: $\vec{c}_p$ and $\vec{c}_s$ represent coefficients in the non-orthogonal spaces spanned by the training photos and portraits respectively, so they cannot correctly reflect the distance between face images. d2 and d3 are distances computed in orthogonal feature spaces and therefore give better results. An interesting result is that d2 is consistently better than d3. It appears that the portrait feature space distinguishes different faces better than the photo feature space does. This may be because in drawing a portrait the artist tends to catch and emphasize the distinctive features of the face, making it easier to distinguish. The above test seems to confirm this, since mapping the coefficient difference into the portrait feature space gives better recognition results than mapping it into the photo feature space.
The better result of d2 has another possible explanation. To compute d2, the photo is converted into a pseudo-portrait, while to compute d3, the portrait must be converted into a pseudo-photo. In general, compressing information is more stable than amplifying it. Since a photo contains richer information than a portrait, converting a photo into a portrait is easier. As an extreme example, suppose a portrait contains only a simple outline of the facial features: it is easy to draw this outline from a face photo, but very hard to reconstruct the photo from simple lines. Therefore the computation of d2 gives better results because a photo can be converted into a portrait more stably.
C. Comparison with human-eye recognition
Two experiments below compare the new method of the invention with the human eye's capability for portrait recognition. This comparison matters because in police and judicial work a suspect's portrait is normally disseminated widely in the mass media, in the hope that people will recognize the real person after seeing the portrait. If the computer's automatic recognition capability can be shown to equal human portrait recognition, we can use computers to conduct systematic large-scale retrieval with portraits in large photo databases.
In the first experiment, a subject looks at a portrait for a period of time; the portrait is then taken away before the subject starts viewing photos. The subject memorizes the portrait as well as possible and searches the photo database without it, selecting from all the photos the 10 most similar to the portrait and ranking them by similarity. This procedure is close to reality, where people see a suspect's portrait only briefly on television or in a newspaper and must then find a similar person in real life from memory.
In the second experiment, the subject is allowed to see the portrait while searching the photo gallery; this result serves as a benchmark for comparison with the automatic recognition system. The results of the two tests are shown in Fig. 4. The human recognition result in the first experiment is much lower than the computer's. This is not only because photos and portraits differ, but also because the portrait is very hard to remember accurately, causing memory distortion. In fact, people easily distinguish familiar faces, such as relatives or famous public figures, but not strangers; without the portrait and photo side by side, people find the two hard to match.
When subjects were allowed to consult the portrait while searching the database, their accuracy rose to 73%, similar to the recognition rate of the computer. However, human recognition ability does not improve with rank, whereas the computer's cumulative match rate at rank ten rises to 96%. This shows that the portrait recognition ability of the computer is at least comparable to that of humans. Therefore, we can now search a large database automatically with a portrait just as with a photo. Applying this method in judicial departments is extremely important in situations where no photo can be obtained.
The present invention uses photo-to-portrait and portrait-to-photo transformation to propose a new face portrait recognition algorithm. Converting photos into portraits makes the automatic matching of photos and portraits more effective. Besides improving recognition speed and efficiency, the recognition ability of the new method even surpasses that of the human eye.
Although the discussion above concentrates only on photo-to-portrait and portrait-to-photo recognition of human faces, a person skilled in the art will readily see that the present invention can also be used for recognizing other kinds of objects, such as buildings. Although it is mainly intended for legal departments, use in other fields is also possible.
In portrait recognition, using hair information can sometimes improve the recognition rate; however, because hair is highly variable, it should not be used in many situations. Whether to use hair information can be decided according to actual conditions.
The uses of the present invention have been set forth in detail by example. Clearly, those skilled in the art may make modifications and adaptations to the present invention. It should be noted that such modifications and adaptations also fall within the scope of the present invention, as defined in the claims below. Moreover, the uses of the present invention should not be limited only to the examples or illustrations given herein.

Claims (42)

1. A portrait-photo conversion method that uses a photo set A_p and its corresponding portrait set A_s to generate a pseudo-portrait S_r for a photo P_k, characterized in that A_p and A_s each contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo feature space U_p is calculated from A_p, the method comprising the steps of:
a) projecting P_k onto U_p to calculate the projection coefficients b_p, so that P_k = U_p b_p;
b) using A_s and b_p to generate S_r.
2. The portrait-photo conversion method of claim 1, further comprising the steps of:
a) calculating c_p = V_p Λ_p^(-1/2) b_p = [c_p1, c_p2, ..., c_pM]^T,
where c_pi is the weighting coefficient of each photo P_i for image reconstruction, so that P_k is reconstructed as P_k = A_p c_p = Σ_{i=1}^{M} c_pi P_i;
V_p is the unit eigenvector matrix of A_p^T A_p;
Λ_p is the eigenvalue matrix of A_p^T A_p;
b) obtaining S_r through the formula S_r = A_s V_p Λ_p^(-1/2) b_p = A_s c_p = Σ_{i=1}^{M} c_pi S_i.
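The transform of claims 1 and 2 can be sketched numerically. The following is an illustrative, non-authoritative NumPy sketch (the function name, array shapes, and the rank cutoff are assumptions, not part of the claims): it eigendecomposes the small M x M matrix A_p^T A_p to obtain V_p and Λ_p, forms the feature space U_p = A_p V_p Λ_p^(-1/2), projects a photo to b_p, converts to blend weights c_p = V_p Λ_p^(-1/2) b_p, and blends the training portraits as S_r = A_s c_p.

```python
import numpy as np

def photo_to_pseudo_portrait(A_p, A_s, P_k):
    """A_p, A_s: (d, M) mean-subtracted photo/portrait matrices
    (columns are training samples); P_k: (d,) mean-subtracted photo."""
    lam, V = np.linalg.eigh(A_p.T @ A_p)   # eigenvalues Λ_p, eigenvectors V_p
    keep = lam > 1e-10                     # drop numerically null modes
    lam, V = lam[keep], V[:, keep]
    U_p = A_p @ V / np.sqrt(lam)           # feature space U_p = A_p V_p Λ_p^(-1/2)
    b_p = U_p.T @ P_k                      # projection coefficients: P_k ≈ U_p b_p
    c_p = V @ (b_p / np.sqrt(lam))         # blend weights c_p = V_p Λ_p^(-1/2) b_p
    return A_s @ c_p                       # pseudo-portrait S_r = A_s c_p

# Sanity check: a training photo maps exactly onto its paired portrait.
rng = np.random.default_rng(0)
A_p = rng.standard_normal((50, 5))
A_s = rng.standard_normal((50, 5))
S_r = photo_to_pseudo_portrait(A_p, A_s, A_p[:, 0])
assert np.allclose(S_r, A_s[:, 0])
```

Running the same function with A_p and A_s exchanged gives the portrait-to-photo direction of claims 7 and 8.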
3. The portrait-photo conversion method of claim 1, wherein M ≥ 80.
4. The portrait-photo conversion method of claim 1, wherein all portraits in A_s are prepared by the same artist.
5. The portrait-photo conversion method of claim 1, wherein
P_i = Q_i - m_p, where Q_i is the original photo of P_i and m_p is the mean photo;
S_i = T_i - m_s, where T_i is the original portrait of S_i and m_s is the mean portrait.
6. The portrait-photo conversion method of claim 5, further comprising the step of generating a viewable pseudo-portrait T_r: T_r = S_r + m_s.
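Claims 5 and 6 describe the mean-centering convention. As a brief hedged illustration (the array shapes and the interpretation of m_p, m_s as training-set means are assumptions not stated explicitly in the visible claim text), the training matrices and the viewable output would be prepared as:

```python
import numpy as np

# Hypothetical data: Q holds original photos Q_i, T original portraits T_i,
# one image per column.
rng = np.random.default_rng(1)
Q = rng.random((30, 4))
T = rng.random((30, 4))

m_p = Q.mean(axis=1, keepdims=True)   # mean photo m_p (assumed definition)
m_s = T.mean(axis=1, keepdims=True)   # mean portrait m_s (assumed definition)
A_p = Q - m_p                         # columns P_i = Q_i - m_p
A_s = T - m_s                         # columns S_i = T_i - m_s

# After a pseudo-portrait S_r is generated (claim 2), claim 6 restores the
# viewable image by adding the mean back: T_r = S_r + m_s.
S_r = A_s[:, 0]                       # stand-in for a generated pseudo-portrait
T_r = S_r + m_s[:, 0]
assert np.allclose(T_r, T[:, 0])
```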
7. A portrait-photo conversion method that uses a portrait set A_s and the corresponding photo set A_p to generate a pseudo-photo P_r for a portrait S_k, characterized in that A_s and A_p each contain M samples S_i and P_i, i.e. A_s = [S_1, S_2, ..., S_M] and A_p = [P_1, P_2, ..., P_M], and the portrait feature space U_s is calculated from A_s, the method comprising the steps of:
a) projecting S_k onto U_s to calculate the projection coefficients b_s, so that S_k = U_s b_s;
b) using A_p and b_s to generate P_r.
8. The portrait-photo conversion method of claim 7, wherein the method comprises the steps of:
a) calculating c_s = V_s Λ_s^(-1/2) b_s = [c_s1, c_s2, ..., c_sM]^T, where c_si is the weighting coefficient of each portrait S_i for reconstructing S_k, so that S_k = A_s c_s = Σ_{i=1}^{M} c_si S_i;
V_s is the unit eigenvector matrix of A_s^T A_s;
Λ_s is the eigenvalue matrix of A_s^T A_s;
b) generating P_r through the formula P_r = A_p V_s Λ_s^(-1/2) b_s = A_p c_s = Σ_{i=1}^{M} c_si P_i.
9. The portrait-photo conversion method of claim 7, wherein M ≥ 80.
10. The portrait-photo conversion method of claim 7, wherein all portraits in A_s are prepared by the same artist.
11. The portrait-photo conversion method of claim 7, wherein
P_i = Q_i - m_p, where Q_i is the original photo of P_i and m_p is the mean photo;
S_i = T_i - m_s, where T_i is the original portrait of S_i and m_s is the mean portrait.
12. The portrait-photo conversion method of claim 11, further comprising the step of generating a viewable pseudo-photo Q_r: Q_r = P_r + m_p.
13. A portrait-photo recognition method that uses a photo set A_p and the corresponding portrait set A_s to find, in a photo library, the photo P_k that best matches a portrait S_k, characterized in that each photo in the photo library is denoted P_Gi; A_p and A_s each contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M]; and the photo feature space U_p and the portrait feature space U_s are calculated from A_p and A_s respectively, the method comprising the steps of:
- generating a pseudo-portrait S_r for each photo P_Gi in the photo library, by
a) projecting P_Gi onto U_p to calculate the projection coefficients b_p, so that P_Gi = U_p b_p;
b) using A_s and b_p to generate S_r;
- comparing the pseudo-portraits S_r with S_k to identify the best-matching pseudo-portrait S_RK; the photo corresponding to S_RK in the photo library is the sought photo P_k.
14. The portrait-photo recognition method of claim 13, wherein M ≥ 80.
15. The portrait-photo recognition method of claim 13, wherein all portraits in A_s are prepared by the same artist.
16. The portrait-photo recognition method of claim 13, wherein P_i = Q_i - m_p, where Q_i is the original photo of P_i; S_i = T_i - m_s, where T_i is the original portrait of S_i; and P_Gi = Q_Gi - m_p, where Q_Gi is the original photo of P_Gi, m_p and m_s being the mean photo and the mean portrait respectively.
17. The portrait-photo recognition method of claim 13, wherein the best-matching pseudo-portrait S_RK is identified by:
- for each pseudo-portrait S_r, projecting S_r onto U_s to calculate the corresponding projection coefficients b_r, so that S_r = U_s b_r;
- projecting S_k onto U_s to calculate the corresponding projection coefficients b_s, so that S_k = U_s b_s;
- finding the pseudo-portrait S_RK whose projection coefficients b_r differ least from b_s; the photo corresponding to S_RK in the photo library is then the photo P_k that best matches S_k.
18. The portrait-photo recognition method of claim 13, wherein the pseudo-portrait S_r is generated from each photo P_Gi in the photo library by the steps of:
a) calculating c_p = V_p Λ_p^(-1/2) b_p = [c_p1, c_p2, ..., c_pM]^T, so that P_Gi = A_p c_p = Σ_{i=1}^{M} c_pi P_i, where
V_p is the unit eigenvector matrix of A_p^T A_p;
Λ_p is the eigenvalue matrix of A_p^T A_p;
b) obtaining S_r through S_r = A_s V_p Λ_p^(-1/2) b_p = A_s c_p = Σ_{i=1}^{M} c_pi S_i.
19. The portrait-photo recognition method of claim 18, wherein the best-matching pseudo-portrait S_RK is identified by:
- projecting S_k onto U_s to calculate the projection coefficients b_s, so that S_k = U_s b_s;
- calculating c_s = V_s Λ_s^(-1/2) b_s = [c_s1, c_s2, ..., c_sM]^T, so that S_k = A_s c_s = Σ_{i=1}^{M} c_si S_i, where
V_s is the unit eigenvector matrix of A_s^T A_s;
Λ_s is the eigenvalue matrix of A_s^T A_s;
- finding the pseudo-portrait S_RK having the smallest value of d_2 with respect to S_k, thereby identifying the best-matching P_k, through the formula
d_2 = || Λ_s^(1/2) V_s^T (c_p - c_s) ||.
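The search of claims 18 and 19 can likewise be sketched. In the non-authoritative NumPy sketch below (function names, shapes, and the rank cutoff are assumptions), each gallery photo is reduced to blend weights c_p, the query portrait to weights c_s, and the match minimizes d_2 = ||Λ_s^(1/2) V_s^T (c_p - c_s)||:

```python
import numpy as np

def blend_weights(A, x):
    """Weights c with x ≈ A c, via the M x M eigenproblem of A^T A."""
    lam, V = np.linalg.eigh(A.T @ A)
    keep = lam > 1e-10
    lam, V = lam[keep], V[:, keep]
    b = (A @ V / np.sqrt(lam)).T @ x      # b = U^T x
    return V @ (b / np.sqrt(lam))         # c = V Λ^(-1/2) b

def best_photo_for_portrait(A_p, A_s, S_k, photo_gallery):
    """Return the gallery index minimizing d_2 = ||Λ_s^(1/2) V_s^T (c_p - c_s)||."""
    lam_s, V_s = np.linalg.eigh(A_s.T @ A_s)
    keep = lam_s > 1e-10
    lam_s, V_s = lam_s[keep], V_s[:, keep]
    c_s = blend_weights(A_s, S_k)         # weights of the query portrait
    d2 = [np.linalg.norm(np.sqrt(lam_s) * (V_s.T @ (blend_weights(A_p, P_G) - c_s)))
          for P_G in photo_gallery.T]     # one pseudo-portrait per gallery photo
    return int(np.argmin(d2))

# Toy gallery: the training photos themselves; querying with training
# portrait i should retrieve gallery photo i.
rng = np.random.default_rng(2)
A_p = rng.standard_normal((60, 6))
A_s = rng.standard_normal((60, 6))
assert best_photo_for_portrait(A_p, A_s, A_s[:, 3], A_p) == 3
```

Computing the distance in coefficient space rather than pixel space is what makes the search over a large library cheap: each image is summarized by an M-dimensional weight vector.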
20. A portrait-photo recognition method that uses a photo set A_p and the corresponding portrait set A_s to find, in a photo library, the photo P_k that best matches a portrait S_k, characterized in that each photo in the photo library is denoted P_Gi; A_p and A_s each contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M]; and the photo feature space U_p and the portrait feature space U_s are calculated from A_p and A_s respectively, the method comprising the steps of:
- generating a pseudo-photo P_r for S_k, by
a) projecting S_k onto U_s to calculate the projection coefficients b_s, so that S_k = U_s b_s;
b) using A_p and b_s to generate P_r;
- comparing the pseudo-photo P_r with the photos in the photo library to identify the best-matching photo P_k.
21. The portrait-photo recognition method of claim 20, wherein M ≥ 80.
22. The portrait-photo recognition method of claim 20, wherein all portraits in A_s are prepared by the same artist.
23. The portrait-photo recognition method of claim 20, wherein P_i = Q_i - m_p, where Q_i is the original photo of P_i; S_i = T_i - m_s, where T_i is the original portrait of S_i; and P_Gi = Q_Gi - m_p, where Q_Gi is the original photo of P_Gi, m_p and m_s being the mean photo and the mean portrait respectively.
24. The portrait-photo recognition method of claim 20, wherein the best-matching photo P_k is identified by:
a) for each photo P_Gi in the photo library, projecting P_Gi onto U_p to calculate the corresponding projection coefficients b_p, so that P_Gi = U_p b_p;
b) projecting the pseudo-photo P_r onto U_p to calculate the corresponding projection coefficients b_r, so that P_r = U_p b_r;
c) the best-matching P_k is the photo whose coefficients b_p differ least from b_r.
25. The portrait-photo recognition method of claim 24, wherein the method further comprises the steps of:
a) calculating c_s = V_s Λ_s^(-1/2) b_r = [c_s1, c_s2, ..., c_sM]^T
to reconstruct P_r, P_r = A_p c_s = Σ_{i=1}^{M} c_si P_i, where
V_s is the unit eigenvector matrix of A_s^T A_s;
Λ_s is the eigenvalue matrix of A_s^T A_s;
b) for each photo P_Gi in the photo library, calculating
c_p = V_p Λ_p^(-1/2) b_p = [c_p1, c_p2, ..., c_pM]^T
to reconstruct P_Gi, P_Gi = A_p c_p = Σ_{i=1}^{M} c_pi P_i, where
V_p is the unit eigenvector matrix of A_p^T A_p;
Λ_p is the eigenvalue matrix of A_p^T A_p;
- identifying the best-matching P_k as the one with the smallest value of d_3, through the formula
d_3 = || Λ_p^(1/2) V_p^T (c_p - c_s) ||.
26. A portrait-photo recognition method that uses a photo set A_p and the corresponding portrait set A_s to find, in a portrait library, the portrait S_k that best matches a photo P_k, characterized in that each portrait in the portrait library is denoted S_Gi; A_p and A_s each contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M]; and the photo feature space U_p and the portrait feature space U_s are calculated from A_p and A_s respectively, the method comprising the steps of:
- generating a pseudo-photo P_r for each portrait S_Gi in the portrait library, by
a) projecting S_Gi onto U_s to calculate the projection coefficients b_s, so that S_Gi = U_s b_s;
b) using A_p and b_s to generate P_r;
- comparing the pseudo-photos P_r with P_k to identify the best-matching pseudo-photo P_RK; the portrait corresponding to P_RK in the portrait library is the sought portrait S_k.
27. The portrait-photo recognition method of claim 26, wherein M ≥ 80.
28. The portrait-photo recognition method of claim 26, wherein all portraits in A_s are prepared by the same artist.
29. The portrait-photo recognition method of claim 26, wherein
P_i = Q_i - m_p, where Q_i is the original photo of P_i;
S_i = T_i - m_s, where T_i is the original portrait of S_i;
S_Gi = T_Gi - m_s, where T_Gi is the original portrait of S_Gi, m_p and m_s being the mean photo and the mean portrait respectively.
30. The portrait-photo recognition method of claim 26, wherein the best-matching pseudo-photo P_RK is identified by:
- for each pseudo-photo P_r, projecting P_r onto U_p to calculate the corresponding projection coefficients b_r, so that P_r = U_p b_r;
- projecting P_k onto U_p to calculate the corresponding projection coefficients b_p, so that P_k = U_p b_p;
- finding the pseudo-photo P_RK whose projection coefficients b_r differ least from b_p; the portrait corresponding to P_RK in the portrait library is then the portrait S_k that best matches P_k.
31. The portrait-photo recognition method of claim 26, wherein generating the pseudo-photo P_r for each portrait S_Gi in the portrait library comprises the steps of:
a) calculating c_s = V_s Λ_s^(-1/2) b_s = [c_s1, c_s2, ..., c_sM]^T
to reconstruct S_Gi, S_Gi = A_s c_s = Σ_{i=1}^{M} c_si S_i, where
V_s is the unit eigenvector matrix of A_s^T A_s;
Λ_s is the eigenvalue matrix of A_s^T A_s;
b) generating the pseudo-photo P_r through P_r = A_p V_s Λ_s^(-1/2) b_s = A_p c_s = Σ_{i=1}^{M} c_si P_i.
32. The portrait-photo recognition method of claim 31, wherein the best-matching pseudo-photo P_RK is identified by:
- projecting P_k onto U_p to calculate the projection coefficients b_p, so that P_k = U_p b_p;
- calculating c_p = V_p Λ_p^(-1/2) b_p = [c_p1, c_p2, ..., c_pM]^T, so that P_k = A_p c_p = Σ_{i=1}^{M} c_pi P_i, where
V_p is the unit eigenvector matrix of A_p^T A_p;
Λ_p is the eigenvalue matrix of A_p^T A_p;
- identifying the best-matching S_k as the one whose pseudo-photo P_r has the smallest value of d_4, through the formula
d_4 = || Λ_p^(1/2) V_p^T (c_p - c_s) ||.
33. A portrait-photo recognition method that uses a photo set A_p and the corresponding portrait set A_s to find, in a portrait library, the portrait S_k that best matches a photo P_k, characterized in that each portrait in the portrait library is denoted S_Gi; A_p and A_s each contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M]; and the photo feature space U_p and the portrait feature space U_s are calculated from A_p and A_s respectively, the method comprising the steps of:
- generating a pseudo-portrait S_r for P_k, by
a) projecting P_k onto U_p to calculate the projection coefficients b_p, so that P_k = U_p b_p;
b) using A_s and b_p to generate S_r;
- comparing the pseudo-portrait S_r with the portraits in the portrait library to identify the best-matching portrait S_k.
34. The portrait-photo recognition method of claim 33, wherein M ≥ 80.
35. The portrait-photo recognition method of claim 33, wherein all portraits in A_s are prepared by the same artist.
36. The portrait-photo recognition method of claim 33, wherein
P_i = Q_i - m_p, where Q_i is the original photo of P_i;
S_i = T_i - m_s, where T_i is the original portrait of S_i;
S_Gi = T_Gi - m_s, where T_Gi is the original portrait of S_Gi, m_p and m_s being the mean photo and the mean portrait respectively.
37. The portrait-photo recognition method of claim 33, wherein the best-matching portrait S_k is identified by:
- for each portrait S_Gi, projecting S_Gi onto U_s to calculate the corresponding projection coefficients b_s, so that S_Gi = U_s b_s;
- projecting the pseudo-portrait S_r onto U_s to calculate the corresponding projection coefficients b_r, so that S_r = U_s b_r;
- the best-matching S_k is the portrait whose coefficients b_s differ least from b_r.
38. The portrait-photo recognition method of claim 37, wherein the method further comprises the steps of:
a) calculating c_p = V_p Λ_p^(-1/2) b_p = [c_p1, c_p2, ..., c_pM]^T
to reconstruct S_r, S_r = A_s c_p = Σ_{i=1}^{M} c_pi S_i, where
V_p is the unit eigenvector matrix of A_p^T A_p;
Λ_p is the eigenvalue matrix of A_p^T A_p;
b) for each portrait S_Gi in the portrait library, calculating c_s = V_s Λ_s^(-1/2) b_s = [c_s1, c_s2, ..., c_sM]^T to reconstruct S_Gi, S_Gi = A_s c_s = Σ_{i=1}^{M} c_si S_i, where
V_s is the unit eigenvector matrix of A_s^T A_s;
Λ_s is the eigenvalue matrix of A_s^T A_s;
- identifying the best-matching S_k as the one with the smallest value of d_5, through the formula
d_5 = || Λ_s^(1/2) V_s^T (c_s - c_p) ||.
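The mirror-image search of claims 33 through 38 differs only in the direction of the transform. In the hedged NumPy sketch below (function names and shapes are assumptions), the query photo P_k is reduced to weights c_p, each library portrait S_Gi to weights c_s, and d_5 = ||Λ_s^(1/2) V_s^T (c_s - c_p)|| is minimized:

```python
import numpy as np

def weights(A, x):
    """Weights c with x ≈ A c (same eigen-construction as in claim 2)."""
    lam, V = np.linalg.eigh(A.T @ A)
    keep = lam > 1e-10
    lam, V = lam[keep], V[:, keep]
    b = (A @ V / np.sqrt(lam)).T @ x      # b = U^T x
    return V @ (b / np.sqrt(lam))         # c = V Λ^(-1/2) b

def best_portrait_for_photo(A_p, A_s, P_k, portrait_gallery):
    """Return the portrait-library index minimizing
    d_5 = ||Λ_s^(1/2) V_s^T (c_s - c_p)||."""
    lam_s, V_s = np.linalg.eigh(A_s.T @ A_s)
    keep = lam_s > 1e-10
    lam_s, V_s = lam_s[keep], V_s[:, keep]
    c_p = weights(A_p, P_k)               # weights of the query photo
    d5 = [np.linalg.norm(np.sqrt(lam_s) * (V_s.T @ (weights(A_s, S_G) - c_p)))
          for S_G in portrait_gallery.T]  # one pseudo-photo per library portrait
    return int(np.argmin(d5))

# Toy check: querying with training photo i retrieves training portrait i.
rng = np.random.default_rng(3)
A_p = rng.standard_normal((40, 5))
A_s = rng.standard_normal((40, 5))
assert best_portrait_for_photo(A_p, A_s, A_p[:, 2], A_s) == 2
```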
39. A portrait-photo conversion system that uses a photo set A_p and the corresponding portrait set A_s to generate a pseudo-portrait S_r for a photo P_k, wherein A_p and A_s each contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M], and the photo feature space U_p is calculated from A_p, the system using the algorithm set forth in claim 1.
40. A portrait-photo conversion computer system that uses a portrait set A_s and the corresponding photo set A_p to generate a pseudo-photo P_r for a portrait S_k, wherein A_s and A_p each contain M samples S_i and P_i, i.e. A_s = [S_1, S_2, ..., S_M] and A_p = [P_1, P_2, ..., P_M], and the portrait feature space U_s is calculated from A_s, the system using the algorithm set forth in claim 7.
41. A portrait-photo recognition computer system that uses a photo set A_p and the corresponding portrait set A_s to find, in a photo library containing a large number of photos, the photo P_k that best matches a portrait S_k, wherein each photo in the photo library is denoted P_Gi; A_p and A_s each contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M]; and the photo feature space U_p and the portrait feature space U_s are calculated from A_p and A_s respectively, the system using the algorithm set forth in claim 13 or 20.
42. A portrait-photo recognition computer system that uses a photo set A_p and the corresponding portrait set A_s to find, in a portrait library containing a large number of portraits, the portrait S_k that best matches a photo P_k, wherein each portrait in the portrait library is denoted S_Gi; A_p and A_s each contain M samples P_i and S_i, i.e. A_p = [P_1, P_2, ..., P_M] and A_s = [S_1, S_2, ..., S_M]; and the photo feature space U_p and the portrait feature space U_s are calculated from A_p and A_s respectively, the system using the algorithm set forth in claim 26 or 33.
CNB038252570A 2002-09-19 2003-09-19 Portrait-photo recognition Expired - Lifetime CN1327386C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
HK02106852.2 2002-09-19
HK02106852A HK1052831A2 (en) 2002-09-19 2002-09-19 Sketch-photo recognition

Publications (2)

Publication Number Publication Date
CN1701339A CN1701339A (en) 2005-11-23
CN1327386C true CN1327386C (en) 2007-07-18

Family

ID=30130369

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB038252570A Expired - Lifetime CN1327386C (en) 2002-09-19 2003-09-19 Portrait-photo recognition

Country Status (4)

Country Link
CN (1) CN1327386C (en)
AU (1) AU2003271508A1 (en)
HK (1) HK1052831A2 (en)
WO (1) WO2004027692A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159064B (en) * 2007-11-29 2010-09-01 腾讯科技(深圳)有限公司 Image generation system and method for generating image
CN103034849B (en) * 2012-12-19 2016-01-13 香港应用科技研究院有限公司 Estimate for the perception variance level of cartographical sketching in sketch mates with photo
WO2015143580A1 (en) 2014-03-28 2015-10-01 Huawei Technologies Co., Ltd Method and system for verifying facial data
CN107004136B (en) * 2014-08-20 2018-04-17 北京市商汤科技开发有限公司 Method and system for the face key point for estimating facial image
US10866984B2 (en) 2015-08-03 2020-12-15 Orand S.A. Sketch-based image searching system using cell-orientation histograms and outline extraction based on medium-level features
CN106412590B (en) * 2016-11-21 2019-05-14 西安电子科技大学 A kind of image processing method and device
CN108805951B (en) * 2018-05-30 2022-07-19 重庆辉烨物联科技有限公司 Projection image processing method, device, terminal and storage medium
KR20210025020A (en) * 2018-07-02 2021-03-08 스토워스 인스티튜트 포 메디컬 리서치 Face image recognition using pseudo images

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1184542A (en) * 1995-03-20 1998-06-10 Lau技术公司 System and method for identifying images
US5835616A (en) * 1994-02-18 1998-11-10 University Of Central Florida Face detection using templates
WO2000033240A1 (en) * 1998-12-02 2000-06-08 The Victoria University Of Manchester Face sub-space determination


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Calibration and Feature Extraction of Face Images; Jin Zhong, Li Shijin, Yang Jingyu; Mini-Micro Systems, 2000 *

Also Published As

Publication number Publication date
WO2004027692A1 (en) 2004-04-01
CN1701339A (en) 2005-11-23
HK1052831A2 (en) 2003-09-05
AU2003271508A1 (en) 2004-04-08

Similar Documents

Publication Publication Date Title
JP6831769B2 (en) Image search device, image search method, and setting screen used for it
Jian et al. Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition
Burl et al. Recognition of planar object classes
Zhu et al. Fusing spatiotemporal features and joints for 3d action recognition
US6807286B1 (en) Object recognition using binary image quantization and hough kernels
US8024343B2 (en) Identifying unique objects in multiple image collections
US20070239778A1 (en) Forming connections between image collections
CN101189621A (en) Using photographer identity to classify images
EP0555380A4 (en) A face recognition system
CN103294989A (en) Method for discriminating between a real face and a two-dimensional image of the face in a biometric detection process
CN1327386C (en) Portrait-photo recognition
US7106903B2 (en) Dynamic partial function in measurement of similarity of objects
JP4539519B2 (en) Stereo model generation apparatus and stereo model generation method
CN106204615A (en) Salient target detection method based on central rectangular composition prior
Veinidis et al. Unsupervised human action retrieval using salient points in 3D mesh sequences
Lin et al. Image set-based face recognition using pose estimation with facial landmarks
JP3729581B2 (en) Pattern recognition / collation device
Ismail et al. Understanding indoor scene: Spatial layout estimation, scene classification, and object detection
Islam et al. Single and two-person (s) pose estimation based on R-WAA
Wannous et al. Place recognition via 3d modeling for personal activity lifelog using wearable camera
Kawanishi et al. Which content in a booklet is he/she reading? Reading content estimation using an indoor surveillance camera
Cootes et al. Flexible 3D models from uncalibrated cameras
CN116188804B (en) Twin network target search system based on transformer
Nolan Organizational response and information technology
Lan et al. Social image aesthetic measurement based on 3D reconstruction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term
CX01 Expiry of patent term

Granted publication date: 20070718