CN1137662C - Main unit component analysis based multimode human face identification method - Google Patents
- Publication number: CN1137662C (application numbers CNB011365773A / CN01136577A)
- Authority
- CN
- China
- Prior art keywords
- face
- eyes
- feature
- nose
- eyebrow
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Collating Specific Patterns (AREA)
- Image Processing (AREA)
Abstract
The present invention belongs to the technical fields of image processing, computer vision and pattern recognition. The method comprises: locating the face in an image; extracting five kinds of face components — naked face, eyebrows+eyes, eyes, nose tip and mouth — from the whole face; extracting these five components from every face image in the training set, every known face image and every face image to be recognized; forming the five kinds of eigen-components with the eigenface method of principal component analysis, and computing the projection feature values of the face components of the known faces; building a database containing the projection feature values of the known faces' components, the compressed images of the known faces and the personal identity files of the known persons, and computing the projection feature values of the components of the face to be recognized; then performing multimode whole-face recognition and partial-face recognition of the face to be recognized against the known-face database by computing similarities and sorting by similarity. The present invention achieves a high recognition rate.
Description
Technical field
The invention belongs to the technical fields of image processing, computer vision and pattern recognition, and in particular relates to face recognition methods.
Face recognition involves many disciplines, including image processing, computer vision, pattern recognition and neural networks, and is also closely related to research results in neurophysiology and neurobiology on the structure of the human brain. The difficulties of face recognition are:
(1) plastic deformation of the face caused by expression;
(2) variability of the face caused by pose;
(3) changes of the face caused by age;
(4) multiplicity of face patterns caused by factors such as hair style, beard, glasses and makeup;
(5) diversity of face images caused by factors such as the angle and intensity of illumination and sensor characteristics.
These many factors make face recognition a thorny and challenging problem, and it has become a focus of scientific research in recent years.
Existing face recognition methods all recognize the whole face. Among the many recognition methods, the main approaches are principal component analysis (PCA — Principal Component Analysis), elastic matching, neural networks and geometric features; for example, the PCA method currently in use computes eigenfaces for the whole face and the projection feature values of the whole face. Because of the five difficulties above, whole-face recognition can hardly reach a high recognition rate.
Summary of the invention
The object of the invention is to overcome the shortcomings of the prior art by proposing a multi-mode face recognition method based on component principal component analysis: components are extracted from the face, and the face components are then subjected to principal component analysis and multi-mode recognition, so as to reach a high recognition rate.
The multi-mode face recognition method based on component principal component analysis proposed by the invention is characterized by comprising the following steps:
1) locating the face image by template matching and projection histograms, determining the coarse face region and the positions of the left and right eyeballs, the nose tip, the mouth and the chin tip;
2) extracting five kinds of face components from the whole face: naked face, eyebrows+eyes, eyes, nose and mouth;
3) applying steps 1) and 2) to every face image in the training set to extract the five face components, and using the eigenface method of principal component analysis on the components extracted from the training faces to form the eigen naked face, eigen (eyes+eyebrows), eigen eyes, eigen nose and eigen mouth respectively;
4) applying steps 1) and 2) to every known face image to extract the five face components, using the projection-feature-value analysis of principal component analysis to extract the projection feature values of the five components of each known face, and building a database containing these projection feature values, a compressed image of each known face and the personal identity file of each known person;
5) applying steps 1) and 2) to every face image to be recognized to extract the five face components, and using the projection-feature-value analysis of principal component analysis to extract the projection feature values of the five components of the face to be recognized;
6) performing multimode whole-face recognition and partial-face recognition of the face to be recognized against the known-face database by computing similarities and sorting by similarity.
The multimode whole-face recognition may combine the eigenface, eigen (eyes+eyebrows), eigen eyes, eigen nose and eigen mouth into one comprehensive recognition.
The multimode partial-face recognition may use a single one of the eigenface, eigen (eyes+eyebrows), eigen eyes, eigen nose and eigen mouth, or any combination of them.
Characteristics and effects of the invention
The invention extracts components from the face and then performs principal component analysis and multi-mode recognition on the face components, reaching a high recognition rate.
Description of drawings
Fig. 1 is a sketch of the face-image localization result of the embodiment. Fig. 2 is the face template used for matching. Fig. 3: division of the region to be matched. Fig. 4: directional integral projection of the gradient. Fig. 5: search model for the valley point. Fig. 6: gradient integral projection on the X axis of the two nostrils. Fig. 7: gray-level integral projection of the part below the eyeballs. Fig. 8: the five components (naked face, eyebrows+eyes, eyes, nose, mouth) extracted from the whole face. Fig. 9: sketch of forming eigenfaces by PCA analysis. Fig. 10: block diagram of the face recognition system. Fig. 11: partial-face (naked face, eyes+eyebrows, eyes) recognition query. Fig. 12: whole-face recognition query.
The specific embodiment
An embodiment of the multi-mode face recognition method based on component PCA proposed by the invention is described in detail below with reference to the drawings.
The embodiment comprises the following steps:
1) Locating the face image. The standard face image size is 330 × 480 (width × height). Localization is divided into two steps, coarse localization and fine localization. Coarse localization finds the coarse face region in the image; fine localization determines the positions of the two eyeballs on the basis of the coarse result, after which the nose tip, mouth and chin tip are determined. The localization result is shown in Fig. 1: the white rectangle is the coarse face region determined by coarse localization, the white dots in the eyeballs are the left and right eyeball positions determined by fine localization, the small white circles at the nose tip and chin tip are the fine-localized nose tip and chin tip, and the small white circle in the mouth is the mouth centre. Coarse localization uses template matching; Fig. 2 shows the face template used for matching, of 48 × 33 pixels. As in Fig. 3, each candidate matching region of the face image to be located is divided into 9 sub-regions, each of 16 × 11 pixels. In template matching, gradient statistics Mg1–Mg9 are first computed for the sub-regions of the candidate region; according to the gradient distribution of a face, the matching operation is skipped whenever one of the following conditions holds:
Mg1 < Mg4, or Mg3 < Mg6, or Mg1 < Mg2, or Mg3 < Mg2.
When the matching operation is carried out, the matching function of formula (1) is used:
S = S_sym + S_0 + S_hist    (1)
where S_sym represents the left–right symmetry of the region to be matched, S_0 is the result of template matching, and S_hist is the match of the X-axis integral-projection histograms of sub-regions 1 and 4. In practice, S_0 is computed after the means of the left and right halves of the region have been adjusted to the same value according to S_sym. S_0 is the correlation between the template and the region to be matched: with the mean and variance of the region and the template adjusted to the same values, matching is done by subtraction followed by taking absolute values.
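The gradient pre-check and the normalized matching step above can be sketched as follows. This is an illustrative reading in Python/NumPy: the sub-region indexing, the gradient statistic and the exact normalization are assumptions, since the patent gives only the skip conditions and formula (1) in outline.

```python
import numpy as np

def gradient_stats(region):
    """Mean gradient magnitude of a sub-region (simple finite differences)."""
    gy, gx = np.gradient(region.astype(float))
    return np.abs(gx).mean() + np.abs(gy).mean()

def should_skip(window):
    """Pre-check from the text: split a 48x33 window into a 3x3 grid of
    16x11 sub-regions, compute gradient statistics Mg1..Mg9, and skip the
    match when the face-like gradient distribution is violated."""
    mg = [gradient_stats(window[r*16:(r+1)*16, c*11:(c+1)*11])
          for r in range(3) for c in range(3)]
    mg1, mg2, mg3, mg4, mg6 = mg[0], mg[1], mg[2], mg[3], mg[5]
    return bool(mg1 < mg4 or mg3 < mg6 or mg1 < mg2 or mg3 < mg2)

def match_score(window, template):
    """Normalize window and template to the same mean/variance, then match
    by summed absolute difference (smaller is better), as described."""
    w = (window - window.mean()) / (window.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return np.abs(w - t).sum()
```

A scan would call `should_skip` on each 48 × 33 candidate window and evaluate `match_score` only where it returns False, which is what makes the pre-check a cheap filter.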
Fine localization of the face is achieved by locating the eyeballs; sub-regions 1 and 3 are the regions where the eyeballs may appear. The eyeballs are located by searching for valley points. A bottom-up search is used, as shown in Fig. 4, where the integral projection is that of the gradient map. When the peak at point A in Fig. 4 is found, the approximate vertical position Ay of the eyes is considered found; the region of interest is then y ∈ (Ay − delta, Ay + delta), and x is allowed to vary over a wide range to search for the valley point (gray-level minimum) in this region. The model of Fig. 5 is used to compute the ratio of formula (2) and thereby determine the valley point; in Fig. 5, sub-region 5 is the region of the eyeball.
Formula (2) expresses the ratio of the gray levels inside and outside sub-region 5, where Gray(n) is the gray-level sum of region n.
The centre line between the two nostrils is extracted mainly from the peak information of the gradient integral projection on the X axis of the nostrils, as shown in Fig. 6, where L1 is the x coordinate of the left nostril, L2 that of the right nostril, and x0 the x coordinate of the centre line between the two nostrils. A gray-level integral projection is then made of the part of the face below the eyeballs, as shown in Fig. 7, where y1 is the Y coordinate of the chin tip, y2 that of the mouth centre and y3 that of the nose tip; their X coordinate is x0.
This completes the localization of the left and right eyeballs, the nose tip, the mouth centre and the chin.
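The gray-level integral projection of Fig. 7 can be sketched as below. This is a minimal reading, assuming the projection is taken over a narrow column band around the mid-line x0 and that the nose, mouth and chin rows are picked as local minima of the profile; the band half-width and the minimum test are illustrative assumptions.

```python
import numpy as np

def vertical_integral_projection(img, x0, half_width=10):
    """Sum gray levels over a narrow column band around x0 for each row,
    giving a 1-D profile whose minima correspond to dark facial features
    (nostrils, mouth line, chin shadow) along the face mid-line."""
    band = img[:, max(0, x0 - half_width): x0 + half_width + 1]
    return band.sum(axis=1)

def feature_rows(profile, y_start):
    """Return row indices of strict local minima below y_start; candidate
    nose-tip / mouth / chin Y coordinates are read off this profile."""
    p = profile[y_start:]
    return [y_start + i for i in range(1, len(p) - 1)
            if p[i] < p[i - 1] and p[i] < p[i + 1]]
```

On a 330 × 480 face image, calling `feature_rows` on the profile below the eye row yields the candidate y3, y2, y1 positions in top-to-bottom order.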
2) Extracting face components. According to the positions of the left and right eyeballs, nose tip, mouth centre and chin determined by localization, five kinds of components — naked face, eyebrows+eyes, eyes, nose, mouth — are extracted from the whole face, as shown in Fig. 8. The image sizes are: naked face 90 × 120, eyebrows+eyes 182 × 70, eyes 160 × 40, nose 78 × 32, mouth 90 × 68.
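The component cut-out can be sketched as follows, using the component sizes stated in the text. The landmark-to-box offsets are illustrative assumptions; the patent only says the crops are taken relative to the localized eyeballs, nose tip and mouth centre.

```python
import numpy as np

def crop_components(face, left_eye, right_eye, nose, mouth):
    """Cut the five components at the sizes given in the text
    (naked face 90x120, brows+eyes 182x70, eyes 160x40, nose 78x32,
    mouth 90x68), each centred on the corresponding landmarks.
    Landmarks are (x, y) pixel tuples; offsets below are assumptions."""
    def box(cx, cy, w, h):
        x0, y0 = int(cx - w // 2), int(cy - h // 2)
        return face[max(0, y0):y0 + h, max(0, x0):x0 + w]
    ex = (left_eye[0] + right_eye[0]) // 2   # mid-point between eyeballs
    ey = (left_eye[1] + right_eye[1]) // 2
    return {
        "naked_face": box(ex, ey + 40, 90, 120),
        "brows_eyes": box(ex, ey - 10, 182, 70),
        "eyes":       box(ex, ey, 160, 40),
        "nose":       box(nose[0], nose[1], 78, 32),
        "mouth":      box(mouth[0], mouth[1], 90, 68),
    }
```

Fixing the crop sizes in this way is what lets every component of every face be flattened into a vector of the same length for the PCA step that follows.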
3) Applying PCA (Principal Component Analysis) to the naked face, eyebrows+eyes, eyes, nose and mouth extracted from the training set to form the eigen naked face, eigen (eyes+eyebrows), eigen eyes, eigen nose and eigen mouth respectively, then extracting the projection feature values of the components of the known faces and of the face to be recognized.
More than 1000 face images are chosen, and each is processed by steps 1) and 2) above to form the naked-face, (eyes+eyebrows), eyes, nose and mouth training sets. PCA analysis is applied to each of these training sets to form the eigenface, eigen (eyes+eyebrows), eigen eyes, eigen nose and eigen mouth. Fig. 9 shows the process of forming eigenfaces from the training images by PCA, where X is the number of pixels of the naked face, N is the number of training face images, and D is the number of eigenfaces kept.
The eigen (eyes+eyebrows), eigen eyes, eigen nose and eigen mouth are formed in the same way as the eigenfaces.
With the eigenface, eigen (eyes+eyebrows), eigen eyes, eigen nose and eigen mouth obtained, the projection feature values of the five components of the known faces and of the face to be recognized are extracted. The concrete procedure is:
Represent the N face vectors as an n × N matrix X, where n is the number of pixels of a face image and N the number of training face images; then
X_k = (x_1k, x_2k, ..., x_nk)^T, k = 1, 2, ..., N    (3)
In computing the eigenvectors and eigenvalues of the covariance matrix C, the dimension of XX^T is very large (n² entries), so singular value decomposition is used: computing X^T X instead, which has only N² entries, indirectly yields the eigenvectors and eigenvalues of C. XX^T and X^T X have the same (nonzero) eigenvalues, and the relation between their eigenvectors satisfies
u_k = X φ_k / √λ_k    (4)
where u_k is an eigenvector of XX^T, φ_k the corresponding eigenvector of X^T X, and λ_k the eigenvalue of XX^T, which is also the eigenvalue of X^T X.
For a matrix R there exists a matrix Φ such that
R × Φ = Φ × Λ    (5)
where Λ contains the eigenvalues of R, Λ = diag{λ_1, λ_2, ..., λ_N}. Formula (5) can be written as N equations:
R × φ_k = λ_k × φ_k, k = 1, 2, ..., N    (6)
whose eigenvalues λ_k can be obtained from
|R − λ_k × I| = 0    (7)
The λ_k obtained are sorted from large to small; the D largest eigenvalues are taken and the corresponding D eigenvectors φ_k are kept. The eigenvectors u_k of the matrix C are then computed by formula (4).
Taking C in turn for the naked face, eyes+eyebrows, eyes, nose and mouth separated from the training faces, and carrying out the operations of formulas (3), (4), (5), (6) and (7) for each, yields the eigenface, eigen (eyes+eyebrows), eigen eyes, eigen nose and eigen mouth. This is the eigenface method of PCA (Principal Component Analysis).
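The small-matrix route described above — diagonalising the N × N matrix X^T X instead of the n × n matrix XX^T and mapping the eigenvectors back — can be sketched as follows. Function name and the column-centring step are assumptions; the mapping u_k = X φ_k / √λ_k is the standard eigenface identity.

```python
import numpy as np

def eigencomponents(X, D=60):
    """Eigenface-style PCA via the indirect route in the text: instead of
    the large n x n matrix XX^T, diagonalise the N x N matrix X^T X and
    map its eigenvectors phi_k back with u_k = X phi_k / sqrt(lambda_k)."""
    X = X - X.mean(axis=1, keepdims=True)      # centre each pixel over the set
    small = X.T @ X                            # N x N instead of n x n
    lam, phi = np.linalg.eigh(small)           # eigenvalues in ascending order
    order = np.argsort(lam)[::-1][:D]          # keep the D largest
    lam, phi = lam[order], phi[:, order]
    U = X @ phi / np.sqrt(np.maximum(lam, 1e-12))
    return U, lam                              # columns of U: eigen-components
```

Because ||X φ_k||² = φ_k^T (X^T X) φ_k = λ_k, the division by √λ_k leaves each column of U with unit norm, so the u_k form an orthonormal basis for projection.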
4) Extracting the projection feature values of the naked face, eyes+eyebrows, eyes, nose and mouth of the known faces and building the known-face database.
Each known face image is processed by steps 1) and 2) above to separate the naked face, eyes+eyebrows, eyes, nose and mouth. The projection-feature-value analysis of PCA (Principal Component Analysis), formula (8), is then applied to form the projection feature values of the components of the known face:
ω_k = u_k^T q, k = 1, 2, ..., D    (8)
where q is in turn the naked face, eyes+eyebrows, eyes, nose or mouth of the known face, u_k^T are the corresponding eigenface, eigen (eyes+eyebrows), eigen eyes, eigen nose or eigen mouth obtained from the training faces, and D = 60.
The projection feature values of the naked face, eyes+eyebrows, eyes, nose and mouth of a known face, taken in that order, form the projection-feature-value string of that known face. On this basis a database for face recognition is built, containing the known face images (compressed with the JPEG method), the projection-feature-value strings of the known faces and the personal identity files.
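The per-component projection of formula (8) and the fixed-order feature string can be sketched as below. The dictionary keys, function names and the omission of mean subtraction (assumed handled when the bases were built) are illustrative assumptions.

```python
import numpy as np

def projection_features(component, U):
    """Projection feature values of one component image: flatten it and
    project onto the D kept eigen-components, omega_k = u_k^T q."""
    q = component.reshape(-1).astype(float)
    return U.T @ q   # length-D feature vector

def feature_string(components, bases):
    """Concatenate the five component feature vectors in the fixed order
    naked face, brows+eyes, eyes, nose, mouth into one feature string,
    as stored per person in the database."""
    order = ["naked_face", "brows_eyes", "eyes", "nose", "mouth"]
    return np.concatenate([projection_features(components[k], bases[k])
                           for k in order])
```

With D = 60 per component, each person is thus reduced to a 300-value string, which is what makes the later similarity ranking over 92000 entries cheap.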
5) Extracting the projection feature values of the naked face, eyes+eyebrows, eyes, nose and mouth of the face to be recognized. The image of the face to be recognized is processed by steps 1) and 2) above to separate the five components, and the projection-feature-value analysis of PCA, formula (9) — of the same form as formula (8) — is applied to form the projection feature values of the components of the face to be recognized, where q is in turn the naked face, eyes+eyebrows, eyes, nose or mouth of the face to be recognized and u_k^T are the corresponding eigen-components obtained from the training faces.
6) Recognizing faces by whole-face and partial-face recognition. The process is: compare the features of the face to be recognized with the features of the faces stored in the database, compute the similarities, sort the database faces from large to small by their similarity to the face to be recognized, and display in that order the photos, personal identity files and similarities of the persons found, thereby determining the identity of the person to be recognized or the identities of the persons most similar in appearance. The similarity between the face to be recognized and a known face is computed by formula (10),
where A is the projection-feature-value string of the face to be recognized and B is the projection-feature-value string of a known face in the database.
In whole-face recognition, the projection feature values of the naked face, eyes+eyebrows, eyes, nose and mouth of the known face are weighted in the ratio 5:6:4:3:2, the projection feature values of the same components of the face to be recognized are weighted in the same 5:6:4:3:2 ratio, and the similarity is then computed by formula (10).
In partial-face recognition, any combination of naked face, eyes+eyebrows, eyes, nose and mouth is selected interactively, giving 120 combinations in total, i.e. 120 face recognition modes. The projection feature values are still weighted in the ratio 5:6:4:3:2.
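The weighted multimodal scoring can be sketched as below. Since formula (10) is not reproduced in the text, a normalized correlation is assumed for the similarity, and the sketch weights per-component similarities rather than weighting the projection values before a single comparison — one reasonable reading of the 5:6:4:3:2 scheme, not the patent's exact formula.

```python
import numpy as np

WEIGHTS = {"naked_face": 5, "brows_eyes": 6, "eyes": 4, "nose": 3, "mouth": 2}

def similarity(a, b):
    """Similarity of two projection-feature vectors; a normalised
    correlation (cosine) is assumed in place of formula (10)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_similarity(feats_a, feats_b, selected=None):
    """Weighted multimodal score: weight each selected component's
    similarity by the 5:6:4:3:2 ratio and average; `selected` chooses
    the component subset for partial (local) recognition."""
    keys = selected or list(WEIGHTS)
    w = np.array([WEIGHTS[k] for k in keys], float)
    s = np.array([similarity(feats_a[k], feats_b[k]) for k in keys])
    return float((w * s).sum() / w.sum())
```

Ranking the database then amounts to computing `fused_similarity` against every stored feature string and sorting in descending order.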
Fig. 10 shows the structure of a face recognition system using the method of this embodiment. The six steps of the method, plus image input, are programmed in VC++ to form the modules of Fig. 10, installed on 5 PCs (4 as servers, one as a client), together with the video camera and the Microtek E6 scanner used for image input.
The face recognition system built on this method uses a client/server model, with the matching algorithm of face recognition embedded in the servers. The database of known persons is built through the registration module. The client sends query requests carrying the features of the face to be recognized to the servers through the recognition query module and receives the query results sent back from the servers. To raise the query speed, several servers are connected as a computer cluster for parallel recognition queries. The face input module accepts face images from the scanner or camera; after the quality check module and the feature extraction module, the portrait query is carried out by the recognition query module.
The recognition database of the system comprises three parts: the personal identity files, the face images (JPEG-compressed) and the projection feature values of the faces. The four clustered servers comprise one master server and three slave servers. To reduce network traffic, the database is partitioned in advance: the large database is divided into four smaller ones stored on the four servers.
The master database server of the system is a Dawn PC server configured with two PIII 500 CPUs, 512 MB of memory and two 9 GB SCSI hard disks, running Windows NT 4.0 and Oracle8.
The system uses three slave database servers, each configured with a PIII 733 MHz CPU, 512 MB of memory and a 15 GB IDE hard disk, running Windows NT 4.0 and the Oracle8 database.
The embodiment has 92000 known persons entered.
The client is configured with a PII 266 CPU and 128 MB of memory, running Windows 98. The whole system is connected by network through a 100 Mb switch.
The face recognition system built from the above method has been tested in actual operation with very clear effect, illustrated as follows:
Example 1: as shown in Fig. 11, using the partial-face (naked face, eyes+eyebrows, eyes) recognition query of this method, the matching degree between a composite sketch of a suspect (left part of the figure) and the real suspect is 87.88%, and the real suspect ranks 182nd among 92000 face images. With whole-face recognition, the real suspect ranks 16617th among the 92000 face images (not shown).
Example 2: as shown in Fig. 12, using the whole-face recognition query of this method on the querying person's face (left part of the figure), the result is: among the library of 92000 face images, photos of the querying person taken at different times rank in the top three.
Claims (3)
1. A multi-mode face recognition method based on component principal component analysis, characterized by comprising the following steps:
1) locating the face image by template matching and projection histograms, determining the coarse face region and the positions of the left and right eyeballs, the nose tip, the mouth and the chin tip;
2) extracting five kinds of face components from the whole face: naked face, eyebrows+eyes, eyes, nose and mouth;
3) applying steps 1) and 2) to every face image in the training set to extract the five face components, and using the eigenface method of principal component analysis on the components extracted from the training faces to form the eigen naked face, eigen eyes+eigen eyebrows, eigen eyes, eigen nose and eigen mouth respectively;
4) applying steps 1) and 2) to every known face image to extract the five face components, using the projection-feature-value analysis of principal component analysis to extract the projection feature values of the five components of each known face, and building a database containing these projection feature values, a compressed image of each known face and the personal identity file of each known person;
5) applying steps 1) and 2) to every face image to be recognized to extract the five face components, and using the projection-feature-value analysis of principal component analysis to extract the projection feature values of the five components of the face to be recognized;
6) performing multimode whole-face recognition and partial-face recognition of the face to be recognized against the known-face database by computing similarities and sorting by similarity.
2. The multi-mode face recognition method based on component principal component analysis of claim 1, characterized in that the multimode whole-face recognition combines the eigenface, eigen eyes+eigen eyebrows, eigen eyes, eigen nose and eigen mouth into one comprehensive recognition.
3. The multi-mode face recognition method based on component principal component analysis of claim 1, characterized in that the multimode partial-face recognition uses a single one of the eigenface, eigen eyes+eigen eyebrows, eigen eyes, eigen nose and eigen mouth, or any combination of them.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB011365773A CN1137662C (en) | 2001-10-19 | 2001-10-19 | Main unit component analysis based multimode human face identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB011365773A CN1137662C (en) | 2001-10-19 | 2001-10-19 | Main unit component analysis based multimode human face identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1341401A CN1341401A (en) | 2002-03-27 |
CN1137662C true CN1137662C (en) | 2004-02-11 |
Family
ID=4673750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB011365773A Expired - Fee Related CN1137662C (en) | 2001-10-19 | 2001-10-19 | Main unit component analysis based multimode human face identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1137662C (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100416592C (en) * | 2005-12-23 | 2008-09-03 | 北京海鑫科金高科技股份有限公司 | Human face automatic identifying method based on data flow shape |
CN108334869A (en) * | 2018-03-21 | 2018-07-27 | 北京旷视科技有限公司 | Selection, face identification method and the device and electronic equipment of face component |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1296872C (en) * | 2003-11-11 | 2007-01-24 | 易连科技股份有限公司 | Method for quick establishing human face image planar model |
CN1627317A (en) * | 2003-12-12 | 2005-06-15 | 北京阳光奥森科技有限公司 | Method for obtaining image of human faces by using active light source |
CN1319013C (en) * | 2005-03-16 | 2007-05-30 | 沈阳工业大学 | Combined recognising method for man face and ear characteristics |
CN1319014C (en) * | 2005-03-16 | 2007-05-30 | 沈阳工业大学 | Personal identity recognising method based on pinna geometric parameter |
JP4653606B2 (en) | 2005-05-23 | 2011-03-16 | 株式会社東芝 | Image recognition apparatus, method and program |
CN100412885C (en) * | 2005-05-23 | 2008-08-20 | 株式会社东芝 | Image recognition apparatus and method |
CN100412884C (en) * | 2006-04-10 | 2008-08-20 | 中国科学院自动化研究所 | Human face quick detection method based on local description |
CN100444191C (en) * | 2006-11-08 | 2008-12-17 | 中山大学 | Multiple expression whole face profile testing method based on moving shape model |
JP4337064B2 (en) * | 2007-04-04 | 2009-09-30 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
CN101305913B (en) * | 2008-07-11 | 2010-06-09 | 华南理工大学 | Face beauty assessment method based on video |
US9396539B2 (en) | 2010-04-02 | 2016-07-19 | Nokia Technologies Oy | Methods and apparatuses for face detection |
CN101819631B (en) * | 2010-04-16 | 2012-12-26 | 深圳大学 | Identity identification method and identity identification system |
CN101853397A (en) * | 2010-04-21 | 2010-10-06 | 中国科学院半导体研究所 | Bionic human face detection method based on human visual characteristics |
CN102043966B (en) * | 2010-12-07 | 2012-11-28 | 浙江大学 | Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation |
CN103065130B (en) * | 2012-12-31 | 2015-12-09 | 华中科技大学 | A kind of target identification method of three-dimensional fuzzy space |
CN103268654A (en) * | 2013-05-30 | 2013-08-28 | 苏州福丰科技有限公司 | Electronic lock based on three-dimensional face identification |
CN105095917B (en) * | 2015-08-31 | 2019-08-06 | 小米科技有限责任公司 | Image processing method, device and terminal |
CN109993042A (en) * | 2017-12-29 | 2019-07-09 | 国民技术股份有限公司 | A kind of face identification method and its device |
CN109446893A (en) * | 2018-09-14 | 2019-03-08 | 百度在线网络技术(北京)有限公司 | Face identification method, device, computer equipment and storage medium |
CN109241943A (en) * | 2018-10-09 | 2019-01-18 | 深圳市三宝创新智能有限公司 | Non-alignment face feature extraction method, device, computer equipment and storage medium |
CN109684917A (en) * | 2018-11-14 | 2019-04-26 | 南宁学院 | A kind of fast human face recognition |
CN112766013A (en) * | 2019-10-21 | 2021-05-07 | 深圳君正时代集成电路有限公司 | Recognition method for performing multistage screening in face recognition |
CN116386119A (en) * | 2023-05-09 | 2023-07-04 | 北京维艾狄尔信息科技有限公司 | Body-building footpath-based identity recognition method, body-building footpath-based identity recognition system, body-building footpath-based identity recognition terminal and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1137662C (en) | Main unit component analysis based multimode human face identification method | |
CN102938065B (en) | Face feature extraction method and face identification method based on large-scale image data | |
Li et al. | A comprehensive survey on 3D face recognition methods | |
Guo et al. | Hierarchical multiscale LBP for face and palmprint recognition | |
Kang et al. | Attentional feature-pair relation networks for accurate face recognition | |
CN111738143B (en) | Pedestrian re-identification method based on expectation maximization | |
CN103136516B (en) | The face identification method that visible ray and Near Infrared Information merge and system | |
CN105893947B (en) | The two visual angle face identification methods based on more local correlation feature learnings | |
CN1388945A (en) | Iris identification system and method and computer readable storage medium stored therein computer executable instructions to implement iris identification method | |
CN105138954A (en) | Image automatic screening, query and identification system | |
CN101630364A (en) | Method for gait information processing and identity identification based on fusion feature | |
Salah et al. | Fusing local binary patterns with wavelet features for ethnicity identification | |
CN1304114A (en) | Identity identification method based on multiple biological characteristics | |
CN105469117B (en) | A kind of image-recognizing method and device extracted based on robust features | |
Angadi et al. | Face recognition through symbolic modeling of face graphs and texture | |
CN107122725B (en) | Face recognition method and system based on joint sparse discriminant analysis | |
Jadhav et al. | HDL-PI: hybrid DeepLearning technique for person identification using multimodal finger print, iris and face biometric features | |
Zuo et al. | Face liveness detection algorithm based on livenesslight network | |
Salah et al. | Recognize Facial Emotion Using Landmark Technique in Deep Learning | |
Andrie et al. | A review of Chinese Academy of Sciences (CASIA) gait database as a human gait recognition dataset | |
Christabel et al. | Facial feature extraction based on local color and texture for face recognition using neural network | |
Dua'a Hamed et al. | Finger knuckle recognition, a review on prospects and challenges based on PolyU dataset | |
Harakannanavar et al. | Face Recognition based on the fusion of Bit-Plane and Binary Image Compression Techniques | |
Abhulimen et al. | Facial age estimation using deep learning: A review | |
CN106022214B (en) | Effective face feature extraction method under unconstrained condition |
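The patent above, like several of the similar documents listed, is built on principal component analysis ("eigenfaces") for face identification: training images are projected onto the top principal axes of the training set, and a probe image is matched to the nearest projection. Below is a minimal sketch of that general technique using purely synthetic vectors in place of face images; all names, dimensions, and the noise model are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for face images: 3 identities, 4 images each,
# 64-dimensional feature vectors (real eigenfaces use pixel vectors).
identities = rng.normal(size=(3, 64))
train = np.vstack([ident + 0.1 * rng.normal(size=(4, 64))
                   for ident in identities])          # shape (12, 64)
labels = np.repeat(np.arange(3), 4)

# PCA: center the data, keep the top-k right singular vectors
# of the centered training matrix as the principal axes.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
k = 5
basis = vt[:k]                                        # (k, 64)

def project(x):
    """Coordinates of x in the reduced 'face space'."""
    return (x - mean) @ basis.T

train_proj = project(train)

def identify(probe):
    """Nearest-neighbour match in the reduced PCA space."""
    dists = np.linalg.norm(train_proj - project(probe), axis=1)
    return int(labels[np.argmin(dists)])

# A noisy new image of identity 1 should match identity 1.
probe = identities[1] + 0.1 * rng.normal(size=64)
print(identify(probe))
```

The patented method goes further than this sketch by running PCA separately per facial feature (eyes, eyebrows, nose, mouth) and combining the results, but the projection-and-nearest-neighbour core is the same.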
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C06 | Publication | ||
PB01 | Publication | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2004-02-11; Termination date: 2010-10-19 |