CN1949245A - Personal face matching method based on limited multiple personal face images - Google Patents

Personal face matching method based on limited multiple personal face images Download PDF

Info

Publication number
CN1949245A
Authority
CN
China
Prior art keywords
frame
video
face
people
num
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200610118162
Other languages
Chinese (zh)
Other versions
CN100447807C (en)
Inventor
李丹
李国正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI JINTAIDENG INFORMATION TECHNOLOGY CO., LTD.
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CNB2006101181620A priority Critical patent/CN100447807C/en
Publication of CN1949245A publication Critical patent/CN1949245A/en
Application granted granted Critical
Publication of CN100447807C publication Critical patent/CN100447807C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to a face matching method based on a limited number of face images. It adopts a dual principal component analysis algorithm, with E(PC)2A (the enhanced (PC)2A method) as the primary technique and PCA as the auxiliary one, detects face matches from the correlation between facial principal components, and mitigates the "curse of dimensionality". The method is suitable for matching real-time surveillance video against a limited number of frontal face photographs, achieves a high matching accuracy, and has broad application prospects.

Description

Face matching method based on a limited number of face images
Technical field
The present invention relates to a face matching method based on a limited number of face images, touching on the fields of face detection, face matching, pattern recognition, and intelligent surveillance. It can directly judge whether a face detected in surveillance video matches the faces in a set of photographs.
Background technology
Face matching is a hot topic in pattern recognition and intelligent surveillance and has a wide range of applications. Effective face matching can improve the speed and efficiency with which a surveillance system identifies people, bringing great convenience to many aspects of production and life. Especially given the frequency of terrorist incidents around the world and various destabilizing factors at home, face matching has positive and far-reaching significance for security inspection in public places.
Many face matching methods have been proposed, from PCA (principal component analysis) to E(PC)2A (the enhanced standard eigenface expansion technique), from matching based on the whole face to matching based on facial parts, and on to hybrid techniques that mix whole-face and part-based matching. These methods still have limitations. 1) In many face matching problems, an unknown face must be matched by training on known faces, and the training and matching process suffers from the "curse of dimensionality" (the computational cost grows exponentially with the dimension), which directly lowers the matching rate and raises the time complexity. 2) Most current research presupposes a single face image as the training sample, so the training samples are severely insufficient, which also lowers the matching rate.
Summary of the invention
The object of the present invention is to provide a face matching method based on a limited number of face images, which matches real-time surveillance video against a limited number of collected face photographs.
To this end, the idea of the invention is a dual principal component analysis algorithm that organically combines the E(PC)2A method with the PCA method. The E(PC)2A method has good face matching ability when only a single face image is available; it adopts a two-dimensional projection that effectively enhances the key feature regions of the face and thereby improves matching ability. Because of the many confounding factors such as illumination changes, using the PCA method as a complement to E(PC)2A, together with the limited number of face images that are available, can further improve face matching ability.
The face data used by the invention consist, for each person, of photographs with 3 different expressions and one segment of real-time surveillance video. The data must satisfy the following requirements:
1) The faces in the photographs and the video are frontal; tilt up, down, left, or right must stay within 5 degrees;
2) The background of the photographs and the video is a single color; slight variations of the background color are allowed;
3) The photographs and the video are preferably taken in natural light;
4) The photographs and the video contain only the face.
According to the above inventive concept, the invention adopts the following technical scheme:
A face matching method based on a limited number of face images, characterized by adopting a dual principal component analysis algorithm that uses the two-dimensional projection method E(PC)2A as the primary technique and the PCA method as the auxiliary one, and detects face matches from the correlation between facial principal components. The concrete operating steps are as follows:
(1) Normalization of photo and video data: convert the RGB photographs and video frame data to gray-scale images, then map the gray values into the interval [0, 1];
(2) Face localization in a photograph: compute the correlation between adjacent rows from top to bottom; a sudden change in the correlation indicates that the top-of-head boundary has been detected. Compute the correlation between adjacent columns from left to right; a sudden change indicates that the left boundary of the face has been detected. Compute the correlation between adjacent columns from right to left; a sudden change indicates that the right boundary of the face has been detected;
(3) Principal components of the face in a photograph: within the located face rectangle, compute the principal components of the face with both the two-dimensional projection method E(PC)2A and the PCA method, sorting the components obtained by each method in descending order of their influence;
(4) Face localization in the video: extract one frame every several frames and localize the face as in step (2);
(5) Compute the principal components of the face in each extracted video frame;
(6) Perform the first matching;
(7) Process the first matching result: if, in the result of step (6), the number of video frames whose average matching degree Match_degree_A against the three photographs reaches a given threshold exceeds a given count, the match succeeds; if the number of video frames whose average matching degree Match_degree_A falls below the threshold exceeds a given count, the match fails; in all other cases go to step (8);
(8) Perform the secondary matching;
(9) Process the secondary matching result: if, in the result of step (8), the number of video frames whose minimum matching degree Match_degree_min against the three photographs reaches a given threshold exceeds a given count, the match succeeds; otherwise the match fails.
The E(PC)2A method is as follows:
Let I_mn(x, y) be the gray value of the normalized face, with x ∈ [1, row], y ∈ [1, col], m ∈ [1, P], n ∈ [1, 3]; (x, y) are the pixel coordinates, and each face image frame has resolution row × col, where row is the number of rows and col the number of columns of the frame. The concrete steps are as follows:
1. J_mn(x, y) = I_mn(x, y) × I_mn(x, y); let J̄_mn be the mean value of the matrix J_mn(x, y);
2. V2_mn(x) = (1/col) Σ_{y=1..col} J_mn(x, y);
3. H2_mn(y) = (1/row) Σ_{x=1..row} J_mn(x, y);
4. P2_mn(x, y) = V2_mn(x) · H2_mn(y) / J̄_mn;
5. V1_mn(x) = (1/col) Σ_{y=1..col} I_mn(x, y);
6. H1_mn(y) = (1/row) Σ_{x=1..row} I_mn(x, y);
7. P1_mn(x, y) = V1_mn(x) · H1_mn(y) / Ī_mn, where Ī_mn is the mean value of I_mn(x, y);
8. II_mn(x, y) = (I_mn(x, y) + α · P1_mn(x, y) + β · P2_mn(x, y)) / (1 + α + β), where α and β are constants;
9. Compute the PCA principal components of II_mn and sort them in descending order of their influence; denote the result R_mn.
The PCA principal components of II_mn are computed as follows:
a. Record in a matrix II_mnAvge of size row × 1 the mean value of each row of the image frame II_mn;
b. II_mnNew(x, y) = II_mn(x, y) − II_mnAvge(x, 1); II_mnNew records the difference between II_mn and its row means II_mnAvge;
c. Apply singular value decomposition to II_mnNew: [U, S, V] = SVD(II_mnNew), where S is a diagonal matrix and U and V are two mutually orthogonal matrices;
d. Extract the diagonal elements of the diagonal matrix S and sort them in descending order of value; denote the result S_diag;
e. Count the elements in the top i percent of S_diag; denote this count num_n;
f. Take the first num_n columns of the matrix U as the principal components of II_mn.
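Steps a–f above can be sketched with NumPy; this is a minimal illustration on a small synthetic matrix standing in for a face region, with the percentage threshold i left as a free parameter (the exact rounding of "top i percent" is an assumption):

```python
import numpy as np

def pca_components(img, top_percent):
    """Steps a-f: subtract row means, take the SVD, and keep the
    columns of U whose singular values fall in the top i percent."""
    row_means = img.mean(axis=1, keepdims=True)               # a: row x 1 mean matrix
    centered = img - row_means                                # b: II_new = II - row means
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)   # c: [U, S, V] = SVD(...)
    s_diag = np.sort(s)[::-1]                                 # d: singular values, descending
    num_n = max(1, int(np.ceil(len(s_diag) * top_percent / 100)))  # e: count in top i%
    return U[:, :num_n]                                       # f: first num_n columns of U

img = np.arange(12.0).reshape(4, 3) ** 1.5                    # synthetic 4x3 "image"
comps = pca_components(img, top_percent=68)
```

The returned columns are orthonormal, as expected for the left singular vectors of the centered image.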
The principal components of the face in a video frame are computed as follows: steps 1–8 are identical to steps 1–8 of step (3) above; step 9 computes the principal components of II_mn, denoted VR_mn.
The first matching assumes the video segment contains Video_frame_num frames in total; the concrete steps are as follows:
1. Read the principal components in the top n percent obtained with the E(PC)2A method for frame num_frame = i of the video; let the number of these components be num_n. On the first iteration, i = 15;
2. Read the first num_n principal components obtained with the E(PC)2A method for photograph num_photo = j. On first entering the inner iteration, j = 1;
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and photograph num_photo = j; denote it match_degree_ij; set j = j + 1;
4. If j ≤ 3, return to step 2;
5. Compute the mean correlation between the current video frame and the three photographs, yielding Match_degree_A_i;
6. Set i = i + 15;
7. If i ≤ the total frame count Video_frame_num, return to step 1.
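The nested iteration in steps 1–7 amounts to sampling every 15th frame and averaging its correlation against the three photographs. A schematic sketch in Python, with the correlation function and the "components" left abstract (the toy stand-ins below are illustrative, not the patent's actual data):

```python
def first_match(frame_pcs, photo_pcs, correlation, step=15):
    """Steps 1-7: for every 'step'-th frame, average the correlation
    against the photographs. frame_pcs maps frame index -> components."""
    results = {}
    i = step                                          # first iteration: i = 15
    while i in frame_pcs:                             # i <= Video_frame_num
        degrees = [correlation(frame_pcs[i], p) for p in photo_pcs]  # j = 1..3
        results[i] = sum(degrees) / len(degrees)      # Match_degree_A_i
        i += step                                     # i = i + 15
    return results

# Toy stand-ins: "components" are numbers, correlation is closeness.
frames = {15: 1.0, 30: 0.9, 45: 0.2}
photos = [1.0, 0.95, 0.9]
res = first_match(frames, photos, correlation=lambda a, b: 1 - abs(a - b))
```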
The secondary matching assumes the video segment contains Video_frame_num frames in total; the concrete operating steps are as follows:
1. Read the principal components in the top n percent obtained with the PCA method for frame num_frame = i of the video; let the number of these components be num_n. On the first iteration, i = 15;
2. Read the first num_n principal components obtained with the PCA method for photograph num_photo = j. On first entering the inner iteration, j = 1;
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and photograph num_photo = j; denote it match_degree_pca_ij; set j = j + 1;
4. If j ≤ 3, return to step 2;
5. Compute the minimum correlation between the current video frame and the three photographs, yielding Match_degree_min_i;
6. Set i = i + 15;
7. If i ≤ Video_frame_num, return to step 1.
Compared with the prior art, the invention has the following evident substantive features and remarkable advantages:
The method achieves a higher matching accuracy. Face matching based on video and a limited number of photographs makes full use of the available samples, and the E(PC)2A method complemented by the PCA method extracts more representative facial features from the limited images. Previous face matching methods failed to reach a comparable accuracy because of the lack of training samples and the use of a single matching method. The method of the invention is suitable for matching real-time surveillance video against a limited number of frontal faces and attains a high test accuracy, giving it considerable practical value.
Description of drawings
Fig. 1 is the flow chart of the face matching method of the invention.
Fig. 2 is a schematic diagram of the operating interface of the face matching method of the invention.
Fig. 3 shows a face detected in a photograph in one embodiment of the invention.
Fig. 4 shows a face detected in video in the embodiment.
Embodiment
The technical scheme of the invention is described in further detail below with a specific embodiment.
The face matching database used in the embodiment contains 5 samples in total and includes no personal information such as names. Each sample has JPEG frontal head photographs with 3 different expressions and one AVI head video segment.
Referring to Fig. 1 and Fig. 2, the whole face matching procedure is as follows:
(1) Normalization of photo and video data: convert the RGB photographs to gray-scale images, then map the gray values into [0, 1]. Extract one frame from the video every 15 frames and normalize it likewise. For example, the value of a pixel after normalization is 0.56.
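A sketch of step (1) with NumPy. The patent only says "convert to gray-scale", so the ITU-R 601 luminance weights used here are an assumption:

```python
import numpy as np

def normalize_frame(rgb):
    """Convert an HxWx3 uint8 RGB frame to gray values in [0, 1]."""
    # Assumed luminance weights; the patent does not specify the conversion.
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    return gray / 255.0

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1, 1] = (255, 255, 255)                 # one white pixel
norm = normalize_frame(frame)
```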
(2) Face localization in a photograph: since a photograph contains only the face against a fairly uniform background, the face can be located by finding the top-of-head boundary and the left and right face boundaries. To find the top boundary, compute the correlation between adjacent rows from top to bottom. Because the background color is almost uniform, the row-to-row correlation stays above roughly 90% before the top of the head is reached; once the top of the head is crossed, the correlation changes abruptly and drops sharply, and as the scan continues downward it quickly rises back to about 90%. The point of abrupt change is therefore the top-of-head boundary. The left and right boundaries are detected similarly, except that the column-to-column correlation is computed from left to right for the left boundary and from right to left for the right boundary. After localization, every face region is fixed into a matrix of identical size, 90 × 60, as in Fig. 3.
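The top-boundary search in step (2) can be sketched as follows, assuming a uniform background above the head; the scan returns the first row where the correlation with the next row drops below a threshold (the 90% figure from the description):

```python
import numpy as np

def top_boundary(gray, thresh=0.9):
    """Scan rows top-down; return the first row index where the
    correlation between adjacent rows drops sharply (top of head)."""
    for x in range(gray.shape[0] - 1):
        r = np.corrcoef(gray[x], gray[x + 1])[0, 1]
        if r < thresh:
            return x + 1
    return None

# Synthetic image: identical "background" ramp rows, then different "face" rows.
img = np.tile(np.linspace(0.8, 1.0, 20), (30, 1))
img[12:] = np.linspace(1.0, 0.8, 20)          # reversed ramp: correlation drops to -1
row = top_boundary(img)
```

The left and right boundaries follow the same pattern applied to columns, scanning from the left and from the right respectively.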
(3) Principal components of the face in a photograph: within the face rectangle, compute the principal components of the face with both the two-dimensional projection method E(PC)2A and the PCA method, sorting the components obtained by each method in descending order of their influence.
The E(PC)2A method is as follows: let I_mn(x, y) be the gray value of the normalized face, with x ∈ [1, 90], y ∈ [1, 60], m ∈ [1, 5], n ∈ [1, 3]:
1. J_mn(x, y) = I_mn(x, y) × I_mn(x, y); let J̄_mn be the mean value of the matrix J_mn(x, y);
2. V2_mn(x) = (1/60) Σ_{y=1..60} J_mn(x, y);
3. H2_mn(y) = (1/90) Σ_{x=1..90} J_mn(x, y);
4. P2_mn(x, y) = V2_mn(x) · H2_mn(y) / J̄_mn;
5. V1_mn(x) = (1/60) Σ_{y=1..60} I_mn(x, y);
6. H1_mn(y) = (1/90) Σ_{x=1..90} I_mn(x, y);
7. P1_mn(x, y) = V1_mn(x) · H1_mn(y) / Ī_mn, where Ī_mn is the mean value of I_mn(x, y);
8. II_mn(x, y) = (I_mn(x, y) + α · P1_mn(x, y) + β · P2_mn(x, y)) / (1 + α + β);
9. Compute the principal components of II_mn and sort them in descending order of their influence; denote the result R_mn. Take α = 0.25 and β = 1.5.
PCA method: compute the principal components of the face I_mn(x, y) and sort them in descending order of their influence; denote the result Q_mn.
For example, R_mn is a 90 × 60 matrix and Q_mn is a 90 × 60 matrix.
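Steps 1–8 of the E(PC)2A enhancement follow directly from the formulas; this sketch uses the α = 0.25 and β = 1.5 values given in the embodiment (the random face below is only a stand-in for a located face region):

```python
import numpy as np

def epc2a_enhance(I, alpha=0.25, beta=1.5):
    """E(PC)2A two-dimensional projection: combine the image with
    first- and second-order vertical/horizontal projection maps."""
    J = I * I                                   # step 1: second-order image
    V2 = J.mean(axis=1)                         # step 2: vertical projection of J
    H2 = J.mean(axis=0)                         # step 3: horizontal projection of J
    P2 = np.outer(V2, H2) / J.mean()            # step 4
    V1 = I.mean(axis=1)                         # step 5: vertical projection of I
    H1 = I.mean(axis=0)                         # step 6: horizontal projection of I
    P1 = np.outer(V1, H1) / I.mean()            # step 7
    return (I + alpha * P1 + beta * P2) / (1 + alpha + beta)  # step 8

face = np.random.default_rng(1).random((90, 60))    # stand-in for a located face
enhanced = epc2a_enhance(face)
```

For a constant image I ≡ c, the projections reduce to P1 = c and P2 = c², which is an easy sanity check on the implementation.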
(4) Face localization in video: during matching, not every frame is extracted for matching; one frame is taken every 15 frames. In each extracted frame, the face is localized as described in step (2). The result is shown in Fig. 4.
(5) Principal components of the face in a video frame: for each extracted frame, compute the principal components of the face with both the two-dimensional projection method E(PC)2A and the PCA method. Suppose the video contains N frames in total; then num = [N/15] frames are extracted in all. Denote the principal components in the top 68% computed with the E(PC)2A method and the PCA method by VE_ab and VP_ab respectively (a ∈ [1, 5], b ∈ [1, num]). For example, VE_ab is a 90 × 8 matrix and VP_ab is a 90 × 6 matrix.
(6) First matching (suppose the video segment contains N frames in total):
1. Read the principal components in the top 68% obtained with the E(PC)2A method for frame num_frame = i of the video; let the number of these components be num_n. On the first iteration, i = 15.
2. Read the first num_n principal components obtained with the E(PC)2A method for photograph num_photo = j. On first entering the inner iteration, j = 1.
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and photograph num_photo = j; denote it match_degree_ij; set j = j + 1. For example, the correlation between a matched video frame and a photograph is 0.83.
4. If j ≤ 3, return to step 2.
5. Compute the mean correlation between the current video frame and the three photographs, yielding Match_degree_A_i. For example, the mean correlation between the current frame and the three photographs is 0.87.
6. Set i = i + 15.
7. If i ≤ N, return to step 1.
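The patent does not spell out how the correlation between two sets of principal components is computed in step 3; one plausible reading, used in this sketch, is the Pearson correlation of the flattened component matrices, averaged over the three photographs as in step 5 (the perturbed "photographs" below are synthetic):

```python
import numpy as np

def component_correlation(a, b):
    """Pearson correlation between two flattened component matrices
    (an assumed interpretation of the patent's 'correlation')."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

rng = np.random.default_rng(2)
frame_pc = rng.random((90, 8))                       # components of one video frame
photo_pcs = [frame_pc + rng.normal(0, 0.01, (90, 8)) for _ in range(3)]  # 3 photos

match_degrees = [component_correlation(frame_pc, p) for p in photo_pcs]
match_degree_a = sum(match_degrees) / 3              # step 5: mean over 3 photos
```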
(7) Process the first matching result: if, in the result of step (6), at least [num/3] video frames have a matching degree above 70%, the match succeeds; if more than [num/3] video frames have a matching degree below 50%, the match fails; in all other cases go to step (8). For example, a first matching run yields the result
{0.85, 0.84, 0.78, 0.79, 0.93, 0.82, 0.75, 0.71, 0.59, 0.78, 0.62, 0.61, 0.89, 0.76, 0.64}, so the first matching passes.
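The decision rule of step (7) can be sketched in pure Python. Reading the [·] brackets as integer flooring is an assumption, as is the exact handling of the middle "go to secondary matching" zone:

```python
def first_match_decision(match_degrees, num):
    """Step (7): succeed if >= num/3 frames exceed 0.70; fail if more
    than num/3 frames fall below 0.50; otherwise go to secondary match."""
    need = num // 3                                   # assumed floor of [num/3]
    high = sum(1 for d in match_degrees if d > 0.70)
    low = sum(1 for d in match_degrees if d < 0.50)
    if high >= need:
        return "success"
    if low > need:
        return "failure"
    return "secondary"

degrees = [0.85, 0.84, 0.78, 0.79, 0.93, 0.82, 0.75, 0.71,
           0.59, 0.78, 0.62, 0.61, 0.89, 0.76, 0.64]   # example from the text
result = first_match_decision(degrees, num=15)
```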
(8) Secondary matching (suppose the video segment contains N frames in total):
1. Read the principal components in the top 68% obtained with the PCA method for frame num_frame = i of the video; let the number of these components be num_n. On the first iteration, i = 15.
2. Read the first num_n principal components obtained with the PCA method for photograph num_photo = j. On first entering the inner iteration, j = 1.
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and photograph num_photo = j; denote it match_degree_pca_ij; set j = j + 1. For example, the correlation between a matched video frame and a photograph is 0.78.
4. If j ≤ 3, return to step 2.
5. Compute the minimum correlation between the current video frame and the three photographs, yielding Match_degree_min_i. For example, the minimum correlation between the current frame and the three photographs is 0.73.
6. Set i = i + 15.
7. If i ≤ N, return to step 1.
(9) Process the secondary matching result: if, in the result of step (8), at least [num/3] video frames have a matching degree above 70%, the match succeeds; otherwise it fails. For example, a secondary matching run yields the result {0.85, 0.83, 0.72, 0.81, 0.76, 0.79, 0.60, 0.75, 0.86, 0.90, 0.70, 0.84, 0.88}, so the secondary matching passes.

Claims (6)

1. A face matching method based on a limited number of face images, characterized by adopting a dual principal component analysis algorithm that uses the two-dimensional projection method E(PC)2A as the primary technique and the PCA method as the auxiliary one, and detects face matches from the correlation between facial principal components; the concrete operating steps are as follows:
(1) Normalization of photo and video data: convert the RGB photographs and video frame data to gray-scale images, then map the gray values into the interval [0, 1];
(2) Face localization in a photograph: compute the correlation between adjacent rows from top to bottom; a sudden change in the correlation indicates that the top-of-head boundary has been detected; compute the correlation between adjacent columns from left to right; a sudden change indicates that the left boundary of the face has been detected; compute the correlation between adjacent columns from right to left; a sudden change indicates that the right boundary of the face has been detected;
(3) Principal components of the face in a photograph: within the located face rectangle, compute the principal components of the face with both the two-dimensional projection method E(PC)2A and the PCA method, sorting the components obtained by each method in descending order of their influence;
(4) Face localization in the video: extract one frame every several frames and localize the face as in step (2);
(5) Compute the principal components of the face in each extracted video frame;
(6) Perform the first matching;
(7) Process the first matching result: if, in the result of step (6), the number of video frames whose average matching degree Match_degree_A against the three photographs reaches a given threshold exceeds a given count, the match succeeds; if the number of video frames whose average matching degree Match_degree_A falls below the threshold exceeds a given count, the match fails; in all other cases go to step (8);
(8) Perform the secondary matching;
(9) Process the secondary matching result: if, in the result of step (8), the number of video frames whose minimum matching degree Match_degree_min against the three photographs reaches a given threshold exceeds a given count, the match succeeds; otherwise the match fails.
2. The face matching method based on a limited number of face images according to claim 1, characterized in that the E(PC)2A method is as follows:
Let I_mn(x, y) be the gray value of the normalized face, with x ∈ [1, row], y ∈ [1, col], m ∈ [1, P], n ∈ [1, 3]; (x, y) are the pixel coordinates, and each face image frame has resolution row × col, where row is the number of rows and col the number of columns of the frame; the concrete steps are as follows:
1. J_mn(x, y) = I_mn(x, y) × I_mn(x, y); let J̄_mn be the mean value of the matrix J_mn(x, y);
2. V2_mn(x) = (1/col) Σ_{y=1..col} J_mn(x, y);
3. H2_mn(y) = (1/row) Σ_{x=1..row} J_mn(x, y);
4. P2_mn(x, y) = V2_mn(x) · H2_mn(y) / J̄_mn;
5. V1_mn(x) = (1/col) Σ_{y=1..col} I_mn(x, y);
6. H1_mn(y) = (1/row) Σ_{x=1..row} I_mn(x, y);
7. P1_mn(x, y) = V1_mn(x) · H1_mn(y) / Ī_mn, where Ī_mn is the mean value of I_mn(x, y);
8. II_mn(x, y) = (I_mn(x, y) + α · P1_mn(x, y) + β · P2_mn(x, y)) / (1 + α + β), where α and β are constants;
9. Compute the PCA principal components of II_mn and sort them in descending order of their influence; denote the result R_mn.
3. The face matching method based on a limited number of face images according to claim 2, characterized in that the PCA principal components of II_mn are computed as follows:
a. Record in a matrix II_mnAvge of size row × 1 the mean value of each row of the image frame II_mn;
b. II_mnNew(x, y) = II_mn(x, y) − II_mnAvge(x, 1); II_mnNew records the difference between II_mn and its row means II_mnAvge;
c. Apply singular value decomposition to II_mnNew: [U, S, V] = SVD(II_mnNew), where S is a diagonal matrix and U and V are two mutually orthogonal matrices;
d. Extract the diagonal elements of the diagonal matrix S and sort them in descending order of value; denote the result S_diag;
e. Count the elements in the top i percent of S_diag; denote this count num_n;
f. Take the first num_n columns of the matrix U as the principal components of II_mn.
4. The face matching method based on a limited number of face images according to claim 1, characterized in that the principal components of the face in a video frame in step (5) are computed as follows:
steps 1–8 are identical to steps 1–8 in claim 2;
step 9 computes the principal components of II_mn, denoted VR_mn.
5. The face matching method based on a limited number of face images according to claim 1, characterized in that the first matching assumes the video segment contains Video_frame_num frames in total; the operating steps are as follows:
1. Read the principal components in the top n percent obtained with the E(PC)2A method for frame num_frame = i of the video; let the number of these components be num_n. On the first iteration, i = 15;
2. Read the first num_n principal components obtained with the E(PC)2A method for photograph num_photo = j. On first entering the inner iteration, j = 1;
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and photograph num_photo = j; denote it match_degree_ij; set j = j + 1;
4. If j ≤ 3, return to step 2;
5. Compute the mean correlation between the current video frame and the three photographs, yielding Match_degree_A_i;
6. Set i = i + 15;
7. If i ≤ the total frame count Video_frame_num, return to step 1.
6. The face matching method based on a limited number of face images according to claim 1 or 5, characterized in that the secondary matching assumes the video segment contains Video_frame_num frames in total; the operating steps are as follows:
1. Read the principal components in the top n percent obtained with the PCA method for frame num_frame = i of the video; let the number of these components be num_n. On the first iteration, i = 15;
2. Read the first num_n principal components obtained with the PCA method for photograph num_photo = j. On first entering the inner iteration, j = 1;
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and photograph num_photo = j; denote it match_degree_pca_ij; set j = j + 1;
4. If j ≤ 3, return to step 2;
5. Compute the minimum correlation between the current video frame and the three photographs, yielding Match_degree_min_i;
6. Set i = i + 15;
7. If i ≤ Video_frame_num, return to step 1.
CNB2006101181620A 2006-11-09 2006-11-09 Personal face matching method based on limited multiple personal face images Expired - Fee Related CN100447807C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101181620A CN100447807C (en) 2006-11-09 2006-11-09 Personal face matching method based on limited multiple personal face images


Publications (2)

Publication Number Publication Date
CN1949245A true CN1949245A (en) 2007-04-18
CN100447807C CN100447807C (en) 2008-12-31

Family

ID=38018759

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101181620A Expired - Fee Related CN100447807C (en) 2006-11-09 2006-11-09 Personal face matching method based on limited multiple personal face images

Country Status (1)

Country Link
CN (1) CN100447807C (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100473598B1 (en) * 2002-11-04 2005-03-11 삼성전자주식회사 System and method for detecting veilde face image
CN1209731C (en) * 2003-07-01 2005-07-06 南京大学 Automatic human face identification method based on personal image
KR100543707B1 (en) * 2003-12-04 2006-01-20 삼성전자주식회사 Face recognition method and apparatus using PCA learning per subgroup

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009003314A1 (en) * 2007-07-02 2009-01-08 Shanghai Isvision Technologies Co. Ltd. Method for realizing personal face login system
CN102467669A (en) * 2010-11-17 2012-05-23 北京北大千方科技有限公司 Method and equipment for improving matching precision in laser detection
CN102467669B (en) * 2010-11-17 2015-11-25 北京北大千方科技有限公司 Method and equipment for improving matching precision in laser detection
CN103324918A (en) * 2013-06-25 2013-09-25 浙江中烟工业有限责任公司 Identity authentication method with face identification and lip identification matched
CN103324918B (en) * 2013-06-25 2016-04-27 浙江中烟工业有限责任公司 Identity authentication method combining face recognition with lip-reading recognition

Also Published As

Publication number Publication date
CN100447807C (en) 2008-12-31

Similar Documents

Publication Publication Date Title
CN1777915A (en) Face image candidate area search method, face image candidate area search system, and face image candidate area search program
CN1932847A (en) Method for detecting colour image human face under complex background
CN110287805A (en) Micro- expression recognition method and system based on three stream convolutional neural networks
CN1932846A (en) Video human face tracking and identification method based on appearance model
CN1950844A (en) Object posture estimation/correlation system, object posture estimation/correlation method, and program for the same
CN1599917A (en) Face recognition using kernel fisherfaces
CN1842125A (en) Extraction and scaled display of objects in an image
CN1343339A (en) Video stream classifiable symbol isolation method and system
CN1637775A (en) Positionally encoded document image analysis and labeling
CN102831411B (en) Fast face detection method
CN1892702A (en) Tracking apparatus
CN1928921A (en) Automatic searching method for characteristic points cloud band in three-dimensional scanning system
CN1912889A (en) Deformed fingerprint identification method based on local triangle structure characteristic collection
CN1324530C (en) Digital video texture analytic method
CN1215438C (en) Picture contrast equipment, picture contrast method and picture contrast program
CN1949245A (en) Personal face matching method based on limited multiple personal face images
CN1967561A (en) Method for building a gender recognition classifier, and method and device for gender recognition
CN101067803A (en) Image processing system and method thereof
CN113780423A (en) Single-stage target detection neural network based on multi-scale fusion and industrial product surface defect detection model
CN1368705A (en) Mode identification device using probability density function and its method
CN101034440A (en) Identification method for spherical fruit and vegetables
Jiang et al. Attention M-net for automatic pixel-level micro-crack detection of photovoltaic module cells in electroluminescence images
CN111507119B (en) Identification code recognition method, identification code recognition device, electronic equipment and computer readable storage medium
CN1781122A (en) Method, system and program for searching area considered to be face image
CN1967526A (en) Image search method based on equity search

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHANGHAI JINDENGTAI INFORMATION TECHNOLOGY CO., LT

Free format text: FORMER OWNER: SHANGHAI UNIVERSITY

Effective date: 20130725

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 200444 BAOSHAN, SHANGHAI TO: 200092 HONGKOU, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20130725

Address after: Floor 2, A10, No. 260 Yutian Road, Hongkou District, Shanghai 200092

Patentee after: SHANGHAI JINTAIDENG INFORMATION TECHNOLOGY CO., LTD.

Address before: No. 99 Baoshan District Road, Shanghai 200444

Patentee before: Shanghai University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081231

Termination date: 20141109

EXPY Termination of patent right or utility model