CN100447807C - Personal face matching method based on limited multiple personal face images - Google Patents

Personal face matching method based on limited multiple personal face images

Info

Publication number
CN100447807C
CN100447807C · CNB2006101181620A · CN200610118162A
Authority
CN
China
Prior art keywords
frame
face
video
people
num
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006101181620A
Other languages
Chinese (zh)
Other versions
CN1949245A (en)
Inventor
Li Dan (李丹)
Li Guozheng (李国正)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI JINTAIDENG INFORMATION TECHNOLOGY CO., LTD.
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CNB2006101181620A
Publication of CN1949245A
Application granted
Publication of CN100447807C
Legal status: Expired - Fee Related


Abstract

The invention relates to a face matching method based on a limited number of face images. It uses a dual principal component analysis algorithm, with the enhanced projection-combined PCA method E(PC)²A as the primary technique and PCA as a supplement, to detect face matches from the correlation between facial principal components while avoiding the "curse of dimensionality". The method is suitable for matching a limited number of frontal face images against real-time surveillance video, achieves a high matching accuracy, and has broad application prospects.

Description

Face matching method based on a limited number of face images
Technical field
The present invention relates to a face matching method based on a limited number of face images. It touches on face detection, face matching, pattern recognition, intelligent surveillance, and related fields, and can directly judge whether a face detected in surveillance video matches the face in a set of photographs.
Background art
Face matching is a hot topic in pattern recognition and intelligent surveillance, with a wide range of applications. Effective face matching can improve the speed and efficiency with which a surveillance system identifies a person, bringing great convenience to many aspects of production and daily life. Especially given the frequency of terrorist incidents worldwide and various destabilizing factors at home, face matching has positive and far-reaching significance for security inspection in public places.
Many face matching methods have been proposed, from PCA (principal component analysis) to E(PC)²A (the enhanced projection-combined PCA method), from matching based on the whole face to matching based on local facial parts, and on to techniques that mix global and local matching. These methods still have limitations: 1) in many face matching problems, an unknown face must be matched by training on known faces, and both training and matching suffer from the "curse of dimensionality" (the computational cost grows exponentially with the dimension), which directly lowers the matching rate and raises the time complexity; 2) in current research, the prevailing assumption of a single training face image per person leaves the training samples severely insufficient, which also lowers the matching rate.
Summary of the invention
The object of the present invention is to provide a face matching method based on a limited number of face images, which matches faces between real-time surveillance video and a limited number of collected face photographs.
To this end, the conception of the invention is as follows: a dual principal component analysis algorithm organically combines the enhanced projection-combined PCA method E(PC)²A with the principal component analysis method PCA. E(PC)²A has good face matching capability when only a single face image is available; it adopts two-dimensional projection to effectively reinforce the key feature regions of the face and thereby improve matching. Because of illumination changes and many other factors, supplementing this method with PCA and exploiting the limited number of face images that are available can further improve face matching capability.
The face data used by the present invention comprise, for each person, photographs with 3 different expressions and one segment of real-time surveillance video, which must satisfy the following requirements:
1) the faces in the photographs and the video are frontal, tilted up, down, left, or right by no more than 5 degrees;
2) the background of the photographs and the video is a single color; slight variations of the background color are allowed;
3) the photographs and the video are preferably taken in natural light;
4) the photographs and the video contain only faces.
According to the above inventive conception, the present invention adopts the following technical solution:
A face matching method based on a limited number of face images, characterized by adopting a dual principal component analysis algorithm that takes the two-dimensional-projection enhanced method E(PC)²A as the primary technique and the principal component analysis method PCA as a supplement, and detecting face matches from the correlation between facial principal components; the concrete operation steps are as follows:
(1) Normalization of photograph and video data: convert the RGB photographs and video frame data to grayscale images, then map the gray values into the interval [0, 1];
(2) Face localization in a photograph: compute the correlation between successive rows from top to bottom; where the correlation changes abruptly, the top-of-head boundary has been detected; compute the correlation between successive columns from left to right; an abrupt change marks the left boundary of the face; compute the correlation between successive columns from right to left; an abrupt change marks the right boundary of the face;
(3) Computing the principal components of the face in a photograph: within the located face rectangle, compute the face's principal components with the two-dimensional-projection enhanced method E(PC)²A and with the principal component analysis method PCA, and for both methods sort the principal components in descending order of the influence of each component;
(4) Face localization in the video: extract one frame every few frames and localize the face as in step (2);
(5) Compute the principal components of the face in each extracted video frame;
(6) First matching;
(7) Processing the first matching result: if at least a certain number of the video frames in the result of step (6) have an average matching degree Match_degree_A with the three photographs exceeding a certain value, the match succeeds; if the number of video frames whose Match_degree_A exceeds a certain value falls short of a certain count, the match fails; in all other cases go to step (8);
(8) Secondary matching;
(9) Processing the secondary matching result: if at least a certain number of the video frames in the result of step (8) have a minimum matching degree Match_degree_min over the three photographs exceeding a certain value, the match succeeds; otherwise the match fails.
The above enhanced projection-combined PCA method E(PC)²A proceeds as follows:
Suppose I_mn(x, y) is the gray value of the normalized face, with x ∈ [1, row], y ∈ [1, col], m ∈ [1, P], n ∈ [1, 3], where (x, y) are the coordinates of a pixel and P is the number of distinct persons represented in the face database; suppose each face image has resolution row × col, where row is the number of rows of the image frame and col the number of columns. The concrete operation steps are as follows:
1. J_mn(x, y) = I_mn(x, y) × I_mn(x, y), and mean(J_mn) denotes the mean value of the matrix J_mn;
2. V2_mn(x) = (1/col) · Σ_{y=1}^{col} J_mn(x, y);
3. H2_mn(y) = (1/row) · Σ_{x=1}^{row} J_mn(x, y);
4. P2_mn(x, y) = V2_mn(x) · H2_mn(y) / mean(J_mn);
5. V1_mn(x) = (1/col) · Σ_{y=1}^{col} I_mn(x, y);
6. H1_mn(y) = (1/row) · Σ_{x=1}^{row} I_mn(x, y);
7. P1_mn(x, y) = V1_mn(x) · H1_mn(y) / mean(I_mn), where mean(I_mn) denotes the mean value of I_mn(x, y);
8. II_mn(x, y) = (I_mn(x, y) + α · P1_mn(x, y) + β · P2_mn(x, y)) / (1 + α + β), where α and β are constants;
9. compute the PCA principal components of II_mn and sort them in descending order of the influence of each component, obtaining R_mn.
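The enhancement in steps 1-8 is easy to express in matrix form. Below is a minimal NumPy sketch (the function name epc2a_enhance is ours, not the patent's); it assumes a normalized grayscale face and uses the constants α = 0.25, β = 1.5 that the embodiment adopts later.

import numpy as np

def epc2a_enhance(I, alpha=0.25, beta=1.5):
    """Enhance a normalized grayscale face with first- and second-order
    projection maps, following steps 1-8 above."""
    J = I * I                              # step 1: J(x, y) = I(x, y)^2
    V2 = J.mean(axis=1)                    # step 2: vertical projection of J
    H2 = J.mean(axis=0)                    # step 3: horizontal projection of J
    P2 = np.outer(V2, H2) / J.mean()       # step 4: second-order projection map
    V1 = I.mean(axis=1)                    # step 5: vertical projection of I
    H1 = I.mean(axis=0)                    # step 6: horizontal projection of I
    P1 = np.outer(V1, H1) / I.mean()       # step 7: first-order projection map
    return (I + alpha * P1 + beta * P2) / (1 + alpha + beta)   # step 8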
The PCA principal components of II_mn referred to above are computed as follows:
a. Record in a row × 1 matrix II_mn_avge the mean value of each row of the image frame II_mn;
b. II_mn_new(x, y) = II_mn(x, y) − II_mn_avge(x, 1); II_mn_new thus records the difference between II_mn and its row means II_mn_avge;
c. Apply singular value decomposition: [U, S, V] = SVD(II_mn_new), where S is a diagonal matrix and U and V are two mutually orthogonal matrices;
d. Extract the diagonal elements of the diagonal matrix S and sort them in descending order of value, denoting the result S_diag;
e. Count the elements in the first i percent of S_diag, denoting this count num_n;
f. Take the first num_n columns of the matrix U as the principal components of II_mn.
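A sketch of steps a-f in NumPy follows. The phrase "the elements in the first i percent of S_diag" is read here as the leading singular values that together account for i percent of the spectrum; that reading, and the function name pca_components, are our assumptions.

import numpy as np

def pca_components(II, i_percent=68.0):
    """Steps a-f: principal components of an image frame via SVD."""
    avg = II.mean(axis=1, keepdims=True)        # step a: per-row means (row x 1)
    II_new = II - avg                           # step b: subtract the row means
    U, S, Vt = np.linalg.svd(II_new, full_matrices=False)   # step c: SVD
    # step d: numpy already returns the singular values in descending order
    energy = np.cumsum(S) / S.sum()             # step e: count the leading values
    num_n = int(np.searchsorted(energy, i_percent / 100.0)) + 1
    return U[:, :num_n]                         # step f: first num_n columns of U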
The principal components of a face in a video frame (step (5) above) are computed as follows: steps 1-8 are identical to steps 1-8 of step (3) above, and step 9 computes the principal components of II_mn, obtaining VR_mn.
The first matching above assumes the video segment contains Video_frame_num video frames in total; the concrete steps are as follows:
1. Read in the first n percent of the principal components obtained by the enhanced method E(PC)²A for video frame num_frame = i; suppose the number of these components is num_n. On the first iteration, i = 15;
2. Read in the first num_n principal components obtained by E(PC)²A for photograph num_photo = j. When this step is first entered, j = 1;
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and of photograph num_photo = j, denoting it match_degree_ij; j = j + 1;
4. If j ≤ 3, return to step 2;
5. Compute the mean of the correlations between the current video frame and the three photographs, obtaining Match_degree_A_i;
6. i = i + 15;
7. If i ≤ Video_frame_num, the total number of video frames, return to step 1.
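The patent does not fix the formula for the "correlation" between two sets of principal components. The sketch below (the helper names correlation and first_matching are hypothetical) takes it as the Pearson correlation of the flattened component matrices, and assumes the compared matrices have equal shape and that the extracted frames are keyed by multiples of 15.

import numpy as np

def correlation(A, B):
    """Pearson correlation of two equally-shaped component matrices."""
    return float(np.corrcoef(A.ravel(), B.ravel())[0, 1])

def first_matching(video_components, photo_components, step=15):
    """video_components: dict mapping frame index (15, 30, ...) to the
    E(PC)^2A-based component matrix; photo_components: the 3 photograph
    component matrices. Returns {frame index: Match_degree_A}."""
    match_degree_A = {}
    i = step                                   # first iteration: i = 15
    while i <= max(video_components):          # i <= Video_frame_num
        degrees = [correlation(video_components[i], P) for P in photo_components]
        match_degree_A[i] = float(np.mean(degrees))    # mean over the 3 photos
        i += step                              # i = i + 15
    return match_degree_A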
The secondary matching above likewise assumes the video segment contains Video_frame_num video frames in total; the concrete operation steps are as follows:
1. Read in the first n percent of the principal components obtained by the principal component analysis method PCA for video frame num_frame = i; suppose the number of these components is num_n. On the first iteration, i = 15;
2. Read in the first num_n principal components obtained by PCA for photograph num_photo = j. When this step is first entered, j = 1;
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and of photograph num_photo = j, denoting it match_degree_pca_ij; j = j + 1;
4. If j ≤ 3, return to step 2;
5. Compute the minimum of the correlations between the current video frame and the three photographs, obtaining Match_degree_min_i;
6. i = i + 15;
7. If i ≤ Video_frame_num, return to step 1.
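The secondary matching differs only in operating on the PCA-derived components and in keeping the minimum rather than the mean correlation over the three photographs; a sketch reusing the correlation helper above:

def secondary_matching(video_components, photo_components, step=15):
    """Same loop as first_matching, but on the PCA-based components and
    keeping the minimum correlation (Match_degree_min) per frame."""
    match_degree_min = {}
    i = step
    while i <= max(video_components):
        degrees = [correlation(video_components[i], P) for P in photo_components]
        match_degree_min[i] = float(min(degrees))      # minimum over the 3 photos
        i += step
    return match_degree_min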
Compared with the prior art, the present invention has the following evident substantive distinguishing features and notable advantages:
The method achieves a higher matching accuracy: matching video against a limited number of photographs makes full use of the available samples, and applying the enhanced method E(PC)²A supplemented by the principal component analysis method PCA extracts the more representative facial features from the limited set of images. Previous face matching methods, lacking training samples and relying on a single matching technique, failed to reach a comparably high accuracy. The method is suited to matching a limited number of frontal face images against real-time surveillance video, attains a relatively high test accuracy, and has real practical value.
Description of drawings
Fig. 1 is the flow chart of the face matching method of the present invention.
Fig. 2 is a schematic diagram of the operating interface of the face matching method of the present invention.
Fig. 3 shows a face detected in a photograph in one embodiment of the invention.
Fig. 4 shows a face detected in the video in the same embodiment.
Embodiment
The technical solution of the present invention is described in further detail below with reference to a specific embodiment.
The face database used by the face matching method in this embodiment contains 5 samples in total and includes no personal information such as names. Each sample comprises 3 JPEG frontal head photographs with different expressions and one AVI head video segment.
Referring to Fig. 1 and Fig. 2, the whole face matching procedure is as follows:
(1) Normalization of photograph and video data: convert the RGB photographs to grayscale images, then map the gray values into [0, 1]. From the video, one frame is extracted every 15 frames and normalized in the same way. For example, a pixel might have the value 0.56 after normalization.
(2) Face localization in a photograph: since a photograph contains only a face against a fairly uniform background, the face can be located by finding the top-of-head boundary and the left and right face boundaries. To find the top boundary, compute the correlation between successive rows from top to bottom. Because the background color is nearly uniform, the inter-row correlation stays at roughly 90% or above until the top of the head appears; at the top of the head the correlation changes abruptly, dropping sharply, and as the scan continues it quickly climbs back to about 90%. The point of abrupt change is therefore the top-of-head boundary. The left and right boundaries are detected similarly, except that the correlation is computed between successive columns, scanning left to right for the left boundary and right to left for the right boundary. After localization, every face region is fixed into a matrix of identical size [90×60], as in Fig. 3.
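Steps (1) and (2) can be sketched as follows; the grayscale conversion weights and the 0.9 threshold are assumptions, since the text says only that over the uniform background the inter-line correlation stays around 90% and drops sharply at a face boundary.

import numpy as np

def normalize(rgb):
    """Step (1): convert an RGB frame to grayscale and map values into [0, 1].
    A plain channel average is used; the patent does not fix the weights."""
    return rgb.astype(np.float64).mean(axis=2) / 255.0

def find_boundary(lines, threshold=0.9):
    """Scan successive lines (rows or columns) and return the first index where
    the correlation with the next line drops below the threshold -- the
    'sudden change' described above. Assumes lines are not exactly constant."""
    for k in range(len(lines) - 1):
        if np.corrcoef(lines[k], lines[k + 1])[0, 1] < threshold:
            return k
    return 0

# Usage sketch: top = find_boundary(gray) scans rows top-down; the left and
# right boundaries come from find_boundary(gray.T) and
# gray.shape[1] - find_boundary(gray.T[::-1]).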
(3) Computing the principal components of the face in a photograph: within the face rectangle, compute the principal components with the two-dimensional-projection enhanced method E(PC)²A and with the principal component analysis method PCA, sorting both sets of components in descending order of influence.
E(PC)²A proceeds as follows: suppose I_mn(x, y) is the gray value of the normalized face, with x ∈ [1, 90], y ∈ [1, 60], m ∈ [1, 5], n ∈ [1, 3]:
1. J_mn(x, y) = I_mn(x, y) × I_mn(x, y), and mean(J_mn) denotes the mean value of the matrix J_mn;
2. V2_mn(x) = (1/60) · Σ_{y=1}^{60} J_mn(x, y);
3. H2_mn(y) = (1/90) · Σ_{x=1}^{90} J_mn(x, y);
4. P2_mn(x, y) = V2_mn(x) · H2_mn(y) / mean(J_mn);
5. V1_mn(x) = (1/60) · Σ_{y=1}^{60} I_mn(x, y);
6. H1_mn(y) = (1/90) · Σ_{x=1}^{90} I_mn(x, y);
7. P1_mn(x, y) = V1_mn(x) · H1_mn(y) / mean(I_mn);
8. II_mn(x, y) = (I_mn(x, y) + α · P1_mn(x, y) + β · P2_mn(x, y)) / (1 + α + β);
9. compute the principal components of II_mn and sort them in descending order of influence, obtaining R_mn; take α = 0.25 and β = 1.5.
PCA method: compute the principal components of the face I_mn(x, y) and sort them in descending order of influence, obtaining Q_mn.
For example, R_mn is a 90 × 60 matrix and Q_mn is a 90 × 60 matrix.
(4) Face localization in the video: during matching, not every frame is extracted as a frame to match; rather, one frame is taken every 15 frames. In each extracted frame the face is located as described in step (2). The result is shown in Fig. 4.
(5) Computing the principal components of the face in a video frame: for each extracted frame, compute the principal components with the two-dimensional-projection enhanced method E(PC)²A and with the principal component analysis method PCA. Suppose the video has N frames in total, so num = [N/15] frames are extracted in all; suppose the first 68% of the principal components computed by E(PC)²A and by PCA are VE_ab and VP_ab respectively (a ∈ [1, 5], b ∈ [1, num]). For example, VE_ab might be a 90 × 8 matrix and VP_ab a 90 × 6 matrix.
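Tying the pieces together, step (5) reduces to sampling every 15th frame and running both component extractors on each located face; a sketch reusing the hypothetical epc2a_enhance and pca_components helpers from the sketches above:

def video_face_components(faces, i_percent=68.0, step=15):
    """faces: list of located, normalized 90x60 face images, one per video
    frame. Every 15th frame is processed, so num = N // 15 frames in total.
    Returns the VE (E(PC)^2A-based) and VP (PCA-based) component dicts."""
    VE, VP = {}, {}
    for idx in range(step, len(faces) + 1, step):
        face = faces[idx - 1]
        VE[idx] = pca_components(epc2a_enhance(face), i_percent)  # E(PC)^2A branch
        VP[idx] = pca_components(face, i_percent)                 # plain PCA branch
    return VE, VP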
(6) First matching (assuming N video frames in the segment):
1. Read in the first 68% of the principal components obtained by E(PC)²A for video frame num_frame = i; suppose their number is num_n. On the first iteration, i = 15.
2. Read in the first num_n principal components obtained by E(PC)²A for photograph num_photo = j. When this step is first entered, j = 1.
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and of photograph num_photo = j, denoting it match_degree_ij; j = j + 1. For example, the correlation between a matched video frame and a photograph might be 0.83.
4. If j ≤ 3, return to step 2.
5. Compute the mean of the correlations between the current video frame and the three photographs, obtaining Match_degree_A_i. For example, the average correlation between the current frame and the three photographs might be 0.87.
6. i = i + 15.
7. If i ≤ N, return to step 1.
(7) Processing the first matching result: if at least [num/3] of the video frames in the result of step (6) have a matching degree with the photographs exceeding 70%, the match succeeds; if more than [num/3] of the frames fail to reach 50%, the match fails; in all other cases go to step (8). For example, given the first matching results {0.85, 0.84, 0.78, 0.79, 0.93, 0.82, 0.75, 0.71, 0.59, 0.78, 0.62, 0.61, 0.89, 0.76, 0.64}, the first matching passes.
(8) Secondary matching (assuming N video frames in the segment):
1. Read in the first 68% of the principal components obtained by the principal component analysis method PCA for video frame num_frame = i; suppose their number is num_n. On the first iteration, i = 15.
2. Read in the first num_n principal components obtained by PCA for photograph num_photo = j. When this step is first entered, j = 1.
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and of photograph num_photo = j, denoting it match_degree_pca_ij; j = j + 1. For example, the correlation between a matched video frame and a photograph might be 0.78.
4. If j ≤ 3, return to step 2.
5. Compute the minimum of the correlations between the current video frame and the three photographs, obtaining Match_degree_min_i. For example, the minimum correlation between the current frame and the three photographs might be 0.73.
6. i = i + 15.
7. If i ≤ N, return to step 1.
(9) Processing the secondary matching result: if at least [num/3] of the video frames in the result of step (8) have a matching degree with the photographs exceeding 70%, the match succeeds; otherwise the match fails. For example, given the secondary matching results {0.85, 0.83, 0.72, 0.81, 0.76, 0.79, 0.60, 0.75, 0.86, 0.90, 0.70, 0.84, 0.88}, the secondary matching passes.
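The threshold logic of steps (7) and (9) can be collected into one function (the name decide is ours). Reading the failure condition as "more than [num/3] frames fall below 50%" is one interpretation of the original wording.

def decide(match_degree_A, match_degree_min, num):
    """Steps (7) and (9): first pass on the mean matching degrees, second
    pass on the PCA-based minima; num is the number of extracted frames."""
    need = num // 3                                  # the [num/3] count in the text
    first = list(match_degree_A.values())
    if sum(d > 0.70 for d in first) >= need:         # step (7): enough frames above 70%
        return "match"
    if sum(d < 0.50 for d in first) > need:          # step (7): too many frames below 50%
        return "no match"
    second = list(match_degree_min.values())         # step (9): secondary result
    if sum(d > 0.70 for d in second) >= need:
        return "match"
    return "no match"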

Claims (6)

1. A face matching method based on a limited number of face images, characterized by adopting a dual principal component analysis algorithm that takes the two-dimensional-projection enhanced method E(PC)²A as the primary technique and the principal component analysis method PCA as a supplement, and detecting face matches from the correlation between facial principal components; the concrete operation steps are as follows:
(1) Normalization of photograph and video data: convert the RGB photographs and video frame data to grayscale images, then map the gray values into the interval [0, 1];
(2) Face localization in a photograph: compute the correlation between successive rows from top to bottom; where the correlation changes abruptly, the top-of-head boundary has been detected; compute the correlation between successive columns from left to right; an abrupt change marks the left boundary of the face; compute the correlation between successive columns from right to left; an abrupt change marks the right boundary of the face;
(3) Computing the principal components of the face in a photograph: within the located face rectangle, compute the face's principal components with the two-dimensional-projection enhanced method E(PC)²A and with the principal component analysis method PCA, and for both methods sort the principal components in descending order of the influence of each component;
(4) Face localization in the video: extract one frame every few frames and localize the face as in step (2);
(5) Compute the principal components of the face in each extracted video frame;
(6) First matching;
(7) Processing the first matching result: if at least a certain number of the video frames in the result of step (6) have an average matching degree Match_degree_A with the three photographs exceeding a certain value, the match succeeds; if the number of video frames whose Match_degree_A exceeds a certain value falls short of a certain count, the match fails; in all other cases go to step (8);
(8) Secondary matching;
(9) Processing the secondary matching result: if at least a certain number of the video frames in the result of step (8) have a minimum matching degree Match_degree_min over the three photographs exceeding a certain value, the match succeeds; otherwise the match fails.
2. The face matching method based on a limited number of face images according to claim 1, characterized in that the enhanced method E(PC)²A proceeds as follows:
Suppose I_mn(x, y) is the gray value of the normalized face, with x ∈ [1, row], y ∈ [1, col], m ∈ [1, P], n ∈ [1, 3], where (x, y) are the coordinates of a pixel and P is the number of distinct persons represented in the face database; suppose each face image has resolution row × col, where row is the number of rows of the image frame and col the number of columns. The concrete operation steps are as follows:
1. J_mn(x, y) = I_mn(x, y) × I_mn(x, y), and mean(J_mn) denotes the mean value of the matrix J_mn;
2. V2_mn(x) = (1/col) · Σ_{y=1}^{col} J_mn(x, y);
3. H2_mn(y) = (1/row) · Σ_{x=1}^{row} J_mn(x, y);
4. P2_mn(x, y) = V2_mn(x) · H2_mn(y) / mean(J_mn);
5. V1_mn(x) = (1/col) · Σ_{y=1}^{col} I_mn(x, y);
6. H1_mn(y) = (1/row) · Σ_{x=1}^{row} I_mn(x, y);
7. P1_mn(x, y) = V1_mn(x) · H1_mn(y) / mean(I_mn), where mean(I_mn) denotes the mean value of I_mn(x, y);
8. II_mn(x, y) = (I_mn(x, y) + α · P1_mn(x, y) + β · P2_mn(x, y)) / (1 + α + β), where α and β are constants;
9. compute the PCA principal components of II_mn and sort them in descending order of the influence of each component, obtaining R_mn.
3. The face matching method based on a limited number of face images according to claim 2, characterized in that the PCA principal components of II_mn are computed as follows:
a. Record in a row × 1 matrix II_mn_avge the mean value of each row of the image frame II_mn;
b. II_mn_new(x, y) = II_mn(x, y) − II_mn_avge(x, 1); II_mn_new thus records the difference between II_mn and its row means II_mn_avge;
c. Apply singular value decomposition: [U, S, V] = SVD(II_mn_new), where S is a diagonal matrix and U and V are two mutually orthogonal matrices;
d. Extract the diagonal elements of the diagonal matrix S and sort them in descending order of value, denoting the result S_diag;
e. Count the elements in the first i percent of S_diag, denoting this count num_n;
f. Take the first num_n columns of the matrix U as the principal components of II_mn.
4. The face matching method based on a limited number of face images according to claim 1, characterized in that the principal components of a face in a video frame in step (5) are computed as follows:
steps 1-8 are identical to steps 1-8 in claim 2;
step 9 computes the principal components of II_mn, obtaining VR_mn.
5. The face matching method based on a limited number of face images according to claim 1, characterized in that the first matching assumes the video segment contains Video_frame_num video frames in total, with operation steps as follows:
1. Read in the first n percent of the principal components obtained by the enhanced method E(PC)²A for video frame num_frame = i; suppose the number of these components is num_n. On the first iteration, i = 15;
2. Read in the first num_n principal components obtained by E(PC)²A for photograph num_photo = j. When this step is first entered, j = 1;
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and of photograph num_photo = j, denoting it match_degree_ij; j = j + 1;
4. If j ≤ 3, return to step 2;
5. Compute the mean of the correlations between the current video frame and the three photographs, obtaining Match_degree_A_i;
6. i = i + 15;
7. If i ≤ Video_frame_num, the total number of video frames, return to step 1.
6. The face matching method based on a limited number of face images according to claim 1 or 5, characterized in that the secondary matching assumes the video segment contains Video_frame_num video frames in total, with operation steps as follows:
1. Read in the first n percent of the principal components obtained by the principal component analysis method PCA for video frame num_frame = i; suppose the number of these components is num_n. On the first iteration, i = 15;
2. Read in the first num_n principal components obtained by PCA for photograph num_photo = j. When this step is first entered, j = 1;
3. Compute the correlation between the first num_n principal components of video frame num_frame = i and of photograph num_photo = j, denoting it match_degree_pca_ij; j = j + 1;
4. If j ≤ 3, return to step 2;
5. Compute the minimum of the correlations between the current video frame and the three photographs, obtaining Match_degree_min_i;
6. i = i + 15;
7. If i ≤ Video_frame_num, return to step 1.
CNB2006101181620A 2006-11-09 2006-11-09 Personal face matching method based on limited multiple personal face images Expired - Fee Related CN100447807C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101181620A CN100447807C (en) 2006-11-09 2006-11-09 Personal face matching method based on limited multiple personal face images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101181620A CN100447807C (en) 2006-11-09 2006-11-09 Personal face matching method based on limited multiple personal face images

Publications (2)

Publication Number Publication Date
CN1949245A CN1949245A (en) 2007-04-18
CN100447807C true CN100447807C (en) 2008-12-31

Family

ID=38018759

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101181620A Expired - Fee Related CN100447807C (en) 2006-11-09 2006-11-09 Personal face matching method based on limited multiple personal face images

Country Status (1)

Country Link
CN (1) CN100447807C (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009003314A1 (en) * 2007-07-02 2009-01-08 Shanghai Isvision Technologies Co. Ltd. Method for realizing personal face login system
CN102467669B (en) * 2010-11-17 2015-11-25 北京北大千方科技有限公司 Method and equipment for improving matching precision in laser detection
CN103324918B (en) * 2013-06-25 2016-04-27 浙江中烟工业有限责任公司 The identity identifying method that a kind of recognition of face matches with lipreading recognition


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1416425A1 (en) * 2002-11-04 2004-05-06 Samsung Electronics Co., Ltd. System and method for detecting a face
JP2004158013A (en) * 2002-11-04 2004-06-03 Samsung Electronics Co Ltd Face detection system and method, and face image authentication method
CN1477588A (en) * 2003-07-01 2004-02-25 南京大学 Automatic human face identification method based on personal image
US20050123202A1 (en) * 2003-12-04 2005-06-09 Samsung Electronics Co., Ltd. Face recognition apparatus and method using PCA learning per subgroup

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Jian Yang, David Zhang, Alejandro F. Frangi, Jing-yu Yang. Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 1, 2004. *
Zhang Shengliang, Yang Jingyu. A face recognition algorithm combining two-dimensional projection with PCA (二维投影与PCA相结合的人脸识别算法). Computer Engineering, Vol. 32, No. 16, 2006. *
Xiong Zhiyong, Shen Li. A general face image recognition system based on dual-attribute graph representation (基于双属性图表示的通用人脸图像识别系统). Chinese Journal of Computers, Vol. 24, No. 7, 2001. *
Wang Yunhong, Fan Wei, Tan Tieniu. A subspace face recognition algorithm fusing global and local features (融合全局与局部特征的子空间人脸识别算法). Chinese Journal of Computers, Vol. 28, No. 10, 2005. *

Also Published As

Publication number Publication date
CN1949245A (en) 2007-04-18


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHANGHAI JINDENGTAI INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SHANGHAI UNIVERSITY

Effective date: 20130725

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 200444 BAOSHAN, SHANGHAI TO: 200092 HONGKOU, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20130725

Address after: A10, 2nd Floor, No. 260 Yutian Road, Hongkou District, Shanghai 200092

Patentee after: SHANGHAI JINTAIDENG INFORMATION TECHNOLOGY CO., LTD.

Address before: No. 99 Shangda Road, Baoshan District, Shanghai 200444

Patentee before: Shanghai University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081231

Termination date: 20141109

EXPY Termination of patent right or utility model