Face matching method based on a limited number of facial images
Technical field
The present invention relates to a face matching method based on a limited number of facial images, and concerns fields such as face detection, face matching, pattern recognition, and intelligent monitoring. It can directly judge whether a face detected in a surveillance video matches the face in a small set of photographs.
Background technology
Face matching is currently a hot topic in pattern recognition and intelligent monitoring, with a wide range of applications. Effective face matching can improve the speed and efficiency with which a surveillance system identifies a person, bringing great convenience to many aspects of production and daily life. Especially in the current environment, with terrorist incidents occurring frequently worldwide and certain domestic factors threatening social stability, face matching has positive and profound significance for security inspection in public places.
Many face matching methods have been proposed, from PCA (principal component analysis) to E(PC)²A (the enhanced standard eigenface expansion technique), from matching based on the whole face to matching based on local parts, and on to techniques that mix whole-face and local matching. These methods still have limitations: 1) in many face matching problems, an unknown face must be matched by training on known faces, and the training and matching process suffers from the "curse of dimensionality" (the amount of computation grows exponentially with the dimension), which directly lowers the matching rate and raises the time complexity; 2) current research mostly takes a single face image per person as the training sample, so the training samples are severely insufficient, which also lowers the matching rate.
Summary of the invention
The object of the present invention is to provide a face matching method based on a limited number of facial images, which matches faces between a real-time surveillance video and a limited number of collected face photographs.
To achieve this, the idea of the present invention is to adopt a dual principal component analysis algorithm that organically combines the enhanced standard eigenface expansion method E(PC)²A with the principal component analysis method PCA. E(PC)²A has good face matching capability even when only a single face image is available; it adopts a two-dimensional projection method that effectively enhances the key feature regions of the face and thereby improves matching capability. Because of many factors such as lighting variation, supplementing this method with PCA and exploiting the limited number of available face images can further improve the matching capability.
In the face data used by the present invention, each person has three photographs with different expressions and one segment of real-time surveillance video. The data must satisfy the following requirements:
1) the faces in the photographs and the video are frontal, with tilt angles up, down, left, and right all within 5 degrees;
2) the background of the photographs and the video is a single color; slight variation of the background color is allowed;
3) the photographs and the video are preferably taken in natural light;
4) the photographs and the video contain only the face.
According to the above inventive concept, the present invention adopts the following technical solution:
A face matching method based on a limited number of facial images, characterized by adopting a dual principal component analysis algorithm that takes the two-dimensional-projection enhanced standard eigenface expansion method E(PC)²A as primary and the principal component analysis method PCA as auxiliary, and detects face matches using the correlation between face principal components. The concrete operation steps are as follows:
(1) Normalization of photo and video data: convert the RGB photographs and video frames to gray-level images, then map the gray values into the interval [0, 1];
(2) Location of the face in a photograph: compute the correlation between adjacent rows of the photograph from top to bottom; when the correlation changes abruptly, the crown boundary has been detected. Compute the correlation between adjacent columns from left to right; when the correlation changes abruptly, the left boundary of the face has been detected. Compute the correlation between adjacent columns from right to left; when the correlation changes abruptly, the right boundary of the face has been detected;
(3) Computing the principal components of the face in a photograph: within the located face rectangle, compute the principal components of the face with the two-dimensional-projection enhanced standard eigenface expansion method E(PC)²A and with the principal component analysis method PCA; for both methods, sort the principal components in descending order of the influence of each component;
(4) Location of the face in the video: extract one frame every several frames and locate the face in it, using the same method as step (2);
(5) Compute the principal components of the face in each extracted video frame;
(6) First matching;
(7) Processing the first matching result: if, in the result of step (6), the number of video frames whose average matching degree Match_degree_A over the three photographs exceeds a certain value reaches a certain count, the match is successful; if the number of video frames whose average matching degree Match_degree_A exceeds a certain value does not reach a certain count, the match is unsuccessful; in all other cases jump to step (8);
(8) Secondary matching;
(9) Processing the secondary matching result: if, in the result of step (8), the number of video frames whose minimum matching degree Match_degree_min over the three photographs exceeds a certain value reaches a certain count, the match is successful; otherwise the match is unsuccessful.
The above enhanced standard eigenface expansion method E(PC)²A is as follows. Suppose I_mn(x, y) is the gray value of the normalized face, with x ∈ [1, row], y ∈ [1, col], m ∈ [1, P], n ∈ [1, 3]; (x, y) are pixel coordinates, and P is the number of distinct persons represented in the face database. Suppose the resolution of each face image frame is row × col, where row is the number of rows of the image frame and col the number of columns. The concrete operation steps are as follows:
① J_mn(x, y) = I_mn(x, y) × I_mn(x, y), where J̄_mn is the mean value of the matrix J_mn(x, y);
②
③
④
⑤
⑥
⑦
Ī_mn(x, y) denotes the mean value of I_mn(x, y);
⑧
where α and β are constants;
⑨ compute the PCA principal components of II_mn and sort them in descending order of the influence of each component, obtaining R_mn.
The above method for computing the PCA principal components of II_mn is as follows:
a. record in a matrix II_mn_avge of size row × 1 the mean value of each row of the image frame II_mn;
b. II_mn_new(x, y) = II_mn(x, y) − II_mn_avge(x, 1), where II_mn_new records the difference between II_mn and its row-mean matrix II_mn_avge;
c. perform singular value decomposition on II_mn_new: [U, S, V] = SVD(II_mn_new), where S is a diagonal matrix and U and V are two mutually orthogonal matrices;
d. extract the diagonal elements of the diagonal matrix S and sort them in descending order of element value, denoted S_diag;
e. count the elements in the first i percent of S_diag; denote this number num_n;
f. take the first num_n columns of the matrix U as the principal components of II_mn.
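As an illustration only, steps a–f can be sketched in Python with NumPy; the function name, the interpretation of "the first i percent" as a count of the sorted singular values, and the 90 × 60 frame size used in the call are our assumptions, not part of the claimed method:

```python
import numpy as np

def pca_principal_components(frame, percent=68):
    """Sketch of steps a-f: row-mean centering, SVD, and selection of
    the leading columns of U as principal components."""
    # a-b: subtract the mean of each row (a row x 1 matrix) from the frame
    frame_new = frame - frame.mean(axis=1, keepdims=True)
    # c: singular value decomposition [U, S, V] = SVD(frame_new)
    U, S, Vt = np.linalg.svd(frame_new, full_matrices=False)
    # d: np.linalg.svd already returns the singular values in descending order
    S_diag = S
    # e: number of elements in the first `percent` percent of S_diag
    #    (read here as a count; an energy-based cut is another plausible reading)
    num_n = max(1, int(np.ceil(len(S_diag) * percent / 100)))
    # f: the first num_n columns of U are taken as the principal components
    return U[:, :num_n]

components = pca_principal_components(np.random.rand(90, 60))
```

For a 90 × 60 frame this keeps the first 41 of the 60 singular directions; the columns of U are orthonormal, so the retained components are mutually orthogonal.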
The steps for computing the principal components of the face in a video frame are: steps ①–⑧ are identical to steps ①–⑧ of step (3) above; step ⑨ computes the principal components of II_mn, denoted VR_mn.
The above first matching assumes the video segment contains Video_frame_num frames in total; the concrete steps are as follows:
① read the first n percent of the principal components of the num_frame = i-th video frame obtained by the enhanced standard eigenface expansion method E(PC)²A; suppose the number of these principal components is num_n. On the first iteration, i = 15;
② read the first num_n principal components of the num_photo = j-th photograph obtained by E(PC)²A. On first entry to this second-level iteration, j = 1;
③ compute the correlation between the first num_n principal components of the i-th video frame and of the j-th photograph, denoted match_degree_ij; set j = j + 1;
④ if j ≤ 3, continue with step ②;
⑤ compute the mean value of the correlations between the current video frame and the three photographs, obtaining Match_degree_A_i;
⑥ i = i + 15;
⑦ if i ≤ the total frame count Video_frame_num, continue with step ①.
The above secondary matching assumes the video segment contains Video_frame_num frames in total; the concrete operation steps are as follows:
① read the first n percent of the principal components of the num_frame = i-th video frame obtained by the principal component analysis method PCA; suppose the number of these principal components is num_n. On the first iteration, i = 15;
② read the first num_n principal components of the num_photo = j-th photograph obtained by PCA. On first entry to this second-level iteration, j = 1;
③ compute the correlation between the first num_n principal components of the i-th video frame and of the j-th photograph, denoted match_degree_pca_ij; set j = j + 1;
④ if j ≤ 3, continue with step ②;
⑤ compute the minimum value of the correlations between the current video frame and the three photographs, obtaining Match_degree_min_i;
⑥ i = i + 15;
⑦ if i ≤ Video_frame_num, continue with step ①.
Compared with the prior art, the present invention has the following obvious and substantive distinguishing features and remarkable advantages:
The method of the present invention achieves a higher matching accuracy: because face matching based on a video and a limited number of photographs can make full use of the available samples, and because the enhanced standard eigenface expansion method E(PC)²A and the principal component analysis method PCA complement each other, more representative face features can be found in the limited set of images. Previous face matching methods failed to achieve a high matching accuracy because of the lack of training samples and the use of a single matching method. The method of the present invention is applicable to matching between a limited number of frontal face photographs and a real-time surveillance video, achieves a high test accuracy, and has practical use value.
Description of drawings
Fig. 1 is the flow chart of the face matching method of the present invention.
Fig. 2 is a schematic diagram of the operation interface of the face matching method of the present invention.
Fig. 3 is a face image detected from a photograph in one embodiment of the invention.
Fig. 4 is a face image detected from the video in this embodiment.
Embodiment
The technical scheme of the present invention is described in further detail below in conjunction with a specific embodiment.
The face matching database adopted in this embodiment contains 5 samples in total and includes no personal information such as names. Each sample consists of three JPEG frontal head photographs with different expressions and one AVI head video segment.
Referring to Fig. 1 and Fig. 2, the whole face matching implementation process is as follows:
(1) Normalization of photo and video data: convert the RGB photographs to gray-level images, then map the gray values into the interval [0, 1]. Extract one frame from the video every 15 frames and normalize it in the same way. For example, the value of a certain pixel after normalization is 0.56.
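A minimal sketch of this normalization step, assuming the common luminance weights for the RGB-to-gray conversion (the source only says "convert to gray") and 8-bit input values:

```python
import numpy as np

def normalize(rgb):
    """Convert an 8-bit RGB image to gray and map the gray values into [0, 1]."""
    rgb = rgb.astype(float)
    # assumed luminance weights; the patent does not specify the conversion
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return gray / 255.0

frame = normalize(np.random.randint(0, 256, size=(90, 60, 3)))
```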
(2) Location of the face in a photograph: since a photograph contains only the face and a fairly uniform background color, the face can be located by determining the crown boundary, the left face boundary, and the right face boundary. To determine the crown boundary, compute the correlation between adjacent rows from top to bottom. Because the background color is almost uniform, the correlation between rows stays at roughly 90% or more before the crown is reached; once the crown is reached the correlation changes abruptly and drops sharply, and as the computation continues downward the correlation immediately returns to about 90%. The point of abrupt change is therefore the crown boundary. The left and right boundaries are detected similarly, except that for the left boundary the correlation between adjacent columns is computed from left to right, and for the right boundary from right to left. After the face is located, every face region is fixed into a matrix of the same size [90 × 60], as shown in Fig. 3.
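The crown-boundary scan described above might look as follows; the 0.9 threshold comes from the "about 90%" remark, and reading "correlation between rows" as the Pearson correlation of adjacent rows is our assumption:

```python
import numpy as np

def find_top_boundary(img, drop=0.9):
    """Scan adjacent-row correlations from top to bottom; the first row
    where the correlation falls below `drop` marks the crown boundary.
    Left/right boundaries work the same way on columns (use img.T)."""
    for x in range(img.shape[0] - 1):
        r = np.corrcoef(img[x], img[x + 1])[0, 1]
        if r < drop:
            return x
    return None  # no abrupt change found

# toy image: 30 identical background rows, then rows with a different pattern
ramp = np.linspace(0.0, 1.0, 60)
img = np.vstack([np.tile(ramp, (30, 1)), np.tile(ramp[::-1], (60, 1))])
boundary = find_top_boundary(img)  # abrupt change between rows 29 and 30
```

A perfectly constant background row would make the Pearson correlation undefined, so a practical implementation would need the "slight variation of background color" that the requirements above allow.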
(3) Computing the principal components of the face in a photograph: within the face rectangle, compute the principal components of the face with the two-dimensional-projection enhanced standard eigenface expansion method E(PC)²A and with the principal component analysis method PCA; for both methods, sort the principal components in descending order of the influence of each component.
The enhanced standard eigenface expansion method E(PC)²A is specifically as follows. Suppose I_mn(x, y) is the gray value of the normalized face (x ∈ [1, 90], y ∈ [1, 60], m ∈ [1, 5], n ∈ [1, 3]).
① J_mn(x, y) = I_mn(x, y) × I_mn(x, y), where J̄_mn is the mean value of the matrix J_mn(x, y);
②
③
④
⑤
⑥
⑦
⑧
⑨ compute the principal components of II_mn and sort them in descending order of the influence of each component, obtaining R_mn. Take α = 0.25, β = 1.5.
The PCA method: compute the principal components of the face I_mn(x, y) and sort them in descending order of the influence of each component, obtaining Q_mn. For example, R_mn is a 90 × 60 matrix and Q_mn is a 90 × 60 matrix.
(4) Location of the face in the video: in the matching process, not every frame is extracted for matching; one frame is taken every 15 frames. In each extracted frame the face is located as described in step (2). The result is shown in Fig. 4.
(5) Computing the principal components of the face in a video frame: for each extracted frame, compute the principal components of the face with the two-dimensional-projection enhanced standard eigenface expansion method E(PC)²A and with the principal component analysis method PCA. Suppose the video contains N frames in total, so num = [N/15] frames are extracted; suppose the first 68% of the principal components computed by E(PC)²A and by PCA are VE_ab and VP_ab respectively (a ∈ [1, 5], b ∈ [1, num]). For example, VE_ab is a 90 × 8 matrix and VP_ab is a 90 × 6 matrix.
(6) First matching (suppose the video segment contains N frames in total):
① read the first 68% of the principal components of the num_frame = i-th video frame obtained by the enhanced standard eigenface expansion method E(PC)²A; suppose the number of these principal components is num_n. On the first iteration, i = 15.
② read the first num_n principal components of the num_photo = j-th photograph obtained by E(PC)²A. On first entry to this second-level iteration, j = 1.
③ compute the correlation between the first num_n principal components of the i-th video frame and of the j-th photograph, denoted match_degree_ij; set j = j + 1. For example, the correlation between a certain matched video frame and a certain photograph is 0.83.
④ if j ≤ 3, continue with step ②.
⑤ compute the mean value of the correlations between the current video frame and the three photographs, obtaining Match_degree_A_i. For example, the average correlation between the current video frame and the three photographs is 0.87.
⑥ i = i + 15.
⑦ if i ≤ N, continue with step ①.
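One plausible reading of steps ①–⑤ is sketched below, with the "correlation between principal components" computed as the Pearson correlation of the flattened matrices; the source does not fix this detail, and all function and variable names here are ours:

```python
import numpy as np

def correlation(pc_a, pc_b):
    """Pearson correlation between two flattened principal-component matrices."""
    return float(np.corrcoef(pc_a.ravel(), pc_b.ravel())[0, 1])

def first_match_degrees(sampled_frame_pcs, photo_pcs):
    """For each extracted video frame, average its correlation against the
    principal components of the three photographs (steps 3 and 5)."""
    return [float(np.mean([correlation(f, p) for p in photo_pcs]))
            for f in sampled_frame_pcs]

# toy data standing in for the E(PC)2A principal components (90 x 8 here)
photos = [np.random.rand(90, 8) for _ in range(3)]
frames = [np.random.rand(90, 8) for _ in range(4)] + [photos[0]]
degrees = first_match_degrees(frames, photos)
```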
(7) Processing the first matching result: if, in the result of step (6), at least [num/3] video frames have a matching degree with the photographs exceeding 70%, the match is successful; if more than [num/3] video frames have a matching degree that does not reach 50%, the match is unsuccessful; in all other cases jump to step (8). For example, the result of a certain first matching is {0.85, 0.84, 0.78, 0.79, 0.93, 0.82, 0.75, 0.71, 0.59, 0.78, 0.62, 0.61, 0.89, 0.76, 0.64}, so the first matching passes.
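The decision rule of step (7), as we read it, can be sketched as follows; the 70% and 50% thresholds and the [num/3] count come from the text, while the exact comparison operators are our interpretation:

```python
import math

def first_match_decision(avg_degrees, hi=0.70, lo=0.50):
    """Success if at least ceil(num/3) average degrees exceed `hi`;
    failure if more than ceil(num/3) fall below `lo`; otherwise the
    secondary matching of step (8) is tried."""
    third = math.ceil(len(avg_degrees) / 3)
    if sum(d > hi for d in avg_degrees) >= third:
        return "success"
    if sum(d < lo for d in avg_degrees) > third:
        return "failure"
    return "secondary"

# the example result from the text: 11 of 15 frames exceed 0.70
result = first_match_decision([0.85, 0.84, 0.78, 0.79, 0.93, 0.82, 0.75, 0.71,
                               0.59, 0.78, 0.62, 0.61, 0.89, 0.76, 0.64])
```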
(8) Secondary matching (suppose the video segment contains N frames in total):
① read the first 68% of the principal components of the num_frame = i-th video frame obtained by the principal component analysis method PCA; suppose the number of these principal components is num_n. On the first iteration, i = 15.
② read the first num_n principal components of the num_photo = j-th photograph obtained by PCA. On first entry to this second-level iteration, j = 1.
③ compute the correlation between the first num_n principal components of the i-th video frame and of the j-th photograph, denoted match_degree_pca_ij; set j = j + 1. For example, the correlation between a certain matched video frame and a certain photograph is 0.78.
④ if j ≤ 3, continue with step ②.
⑤ compute the minimum value of the correlations between the current video frame and the three photographs, obtaining Match_degree_min_i. For example, the minimum correlation between the current video frame and the three photographs is 0.73.
⑥ i = i + 15.
⑦ if i ≤ N, continue with step ①.
(9) Processing the secondary matching result: if, in the result of step (8), at least [num/3] video frames have a matching degree with the photographs exceeding 70%, the match is successful; otherwise the match is unsuccessful. For example, the result of a certain secondary matching is {0.85, 0.83, 0.72, 0.81, 0.76, 0.79, 0.60, 0.75, 0.86, 0.90, 0.70, 0.84, 0.88}, so the secondary matching passes.
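Step (9) can be sketched in the same way as step (7); here the per-frame value is the minimum correlation Match_degree_min over the three photographs. The 70% threshold and the [num/3] count are from the text, while the comparison operators are our interpretation:

```python
import math

def secondary_match_decision(min_degrees, hi=0.70):
    """Success if at least ceil(num/3) of the per-frame minimum
    correlations exceed `hi`; otherwise the match fails."""
    third = math.ceil(len(min_degrees) / 3)
    return "success" if sum(d > hi for d in min_degrees) >= third else "failure"

# the example result from the text: 11 of 13 frames exceed 0.70
result = secondary_match_decision([0.85, 0.83, 0.72, 0.81, 0.76, 0.79, 0.60,
                                   0.75, 0.86, 0.90, 0.70, 0.84, 0.88])
```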