CN101226591A - Personal identification method based on mobile phone pick-up head combining with human face recognition technique - Google Patents

Personal identification method based on mobile phone pick-up head combining with human face recognition technique

Info

Publication number
CN101226591A
CN101226591A CNA2008100332872A CN200810033287A
Authority
CN
China
Prior art keywords
human face
input
mobile phone
face recognition
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008100332872A
Other languages
Chinese (zh)
Inventor
陈华曦
刘决仕
申瑞民
童任
金晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CNA2008100332872A priority Critical patent/CN101226591A/en
Publication of CN101226591A publication Critical patent/CN101226591A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

Disclosed is a personal identification method in the fields of wireless networking and pattern recognition that combines a mobile phone camera with face recognition. Easily identifiable information about the person to be identified and a face image captured by the phone camera are sent over the mobile network to a server. Using the easily identifiable information (sex, age and the like), the server assembles the matching database records into a candidate data set, trains a face recognition transform on that set with principal component analysis and linear discriminant analysis, recognizes the face region of the uploaded image, and returns the most similar results, up to a number set by the user, to the phone for the user's final judgment. The invention greatly reduces cost and difficulty while increasing the speed and accuracy of identification.

Description

Personal identification method based on mobile phone pick-up head combining with human face recognition technique
Technical field
The present invention relates to a method in the field of image recognition technology, specifically a personal identification method that combines a mobile phone camera with face recognition technology.
Background technology
Biometric identification uses biological characteristics of the human body to identify a person. Traditional biometric features include the face, fingerprints, the iris, voiceprints, odor, and so on. Biometric identification has been widely applied in fields such as public security, banking, and personal information protection. Face detection refers to locating the face region in an image, while face recognition refers to verifying identity from the visual biological information of the face; both are important techniques in pattern recognition and are widely used in identity recognition, human-machine interfaces, content-based retrieval, digital video processing, and visual surveillance. Traditional biometric methods such as fingerprint recognition and iris recognition require special acquisition equipment that is expensive and bulky, can only be used in a few settings, and is therefore unsuited to widespread use.
A search of the prior art found Chinese patent application No. 200410097054.0, entitled "Device and method for confirming identity using a camera provided on a mobile phone". That invention uses the camera on a mobile phone to capture the face and to input physical characteristic information, extracts the facial contour and facial features from the image containing the background, sends them to an identity information database for retrieval, and returns the resulting image to the phone, thereby confirming identity quickly and accurately. However, that method processes too much information on the phone itself, which consumes too much of the handset's limited performance and degrades both accuracy and real-time response. Moreover, the extraction of features such as eyebrows and bone structure described in that method has no practical implementation at present, so its operability and realizability are weak.
Summary of the invention
In view of the above deficiencies of the prior art, the present invention proposes a personal identification method that combines a mobile phone camera with face recognition technology. The easily identifiable information about the person to be identified and the face image captured by the phone camera are sent over the mobile network to the server. The query and decision system of the recognition server assembles the database records that match the easily identifiable information into a candidate data set. The recognition server trains a recognition transform on the face data in the candidate set using principal component analysis (PCA) and linear discriminant analysis (LDA), recognizes the face region of the input image, and returns a user-set number of the most similar results to the phone by Multimedia Message Service (MMS) for the user's final judgment.
The present invention is achieved by the following technical solution and comprises the following concrete steps:
Step 1: the user obtains the easily identifiable information of the person to be identified and enters it on the mobile phone;
Step 2: the mobile phone camera is used to capture a face image of the person to be identified;
Step 3: the easily identifiable information and the face image obtained in steps 1 and 2 are sent to the server over the mobile Internet, together with a recognition request;
Step 4: using the easily identifiable information uploaded from the phone, records are selected from the identity database on the server, and the records that match the easily identifiable information of the person to be identified form a candidate data set D_new;
Step 5: the server trains a face recognition transform on the face data in the candidate set D_new using principal component analysis and linear discriminant analysis, generating the face recognition transformation matrix H;
Step 6: an elliptical skin color model is used to process the face image transmitted from the phone and obtain its skin color regions;
Step 7: a face cascade classification detector is applied to the skin color regions to obtain the face region, generating the face to be identified F_input;
Step 8: one face image is selected from the candidate set D_new as the comparison face F_i; the face recognition transformation matrix H generated in step 5 is applied as a linear transform to the comparison face F_i and to the face to be identified F_input, generating the face recognition vectors v_i and v_input, where v_i is the transform result for the comparison face F_i and v_input is the transform result for the face to be identified F_input;
Step 9: the Euclidean distance d_i,input between the face recognition vectors v_i and v_input is calculated, where i = 1, 2, ..., n and n is the number of identity records in the candidate set D_new;
Step 10: steps 8 and 9 are repeated for every face record in the candidate set D_new; among the Euclidean distances d_i,input thus generated, a user-set number of identity records are chosen in order of increasing Euclidean distance and returned to the phone as the recognition result, and the phone holder performs the final identification from these records.
The easily identifiable information refers to external, easily gathered information about the person to be identified, including sex, age range, height, and so on. It is used to filter out clearly non-matching records in the database, narrowing the scope of face recognition and improving its accuracy.
Capturing the face image of the person to be identified means acquiring an RGB color-space face image with the mobile phone camera at a resolution of 160 x 120. During capture, the eyes of the person to be identified should be positioned in the eye frame set by the acquisition software, the facial expression should be natural, the illumination good, and the image background simple.
The identity database is a database storing personal identity information; each identity record comprises basic information and one face photograph. The basic information includes name, sex, age, height, weight, and blood type. Each face photograph has been processed by the elliptical skin color model and the face cascade classification detector, so that the face region has been obtained and the background removed.
Principal component analysis reduces the dimensionality of a data set by extracting its main features. It is a linear transformation that projects the data into a new linear space, sorts the eigenvectors of the data in that space from high to low information content, and keeps a small number of high-information components, forming a new low-dimensional data set that retains most of the information in the original data set.
The linear transformation of principal component analysis is expressed as:
y = Ψ^T x
where x is the vector before the transform, Ψ is the principal component analysis transformation matrix, T denotes transposition, and y is the principal component analysis result vector.
The principal component analysis transformation matrix Ψ is determined as follows:
Let the data set X contain N M-dimensional samples x_i, i = 1...N, with N << M, so that X can be written as an M x N matrix. Let Φ = [φ_1, φ_2, ..., φ_M], where each φ_i (i = 1...M) is an M-dimensional vector, so Φ is an M x M square matrix. Then
Σ_X φ_i = λ_i φ_i
where λ_i is an eigenvalue and Σ_X is the covariance matrix of X, Σ_X = E[(X − E[X])(X − E[X])^T];
that is, φ_i (i = 1...M) is the eigenvector of the covariance matrix Σ_X corresponding to the eigenvalue λ_i.
Sorting the eigenvalues λ_i (i = 1...M) from large to small as λ_(1), λ_(2), ..., λ_(M), with corresponding eigenvectors φ_(1), φ_(2), ..., φ_(M), then
Ψ^T = [φ_(1)^T; φ_(2)^T; ...; φ_(300)^T]
i.e. the rows of Ψ^T are the transposes of the 300 leading eigenvectors, and Ψ = (Ψ^T)^T is the principal component analysis transformation matrix.
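As an illustration of the PCA training just described, the following is a minimal Python/NumPy sketch (not part of the patent; function and variable names are illustrative). It builds Ψ by eigendecomposition of the covariance matrix and keeps the leading components; for the 19200-dimensional images of the embodiment the direct M x M eigendecomposition shown here would be expensive in practice.
import numpy as np

def pca_transform_matrix(X, n_components=300):
    """Sketch of the PCA training above.

    X is an M x N matrix whose N columns are M-dimensional face vectors.
    Returns Psi (M x n_components) and the column mean, so that
    y = Psi.T @ (x - mean) projects a face vector onto the leading
    principal components."""
    mean = X.mean(axis=1, keepdims=True)        # E[X]
    Xc = X - mean                               # centred data
    cov = Xc @ Xc.T / X.shape[1]                # covariance matrix Sigma_X (M x M)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigendecomposition of the symmetric matrix
    order = np.argsort(eigvals)[::-1]           # sort eigenvalues from large to small
    Psi = eigvecs[:, order[:n_components]]      # keep the leading eigenvectors
    return Psi, mean

# Usage (synthetic data for illustration only):
# X = np.random.rand(19200, 118)
# Psi, mu = pca_transform_matrix(X, n_components=300)
# y = Psi.T @ (X[:, [0]] - mu)                  # 300-dimensional PCA result vector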
Linear discriminant analysis reduces the dimensionality of a data set by extracting the features that distinguish its classes. Like principal component analysis it is a linear transformation: it projects N classes of data into a new linear space in which the Euclidean distances between samples of the same class are as small as possible and the Euclidean distances between different classes are as large as possible. The projection thus not only reduces the dimensionality of the data but, more importantly, operates on the class information and improves classification accuracy.
The linear transformation of linear discriminant analysis is expressed as:
z = W^T y
where z is the face recognition vector, lying in the face recognition linear space, y is the result vector of principal component analysis, and W is the linear discriminant analysis transformation matrix.
The linear discriminant analysis transformation matrix W is determined as follows:
Let the data set Y contain N classes C_i, i = 1...N, with M-dimensional data, each class containing one or more samples y_i, i = 1, 2, .... For this N-class problem,
S_w^{-1} S_b W = W Λ_w
where Λ_w is the eigenvalue matrix and W, the linear discriminant analysis transformation matrix, is the matrix of eigenvectors of S_w^{-1} S_b corresponding to Λ_w. S_w is the within-class scatter matrix, characterizing the variance of the data within each class:
S_w = Σ_{i=1}^{N} { Pr(C_i) E[ (y − E(y_i)) (y − E(y_i))^T | C = C_i ] }
and S_b is the between-class scatter matrix, characterizing the distribution of variance between classes:
S_b = Σ_{i=1}^{N} { Pr(C_i) (E(y_i) − E(y)) (E(y_i) − E(y))^T }
In both formulas Pr(C_i) is the prior probability of class C_i:
Pr(C_i) = 1/N, (i = 1...N)
i.e. the prior probability of every class is equal.
By computing the eigenvector matrix and eigenvalue matrix of S_w^{-1} S_b, the linear discriminant analysis transformation matrix W is uniquely determined.
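The following Python/NumPy sketch (illustrative only, with assumed function names) computes the within-class and between-class scatter matrices with the equal priors 1/N defined above and takes W as the eigenvectors of S_w^{-1} S_b; a pseudo-inverse is used in case S_w is singular.
import numpy as np

def lda_transform_matrix(Y, labels):
    """Sketch of the LDA training above.

    Y is a d x n matrix of PCA-projected samples (one per column) and
    labels gives the class of each column.  Returns W, whose columns are
    the eigenvectors of Sw^-1 Sb sorted by decreasing eigenvalue."""
    labels = np.asarray(labels)
    d, n = Y.shape
    classes = np.unique(labels)
    N = len(classes)
    overall_mean = Y.mean(axis=1, keepdims=True)         # E(y)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Yc = Y[:, labels == c]
        mc = Yc.mean(axis=1, keepdims=True)              # E(y_i) for class C_i
        diff = Yc - mc
        Sw += (1.0 / N) * (diff @ diff.T) / Yc.shape[1]  # within-class scatter, prior 1/N
        Sb += (1.0 / N) * (mc - overall_mean) @ (mc - overall_mean).T  # between-class scatter
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]               # largest eigenvalues first
    return eigvecs[:, order].real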
The face recognition transformation matrix H satisfies H^T = W^T Ψ^T,
and the face recognition transform is z = H^T x,
where Ψ and W are respectively the principal component analysis transformation matrix and the linear discriminant analysis transformation matrix defined above, x is the input face image vector, and z is the face recognition vector.
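Composing the two transforms is a single matrix product; the short sketch below (illustrative names, assuming the Ψ and W sketches given earlier) shows the composition H^T = W^T Ψ^T and the projection z = H^T x.
import numpy as np

def face_recognition_vector(x, Psi, W):
    """Project a face vector x with the combined transform H = Psi @ W."""
    H = Psi @ W              # so H.T = W.T @ Psi.T, as in the text
    return H.T @ x           # face recognition vector z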
Processing the face image transmitted from the phone with the elliptical skin color model comprises the following concrete steps:
1. Convert the RGB value (R, G, B) of every pixel of the face image received from the phone into a (Y, Cb, Cr) value, where Y is the luminance, Cb the blue chrominance, and Cr the red chrominance;
2. Set the parameters of the elliptical skin color model;
3. Perform skin color segmentation on every pixel of the image to obtain a black-and-white binary skin color map;
4. Find the four-connected boundaries of the skin color regions in the binary skin color map to obtain the positions of the skin color regions.
The face cascade classification detector is built by linearly combining weak classifiers into a strong classifier and then combining strong classifiers into a complete cascade classifier; the repeated classification of the cascade improves detection accuracy. The weak classifiers used for detection and the strong classifiers obtained from their linear combination are trained with the Adaboost (adaptive boosting) method, and the final trained face cascade classification detector contains 20 strong classifiers. The detector reads the trained classifier data file and constructs in turn the weak classifiers, the strong classifiers, and the cascade classifier. A weak classifier directly decides whether an input rectangular region is a face; a strong classifier decides by a weighted vote over the results of several weak classifiers; and the cascade classifier is the cascade of multiple strong classifiers. If any strong classifier classifies the region as non-face, the cascade classifier regards the detected region as non-face and ends the detection immediately; otherwise the data are passed to the next strong classifier. Only when all strong classifiers judge the region to be a face does the face cascade classification detector output the classification result that the region is a face.
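A minimal Python sketch of the cascade decision logic described in this paragraph is given below; the classifier internals (features, thresholds, the trained data file) are omitted, and the callables are placeholders rather than the patented implementation.
def make_strong_classifier(weak_classifiers, weights, threshold):
    """One stage: a weighted vote over weak classifiers.

    Each weak classifier is a callable that returns True if it judges the
    rectangular region to be a face."""
    def stage(region):
        score = sum(w for weak, w in zip(weak_classifiers, weights) if weak(region))
        return score >= threshold
    return stage

def cascade_detect(region, stages):
    """Cascade of strong classifiers (the detector described here uses 20 stages).

    The first stage that rejects the region ends detection immediately;
    the region is reported as a face only if every stage accepts it."""
    for stage in stages:
        if not stage(region):
            return False
    return True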
Applying the generated face recognition transformation matrix H to the comparison face F_i and to the face to be identified F_input to produce the corresponding face recognition vectors v_i and v_input is done as follows:
v_i = H^T F_i
v_input = H^T F_input
where F_i is a face taken from the database and F_input is the face to be identified uploaded from the phone. The two formulas apply the face recognition transform to F_i and F_input respectively; v_i and v_input are the corresponding face recognition vectors, and the final face recognition is performed by comparing these two vectors.
The Euclidean distance d_i,input between the face recognition vectors v_i and v_input is calculated as:
d_i,input = sqrt( Σ_{j=1}^{300} (v_i,j − v_input,j)^2 )
where v_i,j and v_input,j are the j-th components of the face recognition vectors v_i and v_input, and d_i,input is the Euclidean distance between v_i and v_input. This distance characterizes the similarity of v_i and v_input: the smaller d_i,input, the higher the similarity.
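As a worked illustration of steps 8 to 10 (a hedged sketch with illustrative names, not the patented implementation), the following Python/NumPy function projects the uploaded face and every candidate face with H, computes the Euclidean distances d_i,input, and returns the identities of the closest matches in order of increasing distance.
import numpy as np

def rank_candidates(F_input, candidates, H, top_k=10):
    """Rank candidate faces by Euclidean distance in the recognition space.

    F_input is the detected face vector uploaded by the phone; candidates is
    a list of (identity, face_vector) pairs from the candidate set D_new;
    H is the face recognition transformation matrix.  Returns the top_k
    identities, most similar (smallest d_i,input) first."""
    v_input = H.T @ F_input
    scored = []
    for identity, F_i in candidates:
        v_i = H.T @ F_i
        d = float(np.linalg.norm(v_i - v_input))   # Euclidean distance d_i,input
        scored.append((d, identity))
    scored.sort(key=lambda pair: pair[0])          # smaller distance = more similar
    return [identity for _, identity in scored[:top_k]]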
Compared with the prior art, the present invention has the following beneficial effects. Because it relies on mature technology and on the mobility, low cost, and high penetration of mobile phones, the cost and difficulty of deployment drop greatly: phones that meet the requirements of the handset side are both numerous and inexpensive, costing only a few hundred yuan. In addition, the invention filters the identity database with the user's input before recognition and presents 10 candidate records for the user's final judgment, improving both the speed and the accuracy of identification. Tests show that the identification accuracy of the system exceeds 85%, giving it the reliability required of a commercial application. These two points make wide deployment of the identification system realistic and improve the prospect of turning the related research results into a commercial product. The invention has high application and commercial value in fields such as security, traffic administration, and population management.
Description of drawings
Fig. 1 is a schematic diagram of the workflow of the present invention;
Fig. 2 is a schematic diagram of the detection process of the face cascade classification detector of the present invention;
Fig. 3 is a schematic diagram of the face recognition process of the present invention;
Fig. 4(a) is a schematic diagram of face image acquisition at the phone side;
Fig. 4(b) shows a detection result of the face cascade classification detector;
Fig. 5 is a schematic diagram of the interaction between the recognition server and the phone.
Embodiment
An embodiment of the invention is described in detail below with reference to the drawings. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and concrete process, but the scope of protection of the invention is not limited to the following embodiment.
The server side used in this embodiment is as follows:
The server uses Microsoft SQL Server 2005 to provide the database service. The recognition program is written in C# on Microsoft .NET Framework 2.0; it provides a network service at the front end to receive recognition requests from the phone, and at the back end it interacts with the SQL database to carry out the actual recognition. When recognition finishes, the result is sent back to the phone by Multimedia Message Service (MMS).
The server database holds the identity information of 353 people. For simplicity only five items are kept per record: name, sex, age, face photograph, and the face region within the photograph. The name serves as the primary key that uniquely identifies the person. There is one face photograph per person, 353 in total, each 160 x 120 pixels. All photographs have been processed by the elliptical skin color model and the face cascade classification detector to obtain the face region information. The distribution of the data is shown in Table 1:
Table 1. Age-sex distribution

Age range    Male    Female    Total
10-20          54        46      100
21-30         118        87      205
31-40          28        20       48
Total         200       153      353
The phone side of this embodiment is as follows: a smartphone running the Symbian S60 operating system is used as the handset platform, and the handset program is written in C++ against the Symbian S60 SDK. The program requires the GPRS data and multimedia message (MMS) services provided by China Mobile, so that the handset can interact with the server.
As shown in Fig. 1, this embodiment comprises the following steps:
Step 1, information capture: the easily identifiable information of the person to be identified is entered on the phone, for example: "male, 21-30 years old";
the face image of the person to be identified is then captured at 160 x 120 pixels, as shown in Fig. 4(a), with the eyes positioned in the eye frame, a natural facial expression, and a simple background.
Step 2, upload information: the phone automatically opens a GPRS (General Packet Radio Service) network connection, establishes a communication link with the server, and, after receiving the server's connection response, uploads the easily identifiable information and the face image collected in step 1 to the server over the GPRS network.
Step 3, server-side filtering: after receiving the above information, the server selects all records in the identity database that satisfy "male, 21-30 years old" and forms the candidate data set D_new, containing 118 identity records and the corresponding 118 face images.
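A minimal sketch of this filtering step follows (assuming an in-memory list of records with illustrative field names rather than the SQL Server database of the embodiment):
def build_candidate_set(identity_db, sex, age_low, age_high):
    """Select the records matching the easily identifiable information.

    identity_db is a list of dicts with illustrative keys 'name', 'sex',
    'age' and 'face'; the result corresponds to the candidate set D_new."""
    return [rec for rec in identity_db
            if rec['sex'] == sex and age_low <= rec['age'] <= age_high]

# Example: D_new = build_candidate_set(identity_db, sex='male', age_low=21, age_high=30)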
Step 4, training the face recognition transformation matrix H: the training consists of two parts, training the principal component analysis (PCA) transformation matrix Ψ and training the linear discriminant analysis (LDA) transformation matrix W.
The principal component analysis transformation matrix Ψ is trained as follows:
1. As analyzed above, the candidate set D_new contains 118 face images, each 160 x 120 pixels, i.e. 160 x 120 = 19200-dimensional data, so the principal component analysis training data set X is a 19200 x 118 matrix.
According to the formula Σ_X = E[(X − E[X])(X − E[X])^T],
the covariance matrix of X is computed; after the computation Σ_X is a 19200 x 19200 square matrix.
2. According to the formula Σ_X φ_i = λ_i φ_i, the eigenvectors and eigenvalues of Σ_X are found as φ_i, λ_i (i = 1...19200).
3. The eigenvalues λ_i (i = 1...19200) are sorted from large to small as λ_(1), λ_(2), ..., λ_(19200), with corresponding eigenvectors φ_(1), φ_(2), ..., φ_(19200); then
Ψ^T = [φ_(1)^T; φ_(2)^T; ...; φ_(300)^T]
is a 300 x 19200 matrix. The training data set X is projected by the principal component analysis transform, Y = Ψ^T X; Y is the training data set for linear discriminant analysis, a 300 x 118 matrix.
The linear discriminant analysis transformation matrix W is trained as follows:
According to the formulas
S_w = Σ_{i=1}^{N} { Pr(C_i) E[ (y − E(y_i)) (y − E(y_i))^T | C = C_i ] }
S_b = Σ_{i=1}^{N} { Pr(C_i) (E(y_i) − E(y)) (E(y_i) − E(y))^T }
Pr(C_i) = 1/N, (i = 1...N)
S_w and S_b are computed; both are 300 x 300 matrices.
According to the formula
S_w^{-1} S_b W = W Λ_w
the eigenvectors and eigenvalues of S_w^{-1} S_b are found as w_i, λ_i (i = 1...300), and the linear discriminant analysis transformation matrix is
W = [w_(1), w_(2), ..., w_(300)]
a 300 x 300 matrix.
The face recognition transformation matrix is then H^T = W^T Ψ^T, a 300 x 19200 matrix.
Step 5, face detection is performed on the image to be recognized. Face detection has two parts: the elliptical skin color model is used in the YCbCr color space to obtain the positions of the skin color regions, and the face cascade classification detector then detects the face region within those skin color regions.
Using the elliptical skin color model to obtain the skin color region positions in the YCbCr color space is done as follows:
1. Convert the RGB value (R, G, B) of every pixel of the face image captured in step 1 into a (Y, Cb, Cr) value by the formulas:
Y = 0.299R + 0.587G + 0.114B
Cb = −R/6 − G/3 + B/2 + 128
Cr = R/2 − 5G/12 − B/12 + 128
where Y is the luminance, Cb the blue chrominance, and Cr the red chrominance.
2. Set the parameters of the elliptical skin color model to the following values:
Cx = 109.38, Cy = 152.02, ecx = 1.60, ecy = 2.41
a = 25.29, b = 14.03, θ = 2.53
3. Perform skin color segmentation on every pixel of the image to obtain the binary skin color map:
from the pixel's value (Y, Cb, Cr) in the YCbCr space, compute
x = (Cb − Cx) × cosθ + (Cr − Cy) × sinθ
y = (Cb − Cx) × (−sinθ) + (Cr − Cy) × cosθ
and obtain the skin tone value of the pixel from the ellipse of the model, (x − ecx)^2 / a^2 + (y − ecy)^2 / b^2.
If the skin tone value is less than 1, the pixel belongs to a skin color region and is marked as a skin point in the binary skin color map; otherwise it does not belong to a skin color region and is marked as a background point.
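The following Python/NumPy sketch applies the conversion and segmentation above to a whole image. The parameter values are those quoted in step 2, and the ellipse membership expression (x − ecx)^2/a^2 + (y − ecy)^2/b^2 is an assumption reconstructed from the listed parameters and the threshold of 1, since the exact expression appears in the original only as a figure.
import numpy as np

# Elliptical skin color model parameters quoted above
CX, CY = 109.38, 152.02
ECX, ECY = 1.60, 2.41
A, B = 25.29, 14.03
THETA = 2.53

def skin_mask(rgb):
    """Return a boolean skin mask for an H x W x 3 uint8 RGB image.

    Only Cb and Cr are needed for the ellipse test; the ellipse expression
    is an assumed reconstruction of the model's membership value."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    cb = -r / 6 - g / 3 + b / 2 + 128            # blue chrominance
    cr = r / 2 - 5 * g / 12 - b / 12 + 128       # red chrominance
    x = (cb - CX) * np.cos(THETA) + (cr - CY) * np.sin(THETA)
    y = (cb - CX) * (-np.sin(THETA)) + (cr - CY) * np.cos(THETA)
    value = (x - ECX) ** 2 / A ** 2 + (y - ECY) ** 2 / B ** 2
    return value < 1                             # True where the pixel counts as skin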
As shown in Fig. 2, the face cascade classification detector then detects the face region within the skin color regions: a weak classifier directly decides whether an input rectangular region is a face; a strong classifier decides by a weighted vote over the results of several weak classifiers; and the cascade classifier is the cascade of multiple strong classifiers. If any strong classifier classifies the region as non-face, the cascade classifier regards the detected region as non-face and ends the detection immediately; otherwise the data are passed to the next strong classifier, and only when all strong classifiers judge the region to be a face does the cascade classifier output the classification result that the region is a face. A detection result of the face cascade classification detector is shown in Fig. 4(b), where the region enclosed by the white box is the face region.
Step 6, the face recognition transform is applied to the input face and to the comparison face image.
The face recognition transform is
v_i = H^T F_i
v_input = H^T F_input
In these two formulas the comparison face F_i and the face to be identified F_input uploaded by the phone are 19200 x 1 vectors, and H^T is a 300 x 19200 matrix, so the face recognition vectors v_i and v_input after the transform are 300 x 1 vectors.
Step 7, the Euclidean distance between the face recognition vectors generated in step 6 is calculated.
The Euclidean distance is calculated according to:
d_i,input = sqrt( Σ_{j=1}^{300} (v_i,j − v_input,j)^2 )
where v_i,j and v_input,j are the j-th components of the face recognition vectors v_i and v_input, and d_i,input is the Euclidean distance between v_i and v_input. This distance characterizes the similarity of v_i and v_input: the smaller d_i,input, the higher the similarity.
The face recognition flow of steps 6 and 7 is shown in Fig. 3.
Step 8, steps 6 and 7 are repeated for every record satisfying the condition "male, 21-30 years old", all the generated Euclidean distances d_i,input are saved, and the 10 identity records with the smallest Euclidean distances are chosen from the records satisfying the condition as the recognition result.
Step 9, the recognition result generated above is returned to the phone. The result is split into 10 multimedia messages (MMS) and delivered to the phone through the MMS center, and the phone holder performs the final identification from these 10 messages. The interaction between the phone and the server in step 1 and step 9 is shown in Fig. 5.
After the above steps are completed, the phone user has enough information to make the identification and then take further action.
By combining the mobile phone with face recognition using mature technology (C#, .NET, SQL Server, Symbian S60), this embodiment realizes a reliable identity recognition technique. It overcomes limitations such as the low bandwidth of the existing GPRS mobile Internet and the limited performance of smartphones, and realizes a system that can be put to practical use. Because the user participates in control at both ends, the system achieves a level of accuracy that ordinary face recognition systems do not. Tests of this embodiment show that the probability that the 10 returned images contain the correct result is greater than 90%, i.e. the accuracy of the system reaches 90%. Tests also show that a recognition request takes on average only 10 seconds from issuing the request to receiving the result.

Claims (10)

1. A personal identification method based on a mobile phone camera combined with face recognition technology, characterized in that it comprises the following steps:
Step 1: the user obtains the easily identifiable information of the person to be identified and enters it on the mobile phone;
Step 2: the mobile phone camera is used to capture a face image of the person to be identified;
Step 3: the easily identifiable information and the face image obtained in steps 1 and 2 are sent to the server over the mobile Internet, together with a recognition request;
Step 4: using the easily identifiable information uploaded from the phone, records are selected from the identity database on the server, and the records that match the easily identifiable information of the person to be identified form a candidate data set D_new;
Step 5: the server trains a face recognition transform on the face data in the candidate set D_new using principal component analysis and linear discriminant analysis, generating the face recognition transformation matrix H;
Step 6: an elliptical skin color model is used to process the face image transmitted from the phone and obtain its skin color regions;
Step 7: a face cascade classification detector is applied to the skin color regions to obtain the face region, generating the face to be identified F_input;
Step 8: one face image is selected from the candidate set D_new as the comparison face F_i; the face recognition transformation matrix H generated in step 5 is applied as a linear transform to the comparison face F_i and to the face to be identified F_input, generating the face recognition vectors v_i and v_input, where v_i is the transform result for the comparison face F_i and v_input is the transform result for the face to be identified F_input;
Step 9: the Euclidean distance d_i,input between the face recognition vectors v_i and v_input is calculated, where i = 1, 2, ..., n and n is the number of identity records in the candidate set D_new;
Step 10: steps 8 and 9 are repeated for every face record in the candidate set D_new; among the Euclidean distances d_i,input thus generated, a user-set number of identity records are chosen in order of increasing Euclidean distance and returned to the phone as the recognition result, and the phone holder performs the final identification from these records.
2. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that capturing the face image of the person to be identified means acquiring an RGB color-space face image with the mobile phone camera at a resolution of 160 x 120; during capture, the eyes of the person to be identified are positioned in the eye frame set by the acquisition software, the facial expression is natural, the illumination good, and the image background simple.
3. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that the identity database is a database storing personal identity information, in which each identity record comprises basic information and one face photograph; the basic information comprises name, sex, age, height, weight, and blood type, and each face photograph has been processed by the elliptical skin color model and the face cascade classification detector so that the face region has been obtained and the background removed.
4. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that principal component analysis reduces the dimensionality of a data set by extracting its main features; it is a linear transformation that projects the data into a new linear space, sorts the eigenvectors of the data in that space from high to low information content, and keeps a small number of high-information components, forming a new low-dimensional data set that retains most of the information in the original data set;
the linear transformation of principal component analysis is expressed as:
y = Ψ^T x
where x is the vector before the transform, Ψ is the principal component analysis transformation matrix, T denotes transposition, and y is the principal component analysis result vector;
the principal component analysis transformation matrix Ψ is determined as follows:
let the data set X contain N M-dimensional samples x_i, i = 1...N, with N << M, so that X can be written as an M x N matrix; let Φ = [φ_1, φ_2, ..., φ_M], where each φ_i (i = 1...M) is an M-dimensional vector, so Φ is an M x M square matrix; then
Σ_X φ_i = λ_i φ_i
where λ_i is an eigenvalue and Σ_X is the covariance matrix of X, Σ_X = E[(X − E[X])(X − E[X])^T], i.e. φ_i (i = 1...M) is the eigenvector of the covariance matrix Σ_X corresponding to the eigenvalue λ_i;
sorting the eigenvalues λ_i (i = 1...M) from large to small as λ_(1), λ_(2), ..., λ_(M), with corresponding eigenvectors φ_(1), φ_(2), ..., φ_(M), then
Ψ^T = [φ_(1)^T; φ_(2)^T; ...; φ_(300)^T]
and Ψ = (Ψ^T)^T is the principal component analysis transformation matrix.
5. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that linear discriminant analysis reduces the dimensionality of a data set by extracting the features that distinguish its classes; like principal component analysis it is a linear transformation: it projects N classes of data into a new linear space in which the Euclidean distances between samples of the same class are as small as possible and the Euclidean distances between different classes are as large as possible;
the linear transformation of linear discriminant analysis is expressed as:
z = W^T y
where z is the face recognition vector, lying in the face recognition linear space, y is the result vector of principal component analysis, and W is the linear discriminant analysis transformation matrix;
the linear discriminant analysis transformation matrix W is determined as follows:
let the data set Y contain N classes C_i, i = 1...N, with M-dimensional data, each class containing one or more samples y_i, i = 1, 2, ...; for this N-class problem,
S_w^{-1} S_b W = W Λ_w
where Λ_w is the eigenvalue matrix and W, the linear discriminant analysis transformation matrix, is the matrix of eigenvectors of S_w^{-1} S_b corresponding to Λ_w; S_w is the within-class scatter matrix, characterizing the variance of the data within each class:
S_w = Σ_{i=1}^{N} { Pr(C_i) E[ (y − E(y_i)) (y − E(y_i))^T | C = C_i ] }
and S_b is the between-class scatter matrix, characterizing the distribution of variance between classes:
S_b = Σ_{i=1}^{N} { Pr(C_i) (E(y_i) − E(y)) (E(y_i) − E(y))^T }
where in both formulas Pr(C_i) is the prior probability of class C_i:
Pr(C_i) = 1/N, (i = 1...N)
i.e. the prior probability of every class is equal;
by computing the eigenvector matrix and eigenvalue matrix of S_w^{-1} S_b, the linear discriminant analysis transformation matrix W is uniquely determined.
6. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that the face recognition transformation matrix H satisfies H^T = W^T Ψ^T,
and the face recognition transform is z = H^T x,
where Ψ and W are respectively the principal component analysis transformation matrix and the linear discriminant analysis transformation matrix, x is the input face image vector, and z is the face recognition vector.
7. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that processing the face image transmitted from the phone with the elliptical skin color model comprises the following concrete steps:
1. converting the RGB value (R, G, B) of every pixel of the face image received from the phone into a (Y, Cb, Cr) value, where Y is the luminance, Cb the blue chrominance, and Cr the red chrominance;
2. setting the parameters of the elliptical skin color model;
3. performing skin color segmentation on every pixel of the image to obtain a black-and-white binary skin color map;
4. finding the four-connected boundaries of the skin color regions in the binary skin color map to obtain the positions of the skin color regions.
8. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that the face cascade classification detector is built by linearly combining weak classifiers into a strong classifier and then combining strong classifiers into a cascade classifier, the repeated classification of the cascade improving detection accuracy; the weak classifiers used for detection and the strong classifiers obtained from their linear combination are trained with an adaptive boosting method, and the final trained face cascade classification detector contains 20 strong classifiers; the detector reads the trained classifier data file and constructs in turn the weak classifiers, the strong classifiers, and the cascade classifier; a weak classifier directly decides whether an input rectangular region is a face, a strong classifier decides by a weighted vote over the results of several weak classifiers, and the cascade classifier is the cascade of multiple strong classifiers; if any strong classifier classifies the region as non-face, the cascade classifier regards the detected region as non-face and ends the detection immediately; otherwise the data are passed to the next strong classifier, and only when all strong classifiers judge the region to be a face does the face cascade classification detector output the classification result that the region is a face.
9. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that applying the generated face recognition transformation matrix H to the comparison face F_i and to the face to be identified F_input to produce the corresponding face recognition vectors v_i and v_input is done as follows:
v_i = H^T F_i
v_input = H^T F_input
where F_i is a face taken from the database and F_input is the face to be identified uploaded from the phone; the two formulas apply the face recognition transform to F_i and F_input respectively, v_i and v_input are the corresponding face recognition vectors, and the final face recognition is performed by comparing these two vectors.
10. The personal identification method based on a mobile phone camera combined with face recognition technology according to claim 1, characterized in that the Euclidean distance d_i,input between the face recognition vectors v_i and v_input is calculated as:
d_i,input = sqrt( Σ_{j=1}^{300} (v_i,j − v_input,j)^2 )
where v_i,j and v_input,j are the j-th components of the face recognition vectors v_i and v_input, and d_i,input is the Euclidean distance between v_i and v_input; this distance characterizes the similarity of v_i and v_input: the smaller d_i,input, the higher the similarity.
CNA2008100332872A 2008-01-31 2008-01-31 Personal identification method based on mobile phone pick-up head combining with human face recognition technique Pending CN101226591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008100332872A CN101226591A (en) 2008-01-31 2008-01-31 Personal identification method based on mobile phone pick-up head combining with human face recognition technique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2008100332872A CN101226591A (en) 2008-01-31 2008-01-31 Personal identification method based on mobile phone pick-up head combining with human face recognition technique

Publications (1)

Publication Number Publication Date
CN101226591A true CN101226591A (en) 2008-07-23

Family

ID=39858576

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008100332872A Pending CN101226591A (en) 2008-01-31 2008-01-31 Personal identification method based on mobile phone pick-up head combining with human face recognition technique

Country Status (1)

Country Link
CN (1) CN101226591A (en)

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311293B2 (en) 2008-09-04 2012-11-13 Sony Corporation Image processing apparatus and associated methodology for facial recognition
CN101667248B (en) * 2008-09-04 2013-03-27 索尼株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
CN102117400A (en) * 2009-07-01 2011-07-06 手持产品公司 System and method to capture and analyze image data of object
CN102035929A (en) * 2009-09-29 2011-04-27 比亚迪股份有限公司 Method, system and terminal for identifying identities of terminal users
CN102208009A (en) * 2010-03-31 2011-10-05 索尼公司 Classifier and classification method
CN102222232A (en) * 2011-06-24 2011-10-19 常州锐驰电子科技有限公司 Multi-level rapid filtering and matching device and method for human faces
CN103988207A (en) * 2011-12-14 2014-08-13 英特尔公司 Techniques for skin tone activation
CN102609729A (en) * 2012-02-14 2012-07-25 中国船舶重工集团公司第七二六研究所 Method and system for recognizing faces shot by multiple cameras
CN103810468A (en) * 2012-11-05 2014-05-21 东芝泰格有限公司 Commodity recognition apparatus and commodity recognition method
CN103218615A (en) * 2013-04-17 2013-07-24 哈尔滨工业大学深圳研究生院 Face judgment method
CN103218615B (en) * 2013-04-17 2016-06-22 哈尔滨工业大学深圳研究生院 Face judgment method
CN103258190A (en) * 2013-05-13 2013-08-21 苏州福丰科技有限公司 Face recognition method used for mobile terminal
CN103279701A (en) * 2013-06-19 2013-09-04 汪德嘉 Information interaction method based on intelligent mobile terminal shooting
CN103577838A (en) * 2013-11-25 2014-02-12 苏州大学 Face recognition method and device
CN104699237A (en) * 2013-12-10 2015-06-10 宏达国际电子股份有限公司 Method, interactive device, and computer readable medium for recognizing user behavior
CN104699237B (en) * 2013-12-10 2018-01-30 宏达国际电子股份有限公司 Recognize the method and related interactive device and computer-readable medium of user's operation
US9971411B2 (en) 2013-12-10 2018-05-15 Htc Corporation Method, interactive device, and computer readable medium storing corresponding instructions for recognizing user behavior without user touching on input portion of display screen
WO2016074248A1 (en) * 2014-11-15 2016-05-19 深圳市三木通信技术有限公司 Verification application method and apparatus based on face recognition
CN110874571B (en) * 2015-01-19 2023-05-05 创新先进技术有限公司 Training method and device of face recognition model
CN110874571A (en) * 2015-01-19 2020-03-10 阿里巴巴集团控股有限公司 Training method and device of face recognition model
CN104778462A (en) * 2015-04-28 2015-07-15 哈尔滨理工大学 Face recognition method and device
CN107278369A (en) * 2016-12-26 2017-10-20 深圳前海达闼云端智能科技有限公司 Method, device and the communication system of people finder
WO2018119599A1 (en) * 2016-12-26 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and device for searching for person and communication system
CN107273796A (en) * 2017-05-05 2017-10-20 珠海数字动力科技股份有限公司 A kind of fast face recognition and searching method based on face characteristic
CN107330404A (en) * 2017-06-30 2017-11-07 重庆科技学院 Personal identification method based on cell neural network autoassociative memories model
CN109472183A (en) * 2017-09-08 2019-03-15 上海银晨智能识别科技有限公司 Image-recognizing method and device, system of deploying to ensure effective monitoring and control of illegal activities, computer readable storage medium
CN108093178A (en) * 2018-01-03 2018-05-29 上海传英信息技术有限公司 A kind of method and shooting mobile phone that the variation of the photo colour of skin is realized by PCA linear transformations
CN108416273A (en) * 2018-02-09 2018-08-17 厦门通灵信息科技有限公司 A kind of Distributive System of Face Recognition and its recognition methods
CN109034052B (en) * 2018-07-24 2021-04-02 深圳市科脉技术股份有限公司 Face detection method and device
CN109034052A (en) * 2018-07-24 2018-12-18 深圳市科脉技术股份有限公司 Method for detecting human face and device
CN109284675A (en) * 2018-08-13 2019-01-29 阿里巴巴集团控股有限公司 A kind of recognition methods of user, device and equipment
CN109284675B (en) * 2018-08-13 2022-06-07 创新先进技术有限公司 User identification method, device and equipment
CN109284694A (en) * 2018-08-31 2019-01-29 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110147458A (en) * 2019-05-24 2019-08-20 涂哲 A kind of photo screening technique, system and electric terminal
CN111464459A (en) * 2020-03-20 2020-07-28 西安交通大学 Network flow characteristic extraction method based on principal component analysis and linear discriminant analysis
TWI801751B (en) * 2020-09-09 2023-05-11 普匯金融科技股份有限公司 Identity verification device and identity verification method
CN112528893A (en) * 2020-12-15 2021-03-19 南京中兴力维软件有限公司 Abnormal state identification method and device and computer readable storage medium
CN113177489A (en) * 2021-05-07 2021-07-27 艾拉物联网络(深圳)有限公司 High-precision portrait recognition method and system for security monitoring

Similar Documents

Publication Publication Date Title
CN101226591A (en) Personal identification method based on mobile phone pick-up head combining with human face recognition technique
CN102902959B (en) Face recognition method and system for storing identification photo based on second-generation identity card
CN101763503B (en) Face recognition method of attitude robust
Perez et al. Methodological improvement on local Gabor face recognition based on feature selection and enhanced Borda count
CN103116763B (en) A kind of living body faces detection method based on hsv color Spatial Statistical Character
CN103886301B (en) Human face living detection method
CN109522853B (en) Face datection and searching method towards monitor video
CN106845328B (en) A kind of Intelligent human-face recognition methods and system based on dual camera
CN106446772A (en) Cheating-prevention method in face recognition system
CN111126240B (en) Three-channel feature fusion face recognition method
CN107341688A (en) The acquisition method and system of a kind of customer experience
CN103077378B (en) Contactless face recognition algorithms based on extension eight neighborhood Local textural feature and system of registering
CN103839042B (en) Face identification method and face identification system
CN107615298A (en) Face identification method and system
CN102799870A (en) Single-training sample face recognition method based on blocking consistency LBP (Local Binary Pattern) and sparse coding
Mady et al. Face recognition and detection using Random forest and combination of LBP and HOG features
CN104036247A (en) Facial feature based face racial classification method
CN106845513B (en) Manpower detector and method based on condition random forest
CN103902978A (en) Face detection and identification method
WO2022088626A1 (en) Cat nose print identification method and apparatus based on cat nose print feature extraction model
CN109544523A (en) Quality of human face image evaluation method and device based on more attribute face alignments
CN107194314B (en) Face recognition method fusing fuzzy 2DPCA and fuzzy 2DLDA
CN104036291A (en) Race classification based multi-feature gender judgment method
CN110598574A (en) Intelligent face monitoring and identifying method and system
CN105550642B (en) Gender identification method and system based on multiple dimensioned linear Differential Characteristics low-rank representation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080723