CN102938065B - Face feature extraction method and face identification method based on large-scale image data
- Publication number
- CN102938065B, CN201210495625.0A
- Authority
- CN
- China
- Prior art keywords
- face
- facial image
- image
- database
- base layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a face feature extraction method and a face recognition method based on large-scale image data. The method is as follows: 1) establish a face database A, in which each person has multiple facial images covering a variety of expressions, poses and illuminations, and a face database B; 2) perform face detection on each image in databases A and B, and output a rectangular bounding box containing the face together with two pixel coordinates; 3) rectify and scale each facial image to generate an image in a set standard format, and extract its low-level texture feature vector; 4) for each person a in database A, train a classifier capable of recognizing person a using a support vector machine algorithm; 5) convert a new facial image into the standard format, evaluate the low-level texture feature vector of the new face with each classifier, and assemble the resulting discrimination scores into a vector that serves as the feature vector of the new facial image. The invention greatly improves the robustness of the extracted face features.
Description
Technical field
The invention belongs to the technical field of image recognition and relates to a facial image analysis and processing method, more particularly to a face feature extraction method and a face recognition method based on large-scale image data, which can be applied to application scenarios such as face recognition, face verification and face search.
Background technology
Feature extraction from facial images is a focus of computer vision research, and its results are widely applied to tasks such as face recognition, face verification, face search and facial expression analysis. The basic task is to produce, for an input facial image, a vector representation that preserves the information relevant to face identity while discarding the information unrelated to it. The basic goal is that facial images of the same person produce vectors that are as similar as possible (small mutual distance), while facial images of different people produce clearly different outputs.
Researchers in the field of facial image analysis have proposed a variety of face feature extraction methods.
Matthew Turk and Alex Pentland, in "Face Recognition Using Eigenfaces", proposed first stacking the pixel values of a facial image into a vector, then computing the principal component directions of these vectors over a face database collected in advance. The feature of a new facial image is then the set of projection components of its pixel-value vector onto these principal component directions.
Peter N. Belhumeur et al., in "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", proposed the Fisherface feature extraction method. This method likewise uses the pixel-value vector of the facial image as the basic feature, extracts a set of linear projection directions with a linear discriminant criterion, and takes the components of the image along these projection directions as the final facial image feature.
Wiskott et al., in "Face Recognition by Elastic Bunch Graph Matching", used the responses of a bank of Gabor filters at a number of facial key points as the features describing face texture. This method still describes the face by local image texture features.
T. Ahonen et al., in "Face Recognition with Local Binary Patterns", describe the following technique: each pixel in the image is coded and quantized with a local binarization template, yielding a quantized value between 0 and 255 that represents the local gray-level structure around that pixel; the whole facial image is then divided into a grid of image patches, a histogram of the quantized values is extracted in each patch, and finally the histograms of all patches are concatenated as the feature of the facial image. This method characterizes the local gray-level structure of the facial image in finer detail, but it is very sensitive to changes in pose and expression. It is therefore mainly applicable to frontal faces with neutral expressions.
All of these classical face feature extraction algorithms build their representations from an analysis of the local texture structure of the image. For example, principal component analysis is based on vectors of image pixel gray values; Gabor analysis merely extracts the responses of the input face under a bank of Gabor filters as the output feature vector; LBP feature extraction likewise only analyzes the relative magnitudes of local pixel gray values of the input face and encodes these relations to produce a histogram output.
Because the above face feature extraction algorithms use only the local texture features of the input facial image, they are strongly affected by illumination, expression and head pose at capture time. For facial images of the same person, if the illumination, expression or pose at capture time differs, the output feature vectors will differ greatly, which is the fundamental bottleneck limiting the application of these methods to face recognition, face verification and face search.
Content of the invention
The purpose of the present invention is to overcome the serious dependence of the above classical facial image feature extraction methods on face pose and expression, and to propose a face feature extraction method and a face recognition method based on large-scale image data, thereby obtaining a more robust facial image feature suitable for various scenes and application scenarios.
The present invention first establishes two face databases from Internet image data as follows, namely a celebrity face database and a "background" face database.
First, the names of various celebrities are collected as search keywords for Internet facial images, forming the basis for building the celebrity facial image database. The present invention collects search keywords for N celebrities in total. In the celebrity face database each person has about n face pictures covering a variety of expressions, poses and illuminations.
At the same time, facial image data of people who do not belong to any celebrity in the collected celebrity face database are gathered from the Internet to build a "background" facial image database that later serves as negative samples for training; in the present invention the scale of this database is about M face images. The background face database likewise contains facial images with a variety of expressions, poses and illuminations. M, n and N are natural numbers whose specific values can be chosen according to actual needs.
A face detection algorithm is applied to each facial image in the celebrity facial image database and the "background" facial image database, so as to locate the standard position and size of each face. Each detected face is then geometrically rectified to the set standard position and normalized to a fixed size, i.e., each face is geometrically rectified and scaled into a standard face picture format with a fixed resolution. The present invention uses a standard image size of X*Y and ensures that the eye positions of the face are placed at positions defined in advance in the standard image. This process is referred to as calibration.
The low-level texture feature vectors of all calibrated facial images in the celebrity facial image database and the "background" facial image database are extracted with any classical face feature extraction algorithm. The present invention has tested three classical features: LBP, HOG (Histogram of Oriented Gradients) and Gabor.
For each celebrity in the celebrity database, the low-level texture feature vectors of the facial images belonging to that celebrity are used as positive samples and the low-level texture feature vectors of the facial images in the "background" face database are used as negative samples; a classifier that can recognize that celebrity is trained with the SVM (support vector machine) algorithm, and the result is stored in a database for later use.
After the training of the above library of celebrity face classifiers is completed, the present invention computes the feature of a new facial image with the following steps.
The new facial image is processed with the same face detection algorithm, geometric rectification and normalization method described above, producing a standard image of the same specification as the celebrity face database and the "background" face database.
The low-level texture feature vector of the calibrated new facial image is extracted with any classical face feature extraction algorithm.
The extracted low-level texture feature vector of the new facial image is fed to each celebrity classifier in the classifier library, and the discrimination scores output by all classifiers are arranged into a new vector (since there are N people in the celebrity database, this vector has N dimensions). The vector produced in this way no longer encodes the local texture features of the original facial image, but rather the degree of similarity between this face and each celebrity in the celebrity database. Because the classifier of each celebrity in the celebrity database is built from positive samples (images of that celebrity) and negative samples covering a variety of expressions, poses and illuminations, the resulting classifier automatically focuses on the essential differences that remain constant between people and ignores the varying expressions, poses and illuminations; such an N-dimensional vector is therefore robust. There is no hard requirement on the dimension of the vector.
Using this final feature of the new facial image, processing such as face recognition, face verification and face search can then be carried out for the new person.
Compared with the prior art, the positive effects of the present invention are:
The present invention builds a library of celebrity facial image classifiers from Internet image data, and thereby realizes a face feature extraction algorithm based on Internet image data. Compared with classical methods, the advantage of the present invention is that the outputs of multiple robust face classifiers trained with SVMs are used as the basic building blocks of the face feature, which greatly reduces the dependence of the final feature vector on face illumination, expression and pose, and thereby improves the accuracy of the various face recognition, verification and search algorithms that use the face feature of the present invention. It should further be noted that when the present invention computes the outputs of the individual celebrity face classifiers, the computation can very conveniently be parallelized, which greatly speeds up the extraction of the final face feature.
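As an illustration of this parallelization (not part of the original disclosure), the following Python sketch evaluates a set of pre-trained SVM classifiers for one low-level texture feature vector in parallel with joblib; the names `classifiers`, `lbp_vec` and `score_one` are hypothetical, and the use of scikit-learn's `LinearSVC` is an assumption.

```python
# Hedged sketch: parallel evaluation of N pre-trained SVM classifiers.
# `classifiers` is assumed to be a list of fitted sklearn LinearSVC objects,
# `lbp_vec` a 1-D numpy array holding the low-level texture feature vector.
import numpy as np
from joblib import Parallel, delayed

def score_one(clf, lbp_vec):
    # decision_function returns the signed distance to the SVM hyperplane,
    # used here as the discrimination score of one celebrity classifier.
    return float(clf.decision_function(lbp_vec.reshape(1, -1))[0])

def extract_feature_parallel(classifiers, lbp_vec, n_jobs=-1):
    scores = Parallel(n_jobs=n_jobs)(
        delayed(score_one)(clf, lbp_vec) for clf in classifiers
    )
    return np.asarray(scores)  # N-dimensional feature vector
```

Because each classifier is evaluated independently of the others, the work splits trivially across CPU cores or machines.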
Brief description of the drawings
Fig. 1 is a flow chart of facial image database construction;
Fig. 2 illustrates the standard position and size for locating a facial image;
Fig. 3 is a flow chart of extracting the low-level texture feature vector of a facial image;
Fig. 4 is a flow chart of face feature extraction for a new facial image.
Embodiment
The present invention is described in detail below with a specific example in conjunction with the accompanying drawings.
First, the process of building the two databases established by the present invention, the celebrity face database and the "background" face database, is described. The detailed process includes the steps shown in Fig. 1.
Step S11: Collect N celebrity names with relatively high public attention as keywords for a search engine.
Step S12: Search for facial images on the Internet using the collected celebrity names, download the first several image results obtained for each keyword, and store them in the local file system, with the retrieval results of each celebrity kept under a separate file path.
Step S13: Manually screen all facial images found under each celebrity's file path: reject falsely detected "faces", and move images that are faces but do not belong to that celebrity into the "background" face picture library. After the above steps, the celebrity face database contains N celebrities, each of whom finally retains n face pictures, and the "background" face database finally retains M face pictures.
Step S14: Apply a mature commercial face detection algorithm to all facial image files downloaded in steps S12 and S13; for each facial image its output is a rectangular bounding box containing the face and two pixel coordinates (the eye positions), see Fig. 2.
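As a hedged illustration of this step (the description relies on an unspecified commercial detector, so OpenCV Haar cascades are substituted here purely as a stand-in; `detect_face_and_eyes` is a hypothetical helper):

```python
# Hedged sketch: face and eye detection with OpenCV Haar cascades as a
# stand-in for the commercial detector mentioned in the description.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eyes(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                                  # rectangular bounding box
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    # keep the two largest eye candidates and return their centers
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    eye_centers = [(x + ex + ew // 2, y + ey + eh // 2) for ex, ey, ew, eh in eyes]
    return (x, y, w, h), eye_centers
```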
Step S15: For each detected face, compute a planar similarity transformation from its rectangular bounding box and two pixel coordinates, and use it to geometrically rectify the facial image into a standard-size image of X*Y resolution. This step is referred to as "calibration".
Step S16: Extract the low-level texture face feature of every facial image using LBP (or HOG/Gabor; the present invention has tested all three classical methods and their performance is essentially comparable, so for convenience LBP is used as the example in the following description). The specific steps are as shown in Fig. 3 and are explained in detail below.
Step S161: For a calibrated facial image, take eight coordinate points at equal intervals along a circle around each pixel position. The coordinates of the points are given by xn = x0 + r*cos(2*pi*n/8), yn = y0 + r*sin(2*pi*n/8).
Step S162: Compute the pixel values at the eight coordinate points by bilinear interpolation and compare each of them with the pixel value of the center point; if it is greater than the center pixel value, record a 1 at that position, otherwise record a 0.
Step S163: Read out the 1s and 0s recorded in the previous step in order around the circle, one bit each, to obtain an 8-bit unsigned integer in the range 0-255. This integer is the LBP code of the pixel.
Step S164: Divide the whole facial image into a 5*7 grid of picture blocks, compute a 256-dimensional histogram of LBP codes within each picture block, and concatenate the histograms of all 35 picture block regions into one long vector, which is the final LBP face feature vector.
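Steps S161-S164 can be sketched as follows; this is only an illustrative reimplementation that delegates the per-pixel coding (8 neighbours on a circle, bilinear interpolation, 0-255 codes) to scikit-image's `local_binary_pattern`, and the assignment of 5 blocks horizontally and 7 vertically is an assumption.

```python
# Hedged sketch of steps S161-S164: per-pixel LBP codes followed by the
# 5*7 grid of 256-bin histograms described above.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(gray_face, radius=1, grid=(5, 7)):
    # P=8 neighbours on a circle of radius `radius`, codes in 0..255
    codes = local_binary_pattern(gray_face, P=8, R=radius, method="default")
    gx, gy = grid                      # assumed: 5 columns * 7 rows of blocks
    h, w = gray_face.shape
    hists = []
    for j in range(gy):
        for i in range(gx):
            block = codes[j * h // gy:(j + 1) * h // gy,
                          i * w // gx:(i + 1) * w // gx]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists).astype(np.float32)  # 35 * 256 = 8960 dims
```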
Step S17: For each celebrity in the celebrity database, use his/her n face pictures as positive samples and the face pictures in the "background" face database as negative samples, feed the face LBP texture features extracted in step S16 into the SVM training module, and thereby obtain a face classifier that can robustly recognize that celebrity. All N celebrity face classifiers are stored in the database.
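A minimal sketch of this training step, assuming scikit-learn's linear SVM as the SVM training module; the containers `celeb_features` (celebrity name mapped to an array of LBP vectors) and `background_features` are hypothetical names for the outputs of step S16.

```python
# Hedged sketch of step S17: one linear SVM per celebrity, trained with that
# celebrity's LBP vectors as positives and the "background" vectors as negatives.
import numpy as np
from sklearn.svm import LinearSVC

def train_celebrity_classifiers(celeb_features, background_features):
    classifiers = {}
    for name, pos in celeb_features.items():
        X = np.vstack([pos, background_features])
        y = np.concatenate([np.ones(len(pos)),
                            np.zeros(len(background_features))])
        clf = LinearSVC(C=1.0)        # C is an illustrative value
        clf.fit(X, y)
        classifiers[name] = clf       # stored in memory here; a database in the text
    return classifiers
```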
At this point, the construction of the celebrity face database and the classifier training process are complete.
The detailed feature extraction steps of the present invention for a new facial image are shown in Fig. 4 and are as follows.
Step S21: Use the face detection algorithm to detect and output the rectangular bounding box of the face and the two pixel coordinates (eye positions) in the input image.
Step S22: Using the detection result of S21, transform the newly input face picture into a picture of standard X*Y resolution with the geometric rectification/normalization algorithm; X and Y are natural numbers whose values can be set as needed.
Step S23: Extract the LBP texture feature vector of the newly input facial image with the LBP texture feature extraction algorithm.
Step S24: Apply the texture feature vector obtained in step S23 to the celebrity classifiers built in step S17, i.e., evaluate the texture feature vector of step S23 with each celebrity classifier built in step S17, obtaining one discrimination score per classifier and thus the discrimination score outputs of the N SVM classifiers.
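Continuing the earlier sketches (and under the same assumptions about the hypothetical `classifiers` container), step S24 amounts to collecting the N decision values into one vector:

```python
# Hedged sketch of step S24: apply every celebrity classifier to the LBP
# vector of the new face and collect the N discrimination scores.
import numpy as np

def celebrity_score_vector(classifiers, lbp_vec):
    # `classifiers` is the dict produced by the training sketch above;
    # decision_function gives the signed margin used as the discrimination score.
    names = sorted(classifiers)            # fix an ordering of the N celebrities
    return np.array([float(classifiers[n].decision_function(
        lbp_vec.reshape(1, -1))[0]) for n in names])
```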
These N discrimination scores, arranged as a vector, form the final feature of the new facial image, and with it processing such as face recognition, face verification and face search can be carried out for the new person.
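One possible use of this final feature, sketched here only as an example (the cosine measure and the threshold value are assumptions, not part of the description), is face verification by comparing the score vectors of two faces:

```python
# Hedged sketch: face verification by cosine similarity between the
# N-dimensional score vectors of two facial images.
import numpy as np

def same_person(feat_a, feat_b, threshold=0.8):
    cos = float(np.dot(feat_a, feat_b) /
                (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-12))
    return cos >= threshold
```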
At this point, the Internet-data-based feature extraction for the newly input face is complete. The method described by the present invention is based on a large-scale collection of Internet reference facial images: LBP (or HOG, Gabor) features are used as the input of the face classifiers, a batch of face classifiers is trained with the SVM algorithm, and finally the discrimination outputs of these classifiers for a newly input face picture are used as its feature.
The above example is only one specific embodiment of the present invention. It should be pointed out that other engineers and researchers in this field can make various modifications and improvements without departing from the inventive concept, and these all belong to the protection scope of the present invention. Therefore, the patent protection scope of the present invention should be defined by the appended claims.
Claims (6)
1. A face feature extraction method based on large-scale image data, the steps of which are:
1) establishing a face database A, in which each person has multiple facial images with a variety of different expressions, poses and illuminations; and establishing a face database B, which contains facial images with a variety of different expressions, poses and illuminations;
2) performing face detection on each facial image in face database A and face database B with a face detection algorithm, and outputting a rectangular bounding box containing the face and two pixel coordinates;
3) rectifying and scaling each facial image according to its corresponding detected rectangular bounding box and two pixel coordinates, generating an image in a set standard format;
4) extracting, with a face feature extraction algorithm, the low-level texture feature vector of each standard-format facial image generated in step 3), the low-level texture feature vector comprising the low-level texture features of the whole face;
5) for each person a in face database A, using the low-level texture feature vectors of all of that person's facial images in face database A as positive samples and the low-level texture feature vectors of all facial images of the people in face database B as negative samples, and training a classifier capable of recognizing the person a with a support vector machine algorithm;
6) for a new facial image, generating an image in the set standard format using the face detection of step 2) and the rectification and scaling method of step 3), and extracting the low-level texture feature vector of the standard-format image;
7) feeding the low-level texture feature vector in parallel to each classifier obtained in step 5), each classifier discriminating the low-level texture feature vector of the new facial image to obtain a discrimination score, the discrimination score representing the similarity between the new face and the face of the person recognizable by that classifier;
8) building a vector from all the discrimination scores as the feature vector of the new facial image.
2. The method of claim 1, characterized in that the facial images of every person in face database B do not belong to any person in face database A.
3. The method of claim 1 or 2, characterized in that face database A and face database B are built as follows:
1) collecting a plurality of celebrity names as keywords of a search engine and searching for facial images on the Internet;
2) downloading the first several image results obtained for each keyword and storing them under a separate file path;
3) screening all facial images detected under each celebrity's file path, rejecting falsely detected facial images, and storing facial images that are faces but do not belong to that celebrity in face database B; the facial images remaining under all celebrity file paths constitute face database A.
4. The method of claim 1 or 2, characterized in that the rectification method is: for each detected facial image, computing a planar similarity transformation according to its corresponding rectangular bounding box and two pixel coordinates.
5. The method of claim 1, characterized in that the extraction algorithm for the low-level texture feature vector is a Gabor filter algorithm, an HOG algorithm, or an LBP algorithm.
6. A face recognition method based on the face feature extraction from large-scale image data, characterized in that face recognition is performed on the new person based on the feature vector of the new facial image extracted by the method of claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210495625.0A CN102938065B (en) | 2012-11-28 | 2012-11-28 | Face feature extraction method and face identification method based on large-scale image data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102938065A CN102938065A (en) | 2013-02-20 |
CN102938065B true CN102938065B (en) | 2017-10-20 |
Family
ID=47696960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210495625.0A Active CN102938065B (en) | 2012-11-28 | 2012-11-28 | Face feature extraction method and face identification method based on large-scale image data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102938065B (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103218609B (en) * | 2013-04-25 | 2016-01-20 | 中国科学院自动化研究所 | A kind of Pose-varied face recognition method based on hidden least square regression and device thereof |
CN103258190A (en) * | 2013-05-13 | 2013-08-21 | 苏州福丰科技有限公司 | Face recognition method used for mobile terminal |
CN103353942A (en) * | 2013-07-30 | 2013-10-16 | 上海电机学院 | Interactive face identification system and method |
CN103793697B (en) * | 2014-02-17 | 2018-05-01 | 北京旷视科技有限公司 | The identity mask method and face personal identification method of a kind of facial image |
CN103824090B (en) * | 2014-02-17 | 2017-02-08 | 北京旷视科技有限公司 | Adaptive face low-level feature selection method and face attribute recognition method |
CN104020848A (en) * | 2014-05-15 | 2014-09-03 | 中航华东光电(上海)有限公司 | Static gesture recognizing method |
CN104008395B (en) * | 2014-05-20 | 2017-06-27 | 中国科学技术大学 | A kind of bad video intelligent detection method based on face retrieval |
CN105069475B (en) * | 2015-08-06 | 2018-12-18 | 电子科技大学 | The image processing method of view-based access control model attention mechanism model |
CN107977647B (en) * | 2017-12-20 | 2020-09-04 | 上海依图网络科技有限公司 | Face recognition algorithm evaluation method suitable for public security actual combat |
CN109934116B (en) * | 2019-02-19 | 2020-11-24 | 华南理工大学 | Standard face generation method based on confrontation generation mechanism and attention generation mechanism |
CN109978552B (en) * | 2019-03-29 | 2022-09-20 | 吴伟运 | Payment processing method, device and equipment based on identity card information |
CN110096989B (en) * | 2019-04-24 | 2022-09-09 | 深圳爱莫科技有限公司 | Image processing method and device |
CN110287835A (en) * | 2019-06-14 | 2019-09-27 | 南京云创大数据科技股份有限公司 | A kind of Asia face database Intelligent Establishment method |
CN110807108A (en) * | 2019-10-15 | 2020-02-18 | 华南理工大学 | Asian face data automatic collection and cleaning method and system |
CN112861893B (en) * | 2019-11-27 | 2023-03-24 | 四川大学 | Stranger identification algorithm based on CSI amplitude-subcarrier probability distribution |
CN111414803A (en) * | 2020-02-24 | 2020-07-14 | 北京三快在线科技有限公司 | Face recognition method and device and electronic equipment |
CN112699810B (en) * | 2020-12-31 | 2024-04-09 | 中国电子科技集团公司信息科学研究院 | Method and device for improving character recognition precision of indoor monitoring system |
CN116012924B (en) * | 2023-01-30 | 2023-06-27 | 人民网股份有限公司 | Face gallery construction method and device and computing equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6292575B1 (en) * | 1998-07-20 | 2001-09-18 | Lau Technologies | Real-time facial recognition and verification system |
CN1908960A (en) * | 2005-08-02 | 2007-02-07 | 中国科学院计算技术研究所 | Feature classification based multiple classifiers combined people face recognition method |
CN101719223A (en) * | 2009-12-29 | 2010-06-02 | 西北工业大学 | Identification method for stranger facial expression in static image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102682309B (en) * | 2011-03-14 | 2014-11-19 | 汉王科技股份有限公司 | Face feature registering method and device based on template learning |
- 2012-11-28 CN CN201210495625.0A patent/CN102938065B/en active Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | C06 | Publication | |
 | PB01 | Publication | |
 | C10 | Entry into substantive examination | |
 | SE01 | Entry into force of request for substantive examination | |
 | GR01 | Patent grant | |
 | GR01 | Patent grant | |