CN101561875B - Method for positioning two-dimensional face images - Google Patents


Publication number
CN101561875B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2009101433254A
Other languages
Chinese (zh)
Other versions
CN101561875A (en)
Inventor
丁晓青
方驰
王丽婷
丁镠
刘长松
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN2009101433254A
Publication of CN101561875A
Application granted
Publication of CN101561875B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a method for positioning two-dimensional face images, belonging to the fields of computer vision and pattern recognition. The method comprises the steps of: acquiring two-dimensional face images from a preset database; constructing two-dimensional face shape models from the images in the database; constructing two-dimensional face local texture models from the same images; and positioning the two-dimensional face images according to the shape models and the local texture models. By building both the shape models and the local texture models from the preset database, the method achieves accurate positioning of two-dimensional face images; and by combining point-pair comparison features with feature selection when building the local texture models, it improves both the computation speed and the localization accuracy of the feature points.

Description

A Method for Positioning Two-Dimensional Face Images
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and in particular to a method for positioning two-dimensional face images.
Background technology
A face recognition system takes face recognition technology as its core. It is an emerging biometric identification technology and a sophisticated research focus of the current international technical community. Because a human face is hard to duplicate, convenient to capture, and requires no cooperation from the subject, face recognition systems enjoy a wide range of applications.
A key problem in face recognition is the localization of the face image. In a typical recognition pipeline, the face image is first localized, and recognition then proceeds from the localization result.
Face recognition still faces a series of difficult problems; for example, when the pose, expression, or ambient illumination of the face (PIE: Pose, Illumination, Expression) varies significantly, the recognition rate drops sharply. How to recognize faces under different poses, illuminations, and expressions remains a focus of current research. Conventional approaches to pose and illumination variation must obtain training images of faces under sufficiently many different poses and illumination conditions, yet in many situations such images are not easy to acquire. To achieve face recognition independent of pose and ambient illumination, the prior art proposes the following methods:
The first class comprises pose-invariant feature extraction methods, which address pose variation by extracting features robust to pose changes. The second class comprises solutions based on multi-view face images, such as extending conventional subspace methods to multi-view subspaces. The third class comprises methods based on three-dimensional face models; after Blanz proposed the three-dimensional face modeling method, methods that generate virtual images of a face under each pose from a 3D face model achieved good results on the pose problem.
However, the prior art still has many shortcomings. The main drawback of pose-invariant feature extraction is that pose-invariant features are relatively difficult to extract. For solutions based on multi-view face images, the main drawback is that the pose of a face is hard to calibrate exactly, and an incorrect pose estimate degrades recognition performance. Methods based on 3D face models can handle the pose problem well but still face many difficulties, such as heavy computation, slow speed, and low reconstruction accuracy; they also require manually located feature points for initialization. All of this complicates the localization of face images and their subsequent recognition.
Summary of the invention
To achieve automatic, fast, and accurate face recognition, the embodiment of the invention provides a method for positioning two-dimensional face images, comprising: obtaining the two-dimensional face images in a preset database; using the images in said database to build a two-dimensional face shape model, which comprises: dividing the images in said database by pose, calibrating feature points on the face images of each pose to obtain said feature-point coordinate values; using said coordinate values to build the shape vectors of the images under the corresponding pose; normalizing said shape vectors to obtain normalized shape vectors; performing principal component analysis on said normalized shape vectors; building the shape model of each pose from the principal component analysis result; and assembling the two-dimensional face shape model from the shape models of all poses; using the images in said database to build a two-dimensional face local texture model; and positioning said two-dimensional face images according to said shape model and local texture model.
By building the two-dimensional face shape model and local texture model from the preset database, the embodiment achieves accurate positioning of two-dimensional face images; combining point-pair comparison features with feature selection during local texture modeling improves both the computation speed and the localization accuracy of the feature points.
Description of drawings
Fig. 1 is the flow chart of the method for positioning two-dimensional face images provided by Embodiment 1 of the invention;
Fig. 2 shows the two-dimensional face shape model for the leftward pose, provided by Embodiment 1;
Fig. 3 shows the two-dimensional face shape model for the frontal pose, provided by Embodiment 1.
Embodiment
To make the purpose, technical scheme, and advantages of the invention clearer, the embodiments of the invention are described in further detail below with reference to the accompanying drawings.
Embodiment 1
The embodiment of the invention provides a method for positioning two-dimensional face images. The method performs multi-subspace shape modeling on the two-dimensional face images in a database to obtain a two-dimensional face shape model; performs local texture modeling on the images to obtain a two-dimensional face local texture model; and accurately positions the two-dimensional face images according to the shape model and the local texture model. As shown in Fig. 1, the embodiment comprises:
101: obtain the two-dimensional face images in a preset database.
The two-dimensional face database in this embodiment is taken from 2000 European and Asian two-dimensional face images. The data of each face comprise texture data (R, G, B) together with variations of pose, expression, and illumination.
102: perform multi-subspace shape modeling on the two-dimensional face images in the database to obtain the two-dimensional face shape model.
The two-dimensional face shape model is built from the database in the following concrete steps:
102a: divide the two-dimensional face images in the database by pose; calibrate feature points on the face images of each pose to obtain the feature-point coordinate values; and use the coordinate values to build the shape vectors of the images under the corresponding pose.
Concretely, the two-dimensional face images are divided by pose into five classes: leftward, rightward, upward, downward, and frontal. Taking the leftward pose as an example, suppose the database contains N leftward-pose face samples in total. For each face of this pose, 88 feature points are calibrated (a number other than 88 may also be used), and the feature-point coordinates (x, y) are obtained as raw data; the raw data are then quantized to obtain the shape vector of the face.
There are several methods for feature-point calibration, the common one being manual marking. This embodiment adopts a semi-automatic, interactive marking method. Unlike fully manual marking, semi-automatic marking does not require every point to be placed by hand; the feature points of the face are calibrated through operations such as dragging, which can be implemented with suitable software.
The shape vector of a face is formed from the coordinates of the 88 feature points:
X_i = [x_{i0}, y_{i0}, x_{i1}, y_{i1}, …, x_{ij}, y_{ij}, …, x_{i,87}, y_{i,87}]^T    (6)
102b: normalize the shape vectors in center, scale, and direction.
When normalizing a face image, the eyes in the image are usually taken as the reference points. Concretely, center normalization uses the following formula:
x̄_i = (1/m) Σ_{j=1}^{m} x_{ij},  ȳ_i = (1/m) Σ_{j=1}^{m} y_{ij};  x'_{ij} = x_{ij} − x̄_i,  y'_{ij} = y_{ij} − ȳ_i,  ∀ j = 1…m    (7)
Scale normalization uses the following formula:
‖S'_i‖ = sqrt( Σ_{j=1}^{m} (x'_{ij}² + y'_{ij}²) ),  x''_{ij} = x'_{ij} / ‖S'_i‖,  y''_{ij} = y'_{ij} / ‖S'_i‖,  ∀ j = 1…m    (8)
Direction normalization uses the Procrustes analysis algorithm to eliminate the in-plane rotation of the face.
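The center and scale normalization of Eqs. (7)-(8) can be sketched in NumPy as follows. This is a minimal illustration; the function name and the toy four-point shape are ours, not from the patent.

```python
import numpy as np

def normalize_shape(points):
    """Center and scale-normalize one face shape, following Eqs. (7)-(8).

    points: (m, 2) array of feature-point coordinates (x_ij, y_ij).
    Returns the shape translated to zero mean and scaled to unit norm.
    """
    centered = points - points.mean(axis=0)   # Eq. (7): subtract the centroid
    norm = np.sqrt((centered ** 2).sum())     # Eq. (8): ||S'_i||
    return centered / norm                    # unit-norm shape

# Example: a square of 4 "feature points"
square = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
s = normalize_shape(square)
```

After normalization the shape has zero centroid and unit Euclidean norm, so shapes from different images become directly comparable before the direction (Procrustes) step.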
102c: perform principal component analysis on all normalized shape vectors, build the shape model of each pose from the analysis result, and assemble the two-dimensional face shape model from the shape models of all poses.
Principal component analysis of the shape vectors of the leftward-pose face data proceeds as follows:
1) compute the mean shape vector and the covariance matrix of the two-dimensional face data.
Concretely, the mean shape vector is computed with the following formula: X̄ = (1/N) Σ_{i=1}^{N} X_i    (9)
The covariance matrix is computed with the following formula: C_x = (1/N) Σ_{i=1}^{N} (X_i − X̄)(X_i − X̄)^T    (10)
2) build the shape model of each pose from the principal component analysis result, and assemble the two-dimensional face shape model from the shape models of all poses. Specifically:
The eigenvectors P are obtained from the mean shape vector and the covariance matrix, and the shape model of the leftward pose is built as X = X̄ + Pb, where b is the shape parameter of the principal component analysis (PCA).
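Steps (9)-(10) and the model X = X̄ + Pb amount to ordinary PCA on the stacked shape vectors. A minimal NumPy sketch follows; the function names and toy 4-point (8-dimensional) data are illustrative, whereas the patent uses 88-point, 176-dimensional shape vectors.

```python
import numpy as np

def build_shape_model(shapes, n_modes=3):
    """Build a PCA shape model from normalized shape vectors, per Eqs. (9)-(10).

    shapes: (N, 2m) array, one row per training shape X_i.
    Returns the mean shape X_bar and the top n_modes eigenvectors P.
    """
    X_bar = shapes.mean(axis=0)            # Eq. (9): mean shape vector
    C = np.cov(shapes.T, bias=True)        # Eq. (10): covariance matrix C_x
    eigvals, eigvecs = np.linalg.eigh(C)   # eigendecomposition (C symmetric)
    order = np.argsort(eigvals)[::-1]      # sort modes by decreasing variance
    P = eigvecs[:, order[:n_modes]]        # (2m, n_modes) principal modes
    return X_bar, P

def synthesize(X_bar, P, b):
    """Generate a shape X = X_bar + P b for shape parameters b."""
    return X_bar + P @ b

# Toy data: 10 noisy copies of a 4-point shape (8-dim vectors)
rng = np.random.default_rng(0)
base = np.array([0, 0, 2, 0, 2, 2, 0, 2], dtype=float)
data = base + 0.01 * rng.standard_normal((10, 8))
X_bar, P = build_shape_model(data, n_modes=2)
X = synthesize(X_bar, P, np.zeros(2))      # b = 0 reproduces the mean shape
```

Varying b within a few standard deviations of each mode produces the family of plausible shapes the text describes for Figs. 2 and 3.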
Concretely, as shown in Fig. 2, taking the shape model of the leftward pose as an example, different shapes can be obtained by setting different shape parameters b, giving the shape model a certain range of variation.
Correspondingly, Fig. 3 shows the shape model of the frontal face.
Shape modeling is performed on the face images of each pose in turn to obtain the shape models of all poses; the modeling method is the same as above and is not repeated.
Further, any face shape X can be expressed as X = T_a(X̄ + Pb), where a is a geometric parameter comprising the horizontal and vertical translations X_t, Y_t, the scale s, and the rotation angle θ. T_a denotes the geometric transformation of the shape, as in the following formula:
a = (X_t, Y_t, s, θ);  T_{X_t,Y_t,s,θ} (x, y)^T = (X_t, Y_t)^T + [ s·cosθ  −s·sinθ ; s·sinθ  s·cosθ ] (x, y)^T    (11)
Finally, the two-dimensional face shape model is assembled from the shape models of all poses. For example, let M_i, i = 1, 2, 3, 4, 5, correspond to the leftward, rightward, upward, downward, and frontal pose models respectively, where i is the pose parameter. For each pose model M_i, let its mean vector be X̄_i and its PCA eigenvectors be P_i; the combined two-dimensional face shape model is then X = T_{a_i}(X̄_i + P_i b_i).
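The similarity transform T_a of Eq. (11) can be sketched as follows, assuming the shape is stored as an (m, 2) array of (x, y) points; the function name is illustrative.

```python
import numpy as np

def apply_T(points, Xt, Yt, s, theta):
    """Apply the similarity transform T_a of Eq. (11):
    scale s, in-plane rotation theta, then translation (Xt, Yt)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Each row (x, y) is mapped to (Xt, Yt) + s * R @ (x, y)
    return points @ (s * R).T + np.array([Xt, Yt])

# Rotate (1, 0) by 90 degrees, scale by 2, shift by (1, 1) -> (1, 3)
p = np.array([[1.0, 0.0]])
q = apply_T(p, 1.0, 1.0, 2.0, np.pi / 2)
```

In the full model, apply_T would be applied point-by-point to the shape X̄_i + P_i b_i, giving X = T_{a_i}(X̄_i + P_i b_i).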
103: perform local texture modeling on the two-dimensional face images to obtain the two-dimensional face local texture model. This specifically comprises:
This embodiment uses a discriminative learning method: it analyzes the difference between the texture around each feature point and the texture around other nearby points, and solves the feature-point localization problem by classification, describing the local texture by combining point-pair comparison features with random-forest feature selection.
Concretely, the localization feature proposed by the embodiment is the point-pair comparison feature: a comparison of the gray values of any two pixels in the image. The local texture modeling designs one classifier per feature point, so the whole face requires 88 classifiers in total. Taking the left eye corner as an example, two points p1 and p2 are chosen arbitrarily within a preset range (for example, a 5 × 5 coordinate range) and compared. With I(p) denoting the gray value of pixel p, the classifier output is expressed as:
h_n = 1 if I(p1) ≥ I(p2), 0 otherwise    (12)
That is, when I(p1) ≥ I(p2) the weak classifier outputs 1, otherwise it outputs 0. For an image block of size 32 × 32, choosing two points arbitrarily gives C(1024, 2) combinations, so the total number of weak classifiers is about 520,000.
The point-pair comparison feature only needs to take two gray values from the original gray image and compare them; it requires no transforms, multiplications, divisions, or square roots, so the feature is stable and fast to compute. Moreover, the geometric positions of the compared points are explicit, so for feature-point localization it performs better than prior-art features such as Gabor, gradient, or Haar features.
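The weak classifier of Eq. (12) reduces to a single pixel comparison. A sketch follows; the toy 5 × 5 patch and the chosen point pairs are illustrative, not the patent's training data.

```python
import numpy as np
from math import comb

def point_pair_feature(patch, p1, p2):
    """Weak classifier h_n of Eq. (12): compare the gray values of two pixels.

    patch: 2-D gray-level image patch; p1, p2: (row, col) pixel coordinates.
    Returns 1 if I(p1) >= I(p2), else 0 -- a single comparison, with no
    transforms or arithmetic, which is what makes the feature fast and stable.
    """
    return 1 if patch[p1] >= patch[p2] else 0

patch = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 gray patch
h = point_pair_feature(patch, (4, 4), (0, 0))     # I=24 >= I=0, so h = 1

# For a 32x32 block there are C(1024, 2) candidate pairs, matching the
# "about 520,000 weak classifiers" figure in the text.
n_pairs = comb(32 * 32, 2)
```

The exact count C(1024, 2) is 523,776, consistent with the approximate figure of 520,000 given above.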
Because the number of point-pair comparison features is very large, a feature selection method must be combined with them. This embodiment uses the random forest method, whose basic idea is to integrate many weak classifiers into one strong classifier. A random forest consists of N decision trees (T1, T2, …, TN); each tree is a decision-tree classifier, each node of a tree is a weak classifier, and the decision of the forest is the average of all the trees' classification results. During training, the trees of the forest differ only in their training sets, each a subset randomly drawn from the full sample set; the training method of every tree is identical: at each node, the tree chooses the weak classifier with the best current classification performance. During classification, take a C-class problem as an example: the forest outputs C confidences, where each confidence p(f(p) = c) represents the probability that a sample p belongs to class c. The sample p produces C outputs from each tree classifier Tn, and the final decision of the random forest is the average over all trees, as shown below:
p(c | p) = (1/N) Σ_{n=1}^{N} p_n(c | p)    (13)
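The forest decision described above is just an average of per-tree confidence vectors. A schematic sketch in which the trained decision trees are replaced by stand-in callables (the actual trees would be built from selected point-pair features):

```python
import numpy as np

def forest_predict(trees, sample):
    """Random-forest decision of Eq. (13): average the C class-confidence
    vectors p_n(c | sample) produced by the N decision trees.

    trees: list of callables, each mapping a sample to a length-C
    probability vector (stand-ins for the trained trees T1..TN).
    """
    probs = np.array([t(sample) for t in trees])  # (N, C) per-tree outputs
    return probs.mean(axis=0)                     # average over all N trees

# Three toy "trees" for a 2-class problem (feature point vs. background)
trees = [lambda s: np.array([0.9, 0.1]),
         lambda s: np.array([0.6, 0.4]),
         lambda s: np.array([0.9, 0.1])]
p = forest_predict(trees, sample=None)            # averaged confidences
```

Averaging keeps the output a valid probability vector while smoothing out the variance of individual trees, which is the point of the ensemble.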
104: accurately position the two-dimensional face images according to the two-dimensional face shape model and the local texture model.
Concretely, the shape model X = T_{a_i}(X̄_i + P_i b_i) of each two-dimensional face image is optimized to obtain the optimal pose model M_i together with the optimal geometric parameter a_i and shape parameter b_i under that pose model. This yields the optimal shape model of the face image, which realizes its accurate localization. The concrete method is as follows:
The objective function of the traditional parameter optimization algorithm is:
(â, b̂) = min_{a,b} ‖Y − T_a(X̄ + Pb)‖² = min_{a,b} (Y − T_a(X̄ + Pb))^T (Y − T_a(X̄ + Pb))    (14)
Adding the pose parameter i, the embodiment improves the optimization algorithm; the objective function it proposes is:
(â_i, b̂_i) = min_{a_i,b_i} (Y − T_{a_i}(X̄_i + P_i b_i))^T W_i (Y − T_{a_i}(X̄_i + P_i b_i)) + Σ_{j=1}^{t} b_{ij}² / σ_j²    (15)
The objective function (15) differs from the traditional objective function (14) in three respects. First, (15) incorporates the output of the random-forest classifiers as a weight matrix W_i in the optimization target, i.e. the result obtained by the random-forest classifiers of the i-th pose model M_i. Second, it adds the constraint that the shape parameters fall in a compact region of the PCA shape-parameter space, i.e. the penalty term Σ_{j=1}^{t} b_{ij}² / σ_j² limiting the PCA shape parameter b_i. Third, the two-dimensional shape model itself is optimized, and the face image is accurately positioned according to the optimal shape model M_i. Through this objective function, the optimized model parameters come closer to the expected values.
Further, the optimization algorithm for the model parameters proposed by the embodiment proceeds as follows:
1) initialize all pose models M_i, i ∈ {1, 2, 3, 4, 5}, by locating the eye regions in the two-dimensional face images of the different poses, and obtain the corresponding geometric parameters a_i and shape parameters b_i.
2) optimize the chosen features: within a preset range around each original feature point of the shape model, select the point with the maximum random-forest output probability as the new feature point. Concretely, the preset range may be a 5 × 5 coordinate range.
3) optimize the geometric parameter of the pose: â_i = min_{a_i} (Y − T_{a_i}(X̄_i + P_i b_i))^T W_i (Y − T_{a_i}(X̄_i + P_i b_i))    (16)
4) optimize the shape parameter: b̂_i = min_{b_i} (Y − T_{â_i}(X̄_i + P_i b_i))^T W_i (Y − T_{â_i}(X̄_i + P_i b_i)) + Σ_{j=1}^{t} b_{ij}² / σ_j²    (17)
5) if ‖â_i − a_i‖ + ‖b̂_i − b_i‖ < ε, stop the optimization; otherwise set a_i = â_i, b_i = b̂_i and return to step 2).
6) compare the optimal feature-point localization results of all pose models, choose the one that minimizes formula (15) as the optimal result, and obtain the optimal pose i and the corresponding a_i and b_i.
The optimal face shape model is built from the optimal parameters, realizing accurate localization of each two-dimensional face image.
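The alternating optimization of steps 1)-5) can be sketched under strong simplifying assumptions: the geometric parameter is a pure translation t, the weight matrix W_i is the identity, and P has orthonormal columns, in which case both sub-problems (16) and (17) have closed-form least-squares solutions. This illustrates the loop structure only, not the patent's full algorithm; all names are ours.

```python
import numpy as np

def fit_shape(Y, X_bar, P, sigma2, eps=1e-8, max_iter=100):
    """Alternate the geometric step (Eq. 16) and the shape step (Eq. 17)
    until the parameter change falls below eps, as in step 5).

    Y: (n,) observed feature-point vector; X_bar: (n,) mean shape;
    P: (n, k) PCA modes (orthonormal columns); sigma2: (k,) PCA variances
    used in the penalty term sum_j b_j^2 / sigma_j^2.
    """
    n, k = P.shape
    t, b = 0.0, np.zeros(k)
    for _ in range(max_iter):
        # Eq. (16) analogue: best translation given b is the mean residual
        t_new = (Y - (X_bar + P @ b)).mean()
        # Eq. (17) analogue: ridge-penalized shape parameters given t;
        # with orthonormal P, (P^T P + diag(1/sigma2)) b = P^T r decouples
        r = Y - t_new - X_bar
        b_new = (P.T @ r) / (1.0 + 1.0 / sigma2)
        if abs(t_new - t) + np.linalg.norm(b_new - b) < eps:
            t, b = t_new, b_new
            break
        t, b = t_new, b_new
    return t, b

# Toy target: translation 3.0 plus 2.0 units along one orthonormal mode
X_bar = np.zeros(4)
P = np.array([[0.5], [-0.5], [0.5], [-0.5]])   # unit norm, zero mean
Y = 3.0 + 2.0 * P[:, 0]
t, b = fit_shape(Y, X_bar, P, sigma2=np.array([100.0]))
```

With a weak penalty (large sigma2), the recovered translation is exact and the shape parameter shrinks only slightly toward zero, mirroring how the penalty term in (17) keeps b_i in a compact region.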
By building the two-dimensional face shape model and the two-dimensional face local texture model from the preset database, the embodiment of the invention achieves accurate positioning of two-dimensional face images; the combination of point-pair comparison features and feature selection during local texture modeling improves both the computation speed and the localization accuracy of the feature points.
The embodiment described above is only a preferred embodiment of the invention; ordinary variations and substitutions made by those skilled in the art within the scope of the technical scheme of the invention shall all fall within the protection scope of the invention.

Claims (5)

1. A method for positioning two-dimensional face images, characterized by comprising:
obtaining the two-dimensional face images in a preset database;
using the two-dimensional face images in said database to build a two-dimensional face shape model, comprising: dividing the two-dimensional face images in said database by pose, and calibrating feature points on the face images of each pose to obtain said feature-point coordinate values; using said feature-point coordinate values to build the shape vectors of the two-dimensional face images under the corresponding pose; normalizing said shape vectors to obtain normalized shape vectors; performing principal component analysis on said normalized shape vectors; building the shape model of each pose from the principal component analysis result; and assembling the two-dimensional face shape model from said shape models of all poses;
using the two-dimensional face images in said database to build a two-dimensional face local texture model;
positioning said two-dimensional face images according to said two-dimensional face shape model and two-dimensional face local texture model.
2. The method for positioning two-dimensional face images of claim 1, characterized in that using the two-dimensional face images in said database to build the two-dimensional face local texture model comprises:
performing local texture modeling on the two-dimensional face images in said database to obtain the two-dimensional face local texture model.
3. The method for positioning two-dimensional face images of claim 2, characterized in that performing local texture modeling on the two-dimensional face images in said database to obtain the two-dimensional face local texture model comprises:
obtaining the feature-point coordinate values on said two-dimensional face images;
comparing the gray values of two pixels within a preset range around a feature point of said two-dimensional face image to obtain point-pair comparison features;
applying a feature selection method to said point-pair comparison features to obtain a selection result;
building the two-dimensional face local texture model from said selection result.
4. The method for positioning two-dimensional face images of claim 3, characterized in that applying a feature selection method to said point-pair comparison features to obtain a selection result comprises:
applying the random-forest feature selection method to said point-pair comparison features to obtain the selection result.
5. The method for positioning two-dimensional face images of claim 1, characterized in that positioning said two-dimensional face images according to said two-dimensional face shape model and two-dimensional face local texture model comprises:
optimizing the shape model of said two-dimensional face image to be identified according to a preset algorithm to obtain the optimal pose parameter, geometric parameter, and shape parameter;
building the optimal shape model of said two-dimensional face image to be identified from said optimal pose parameter, geometric parameter, and shape parameter;
accurately positioning said two-dimensional face image to be identified using said optimal shape model and said two-dimensional face local texture model.
CN2009101433254A 2008-07-17 2008-07-17 Method for positioning two-dimensional face images Active CN101561875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009101433254A CN101561875B (en) 2008-07-17 2008-07-17 Method for positioning two-dimensional face images


Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN2008101167815A Division CN101320484B (en) 2008-07-17 2008-07-17 Three-dimensional human face recognition method based on human face full-automatic positioning

Publications (2)

Publication Number Publication Date
CN101561875A CN101561875A (en) 2009-10-21
CN101561875B true CN101561875B (en) 2012-05-30

Family

ID=41220672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101433254A Active CN101561875B (en) 2008-07-17 2008-07-17 Method for positioning two-dimensional face images

Country Status (1)

Country Link
CN (1) CN101561875B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102640168B (en) * 2009-12-31 2016-08-03 诺基亚技术有限公司 Method and apparatus for facial Feature Localization based on local binary pattern
CN104091148B (en) * 2014-06-16 2017-06-27 联想(北京)有限公司 A kind of man face characteristic point positioning method and device
CN104036255B (en) * 2014-06-21 2017-07-07 电子科技大学 A kind of facial expression recognizing method
CN104598873A (en) * 2014-12-24 2015-05-06 苏州福丰科技有限公司 Three-dimensional face recognition method of door lock
CN111299815B (en) * 2020-02-13 2021-02-09 西安交通大学 Visual detection and laser cutting trajectory planning method for low-gray rubber pad

Citations (2)

Publication number Priority date Publication date Assignee Title
CN1811793A (en) * 2006-03-02 2006-08-02 复旦大学 Automatic positioning method for characteristic point of human faces
CN1945595A (en) * 2006-10-30 2007-04-11 邹采荣 Human face characteristic positioning method based on weighting active shape building module


Also Published As

Publication number Publication date
CN101561875A (en) 2009-10-21

Similar Documents

Publication Publication Date Title
CN101320484B (en) Three-dimensional human face recognition method based on human face full-automatic positioning
CN101561874B (en) Method for recognizing face images
CN106228185B (en) A kind of general image classifying and identifying system neural network based and method
CN102663413B (en) Multi-gesture and cross-age oriented face image authentication method
Felzenszwalb et al. Visual object detection with deformable part models
CN103870811B (en) A kind of front face Quick method for video monitoring
CN103295025B (en) A kind of automatic selecting method of three-dimensional model optimal view
CN103761536B (en) Human face beautifying method based on non-supervision optimal beauty features and depth evaluation model
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN105243139A (en) Deep learning based three-dimensional model retrieval method and retrieval device thereof
CN103984953A (en) Cityscape image semantic segmentation method based on multi-feature fusion and Boosting decision forest
CN105608441B (en) Vehicle type recognition method and system
CN103942577A (en) Identity identification method based on self-established sample library and composite characters in video monitoring
CN105046272B (en) A kind of image classification method based on succinct non-supervisory formula convolutional network
CN104794441B (en) Human face characteristic positioning method based on active shape model and POEM texture models under complex background
CN102930300B (en) Method and system for identifying airplane target
CN104700076A (en) Face image virtual sample generating method
CN101561875B (en) Method for positioning two-dimensional face images
CN107092931B (en) Method for identifying dairy cow individuals
CN105373777A (en) Face recognition method and device
CN104598885A (en) Method for detecting and locating text sign in street view image
CN109871892A (en) A kind of robot vision cognitive system based on small sample metric learning
CN106503748A (en) A kind of based on S SIFT features and the vehicle targets of SVM training aids
Beksi et al. Object classification using dictionary learning and rgb-d covariance descriptors
CN104050460B (en) The pedestrian detection method of multiple features fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant