CN102214299A - Method for positioning facial features based on improved ASM (Active Shape Model) algorithm - Google Patents


Info

Publication number
CN102214299A
CN102214299A (application CN2011101674084A / CN201110167408A)
Authority
CN
China
Prior art keywords
shape
vector
model
face
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011101674084A
Other languages
Chinese (zh)
Inventor
解梅 (Xie Mei)
魏云龙 (Wei Yunlong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN2011101674084A priority Critical patent/CN102214299A/en
Publication of CN102214299A publication Critical patent/CN102214299A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for positioning facial features based on an improved ASM (Active Shape Model) algorithm, belonging to the technical field of computer vision and image processing. The method comprises the following steps: first, manually calibrating feature points; second, establishing a statistical shape model and a local gray-level model for an upper model and a lower model separately; third, searching and matching the feature points in the upper and lower models independently; and finally, generating an instance of a composite model constrained by an energy function. To address the difficulty the traditional ASM method has in positioning features when the face carries an expression, the disclosed method divides the facial features into an upper shape region and a lower shape region according to the correlation of their variations, models the statistical shape model and the local gray-level model of each region separately, introduces an energy function into the feature matching and searching process to constrain the errors of the composite shape instances generated from the upper and lower models, and finally obtains an accurate feature positioning result. The method thereby further improves the feature positioning accuracy of the ASM algorithm in the presence of facial expressions.

Description

Facial feature positioning method based on an improved ASM algorithm
The invention belongs to the technical fields of computer vision and image processing. It relates generally to face recognition within biometric identification, and specifically to a facial feature positioning method based on the ASM algorithm.
Background art
In recent years, with the rapid development of information technology, how to identify a person quickly and accurately has become a key technical problem that urgently needs solving in order to ensure information security and public safety. Biometric identification technology arose to meet this need and has become a mainstream research topic in the field of information security worldwide. It uses the intrinsic physiological structures and behavioral characteristics of the human body to verify personal identity. Face recognition, an important branch of biometric identification that combines computer vision and image processing techniques, has become a popular research direction in biometrics thanks to its uniqueness, stability, ease of use, and high user acceptance. Face recognition is widely applied in community security, network video surveillance, border entry and exit inspection, staff attendance, and home entertainment, and has great economic and practical significance. At present, it has been applied to border security inspection, bank access-control systems, user login management for automobiles and electronic products, and community safety management, and it can also free people from managing all kinds of bank cards, credit cards, identity cards, and community insurance certificates. With the development of computer vision and image processing techniques, face recognition is receiving more and more attention. For details, see: A. K. Jain, A. Ross, and S. Prabhakar, "An Introduction to Biometric Recognition", IEEE Transactions on CSVT, Special Issue on Image- and Video-Based Biometrics, pp. 4-20, 2003.
In face recognition, accurately extracting the features of the face image is an essential step, and its accuracy directly affects the recognition rate of the whole face recognition system. In practice, because a face often carries a certain degree of expression and pose variation, the accuracy and robustness of facial feature point extraction algorithms still need further improvement. How to extract facial feature points quickly and accurately from low-quality face images affected by expression and pose variation is the main problem studied here. For details, see: John G. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, 1993.
Current facial feature extraction methods fall mainly into three categories: feature positioning based on skin color, positioning based on transform domains, and positioning based on statistical models.
(1) Feature positioning based on skin color. The target image is first segmented into skin-color regions, and the regions containing skin color are taken as face candidates; a face template is then matched against all candidate regions, or the eyes are located first and used to locate the face region, in order to extract the facial features. This kind of method demands good illumination and has low accuracy: in practical applications under changing lighting the positioning result is often unpredictable, and when the background color is close to the skin color, facial features can hardly be extracted at all. For details, see: C. N. R. Kumar and A. Bindu, "An efficient skin illumination compensation model for efficient face detection", IEEE 32nd Annual Conference on Industrial Electronics, 2006, pp. 3444-3449.
(2) Positioning based on transform domains, e.g. face positioning based on the wavelet transform. An operator such as the Sobel or Canny operator extracts components of different frequencies from the face image in order to find the position of the face. Its shortcoming is interference from background detail, which makes the face positioning result inaccurate. For details, see: Jarmo Ilonen and Joni-Kristian Kamarainen, "Image Feature Localization by Multiple Hypothesis Testing of Gabor Features", IEEE Transactions on Image Processing, vol. 17, no. 3, 2008, pp. 311-325.
(3) Feature extraction based on statistical models. Statistical methods model the shape and texture information of the face statistically, then use the resulting model to search the face region under test and extract the target features in a directed way, which yields more accurate results. The statistical methods mainly include facial feature extraction based on ASM and facial feature extraction based on AAM. A classic example is the patent "A facial feature positioning method based on the ASM algorithm" filed by the University of Electronic Science and Technology of China in 2009, application number 200910059648.5. That patent accelerates the normalization and alignment of the training samples; although this improves the running speed, its positioning is not ideal when the face carries an expression.
Summary of the invention
The task of the present invention is to provide a facial feature extraction method based on an improved ASM algorithm that can still extract features accurately when the face carries an expression.
For convenience in describing the content of the invention, some terms are first defined.
Definition 1: contour feature points. The calibration points of higher curvature that characterize the shape contour of each facial organ.
Definition 2: sample normalization. Through operations such as translation, scaling, and rotation, the centers of gravity of all samples are aligned and their orientations and sizes made as close as possible, while each sample's own configuration is kept unchanged.
Definition 3: principal component analysis (PCA). By statistical analysis of the sample data, the data set is compressed from a high-dimensional space to a lower-dimensional space, achieving data compression.
Definition 4: eigenvalue decomposition. Generally obtained via singular value decomposition: an m × n matrix A can be written in the form A = USV', where U is an m-order orthogonal matrix, V is an n-order orthogonal matrix, S = diag(σ_1, σ_2, …, σ_r), σ_i > 0 (i = 1, …, r), r = rank(A). The columns of U and V are the singular vector sets of A, and the entries of S are the singular values of A. The orthonormal eigenvectors of AA' form U, with eigenvalues SS'; the orthonormal eigenvectors of A'A form V, with eigenvalues S'S.
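Definition 4 can be checked numerically; a minimal numpy sketch follows (the random test matrix is, of course, only illustrative):

```python
import numpy as np

# Definition 4, checked numerically: for an m x n matrix A with SVD
# A = U S V', the squared singular values are the eigenvalues of A A'
# (and of A' A), and A is recovered from the three factors.
A = np.random.rand(5, 3)                      # illustrative test matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

eig_AAt = np.linalg.eigvalsh(A @ A.T)         # ascending eigenvalues
print(np.allclose(np.sort(s**2), eig_AAt[-s.size:]))   # True
print(np.allclose(A, U @ np.diag(s) @ Vt))             # True
```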
Technical scheme of the present invention is as follows:
A facial feature positioning method based on an improved ASM algorithm comprises a model-building process and a match-search process. The model-building process, shown in Fig. 1, comprises the following steps:
Step 1: choose L pictures from a face database as the training sample set and manually calibrate their feature points.
Calibrate the feature points of each picture in the training sample set, partitioning the calibration work into two regions: the first part comprises the face outer contour and the mouth, with m feature points in total, giving the upper shape vector x_u = (x_1, y_1, …, x_m, y_m)^T; the second part comprises the eyes, eyebrows, and nose, with n feature points in total, giving the lower shape vector x_d = (x_1, y_1, …, x_n, y_n)^T. The upper shape vectors of all pictures in the training sample set form the upper shape vector set X_u = (x_u1, x_u2, …, x_uL), and the lower shape vectors of all pictures form the lower shape vector set X_d = (x_d1, x_d2, …, x_dL). The calibration points at the two ends of the face outer contour are subject to a constraint: they are required to lie roughly on the line connecting the two eye centers;
Step 2: apply the following normalization and alignment operations to the upper and lower shape vector sets X_u and X_d of step 1, respectively:
Step 2-1: convert each upper shape vector x_u and lower shape vector x_d in the sets X_u and X_d of step 1 into matrix form, of size m × 2 or n × 2 respectively, i.e. x_ui = [x_1, x_2, …, x_m; y_1, y_2, …, y_m]^T or x_di = [x_1, x_2, …, x_n; y_1, y_2, …, y_n]^T, i = 1, 2, …, L;
Step 2-2: select the upper and lower shape vectors of any one training sample as the initial mean shape x̄;
Step 2-3: align each remaining sample shape vector x_i to the mean shape x̄ by translation, rotation, and scaling. The translation factor is T = G_m − G_i, where G_m is the center of gravity of the mean shape x̄ and G_i is the center of gravity of the shape vector x_i, the center of gravity of a shape being the mean of its m (or n) calibration points. The rotation matrix R is obtained by singular value decomposition of the 2 × 2 matrix x_i^T x̄, both shapes having been translated so that their centers of gravity coincide: writing the decomposition as x_i^T x̄ = U Λ V^T, then R = U V^T. The scale factor S is obtained by a trace computation, S = tr(Λ) / tr(x_i^T x_i). After the translation vector T, rotation matrix R, and scale factor S of each sample shape vector x_i with respect to the mean shape x̄ have been computed, alignment is achieved by applying the corresponding transformation to x_i, which also yields a new training shape set.
Step 2-4: compute the new mean shape x̄ of the aligned sample set and compare it with the previous mean shape. If the percentage of changed points does not exceed the threshold T (T takes values in [5%, 10%]), execute step 3; if the percentage of changed points exceeds T, return to step 2-3.
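The alignment loop of steps 2-2 to 2-4 can be sketched in Python with numpy. This is a minimal illustration, not the patent's implementation: holding shapes as (m, 2) coordinate arrays and the 1e-3 movement cutoff are assumptions.

```python
import numpy as np

def align_to_mean(x, mean):
    """Step 2-3 sketch: Procrustes-align one (m, 2) shape to the mean.

    Translation = difference of centroids; rotation from an SVD of the
    2x2 cross matrix; scale from a trace ratio.
    """
    gx, gm = x.mean(axis=0), mean.mean(axis=0)     # centroids G_i, G_m
    xc, mc = x - gx, mean - gm                     # centered shapes
    U, lam, Vt = np.linalg.svd(xc.T @ mc)          # x_i^T x_bar = U L V^T
    R = U @ Vt                                     # rotation R = U V^T
    s = lam.sum() / np.trace(xc.T @ xc)            # scale by trace ratio
    return s * (xc @ R) + gm

def normalize_set(shapes, tol=0.05, max_iter=50):
    """Steps 2-2 to 2-4 sketch: iterate until the mean shape stabilizes."""
    mean = shapes[0].copy()                        # step 2-2: any sample
    for _ in range(max_iter):
        aligned = np.stack([align_to_mean(x, mean) for x in shapes])
        new_mean = aligned.mean(axis=0)
        # step 2-4: fraction of points that moved (1e-3 is an assumed cutoff)
        moved = np.linalg.norm(new_mean - mean, axis=1) > 1e-3
        mean = new_mean
        if moved.mean() <= tol:                    # threshold T in [5%, 10%]
            break
    return aligned, mean
```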
Step 3: apply the PCA operation to the new sample shape vectors produced by the normalization and alignment of step 2, and build the statistical shape model.
Step 3-1: compute the mean vector of all shape vectors in the new sample shape set, x̄ = (1/L) Σ_{i=1}^{L} x_i, and the covariance matrix Cov = (1/(L−1)) Σ_{i=1}^{L} (x_i − x̄)(x_i − x̄)^T;
Step 3-2: perform eigenvalue decomposition of the covariance matrix Cov to obtain its eigenvalues λ_k and eigenvectors P_k, which are required to satisfy Cov P_k = λ_k P_k and P_k^T P_k = 1, where 1 ≤ k ≤ 2m (or 2n). Sort the eigenvalues in descending order, then choose the eigenvectors corresponding to the first l eigenvalues as the shape feature vectors of the statistical shape model, where l is required to satisfy Σ_{k=1}^{l} λ_k ≥ α Σ_k λ_k; the weight factor α is usually taken as 0.98 or 0.95;
Step 3-3: obtain the statistical shape model x = x̄ + P a, where P is the face shape eigenvector matrix P = (P_1, P_2, …, P_l) and a is the corresponding shape weight vector a = (a_1, a_2, …, a_l)^T.
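A compact numpy sketch of step 3, assuming the aligned shapes have been flattened back to (L, 2m) row vectors (the inverse of step 2-1, as note 1 below points out):

```python
import numpy as np

def build_shape_model(shapes, alpha=0.98):
    """Step 3 sketch: PCA statistical shape model.

    shapes: (L, 2m) array of aligned, flattened shape vectors.
    Returns the mean, eigenvector matrix P, and retained eigenvalues,
    so that a shape is approximated as x = mean + P @ a (step 3-3).
    """
    mean = shapes.mean(axis=0)                       # step 3-1
    cov = np.cov(shapes, rowvar=False, ddof=1)       # covariance matrix

    lam, P = np.linalg.eigh(cov)                     # step 3-2
    order = np.argsort(lam)[::-1]                    # descending eigenvalues
    lam, P = lam[order], P[:, order]

    # keep the first l modes covering a fraction alpha of total variance
    l = int(np.searchsorted(np.cumsum(lam) / lam.sum(), alpha)) + 1
    return mean, P[:, :l], lam[:l]
```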
Step 4: build the local gray-level model.
For the j-th calibration point of the i-th sample image, take k pixels evenly on each side along the contour normal, centered at the calibration point (as shown in Fig. 4), with 2 ≤ k ≤ 7. This gives the (2k+1)-dimensional gray-level profile vector at that calibration point, written g_ij = (g_1, g_2, …, g_{2k+1})^T. To capture the distribution of its variation, a difference operation is applied to this profile vector, giving the 2k-dimensional difference vector d_ij = (g_2 − g_1, g_3 − g_2, …, g_{2k+1} − g_{2k})^T = (d_1, d_2, …, d_{2k})^T, which after normalization becomes u_ij = d_ij / Σ_{t=1}^{2k} |d_t|. Carrying out these operations at the corresponding calibration point of all training sample images and computing the mean vector u_j and covariance matrix C_j yields the local gray-level model of the j-th calibration point: u_j = (1/L) Σ_{i=1}^{L} u_ij, C_j = (1/(L−1)) Σ_{i=1}^{L} (u_ij − u_j)(u_ij − u_j)^T.
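Step 4 can be sketched as follows; nearest-pixel sampling along the normal and the epsilon guard in the normalization are assumptions not fixed by the patent text:

```python
import numpy as np

def gray_profile(image, point, normal, k=5):
    """Step 4 sketch: normalized gray-level profile at one landmark.

    image: 2-D gray array; point: (x, y) landmark; normal: unit contour
    normal. Nearest-pixel sampling is an assumption; the patent does not
    fix the interpolation scheme.
    """
    offs = np.arange(-k, k + 1)
    xs = np.clip(np.rint(point[0] + offs * normal[0]).astype(int),
                 0, image.shape[1] - 1)
    ys = np.clip(np.rint(point[1] + offs * normal[1]).astype(int),
                 0, image.shape[0] - 1)
    g = image[ys, xs].astype(float)          # (2k+1,) profile g_ij
    d = np.diff(g)                           # 2k-dim difference vector d_ij
    return d / (np.abs(d).sum() + 1e-12)     # normalized u_ij

def gray_model(profiles):
    """Mean u_j and covariance C_j over the L training profiles."""
    U = np.stack(profiles)                   # (L, 2k)
    return U.mean(axis=0), np.cov(U, rowvar=False, ddof=1)
```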
The match-search process, shown in Fig. 2, comprises the following steps:
Step 5: perform feature point positioning on the face picture under test:
Step 5-1: locate the face region using a face detection method based on the Adaboost algorithm, set the initial shape weight vector a = (0, 0, …, 0)^T, and obtain the initial upper and lower shape model instances for the face picture under test;
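For step 5-1, any AdaBoost-based face detector can supply the face box. The sketch below uses OpenCV's Haar cascade (an AdaBoost-trained detector) as a stand-in, and the way the mean shape is fitted into the detected box is an assumed convention rather than something the patent specifies:

```python
import cv2
import numpy as np

# Step 5-1 sketch: an AdaBoost-trained Haar cascade finds the face box.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def init_shape(image_gray, mean_shape):
    """Place the (m, 2) mean shape into the detected face box with
    shape weights a = 0; the box-fitting convention is assumed."""
    faces = cascade.detectMultiScale(image_gray,
                                     scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    s = mean_shape - mean_shape.min(axis=0)          # shift to origin
    s = s / s.max(axis=0) * np.array([w, h])         # stretch to box size
    return s + np.array([x, y])                      # move into the box
```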
Step 5-2: geometrically combine into one shape the initial upper and lower shape model instances of step 5-1, or the two new shape instances produced by the match search of step 5-4;
Step 5-3: measure the degree of match between the composite shape instance and the target face shape with the energy function F = ω_1 Σ_{i=1}^{m} D(v_ui, u_ui) + ω_2 Σ_{j=1}^{n} D(v_dj, u_dj), where v_ui is an upper-shape gray-level vector under test, v_dj is a lower-shape gray-level vector under test, u_ui is the mean vector of the upper-shape local gray-level model, u_dj is the mean vector of the lower-shape local gray-level model, D(v_ui, u_ui) is the Mahalanobis distance between v_ui and u_ui, and D(v_dj, u_dj) is the Mahalanobis distance between v_dj and u_dj. If F ≤ 0.1 or the preset maximum number of iterations is reached, the search ends and the feature point positioning result for the face picture under test is output; if F > 0.1 and the maximum number of iterations has not been reached, execute step 5-4;
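A sketch of the energy function of step 5-3; the weights ω_1 and ω_2 are parameters whose values the patent does not give, so the defaults below are placeholders:

```python
import numpy as np

def mahalanobis(v, u, C):
    """D(v, u) = (v - u)^T C^{-1} (v - u)."""
    diff = v - u
    return float(diff @ np.linalg.solve(C, diff))

def energy(v_up, models_up, v_dn, models_dn, w1=0.5, w2=0.5):
    """Step 5-3 sketch: F = w1 * sum of upper Mahalanobis distances
    + w2 * sum of lower ones; w1, w2 are placeholder weights."""
    F = w1 * sum(mahalanobis(v, u, C)
                 for v, (u, C) in zip(v_up, models_up))
    F += w2 * sum(mahalanobis(v, u, C)
                  for v, (u, C) in zip(v_dn, models_dn))
    return F
```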
Step 5-4: starting from the initial upper and lower shape model instances of step 5-1, independently carry out the feature point match search with the corresponding upper and lower local gray-level models built in the model-building process, as shown in Fig. 3. The concrete steps are as follows:
Step 5-4-1: centered on each feature point of the current shape instance X, take 2 to 7 pixels on each side along the contour normal direction, forming the group of gray-level vectors under test v_j (as shown in Fig. 4);
Step 5-4-2: compute the Mahalanobis distance D = (v_j − u_j)^T C_j^{-1} (v_j − u_j) between the local gray-level model and each gray-level vector in the group under test; the search yields the optimal position of each feature point. Record the position changes of all feature points, giving the change vector dX;
Step 5-4-3: the current shape instance X is obtained from the mean shape x̄ through a shape change and a pose change. The shape change produces the standard shape x by changing the shape weight vector a in the statistical shape model, i.e. x = x̄ + P a; the pose change produces X from the standard shape x through translation T, rotation θ, and scaling S: X = M(S, θ)[x] + T, where M(S, θ) denotes the rotation and scaling operation, S is the scale factor, θ the rotation factor, and T the translation factor. Hence the relation X + dX = M(S + dS, θ + dθ)[x + dx] + T + dT holds, from which the change of the standard shape x is derived as dx = M((S + dS)^{-1}, −(θ + dθ))[M(S, θ)[x] + dX − dT] − x;
Step 5-4-4: from the statistical shape model x = x̄ + P a we have x + dx = x̄ + P(a + da); approximating dx = P da gives the shape weight increment da = P^T dx (the columns of P are orthonormal, so P^T acts as the inverse of P). Substituting into the statistical shape model once more gives the new standard shape x' = x̄ + P(a + da);
Step 5-4-5: apply the translation T, rotation θ, and scaling S operations to the new standard shape, obtaining the new shape instance X' = M(S, θ)[x'] + T. Return to step 5-4-1 and iterate until the change in the Mahalanobis distance D between the local gray-level model and each gray-level vector in the group under test is less than 0.01 or the preset maximum number of iterations is reached, then go to step 5-2.
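The inner search loop of step 5-4 can be sketched as follows, reusing gray_profile and mahalanobis from the earlier sketches. For brevity the pose increments dS, dθ, and dT of step 5-4-3 are ignored here, which is a simplification of the patent's derivation:

```python
import numpy as np

def search_landmark(image, point, normal, u, C, k=5, search=8):
    """Steps 5-4-1/5-4-2 sketch: slide the profile window along the
    normal and keep the offset whose profile minimizes the Mahalanobis
    distance to the local gray-level model (u, C)."""
    best_off, best_D = 0, np.inf
    for off in range(-search, search + 1):
        v = gray_profile(image, point + off * normal, normal, k)
        D = mahalanobis(v, u, C)
        if D < best_D:
            best_off, best_D = off, D
    return point + best_off * normal, best_D

def update_shape(dX, mean, P, a, pose):
    """Steps 5-4-3 to 5-4-5 sketch, ignoring the pose increments
    dS, d-theta, dT: map the image-frame move dX into the model frame,
    update the shape weights (da = P^T dx, the columns of P being
    orthonormal), and re-instantiate the shape."""
    s, R, T = pose                            # scale, rotation, translation
    dx = (dX @ R.T).ravel() / s               # image frame -> model frame
    a = a + P.T @ dx                          # new weights a + da
    x_new = (mean + P @ a).reshape(-1, 2)     # x' = mean + P (a + da)
    return s * (x_new @ R) + T, a             # X' = M(S, theta)[x'] + T
```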
The following should be noted:
1. When computing the mean vector x̄ of all shape vectors in the new shape vector set in step 3-1, the matrix form of step 2 is assumed to have been converted back to vector form, i.e. the inverse operation of step 2-1 is applied.
2. The number of pixels sampled on the target image in step 5-4-1 should not be too large, so that the search ranges of neighboring feature points do not cross and produce a deformed model instance.
The present invention improves the traditional ASM method by dividing the facial features according to the correlation of their variations and independently building the statistical shape models and local gray-level models. During feature point extraction on the face image under test, an energy function is introduced to constrain the error of the composite of the upper and lower model instances generated in each iteration, finally yielding an accurate feature extraction result. The invention makes full use of the advantages of the ASM algorithm in facial feature extraction and, through this divide-and-combine treatment of the model-building and search processes, further improves the feature positioning accuracy of the ASM algorithm when the face carries an expression.
Description of drawings
Fig. 1 is the flow chart for building the face statistical shape model.
Fig. 2 is the overall flow chart of the feature point match search.
Fig. 3 is the flow chart of the feature point search within the upper and lower models.
Fig. 4 is a schematic diagram of how the gray-level vectors under test are chosen during the search.
Fig. 5 is a schematic diagram of the sliding search of the gray-level vector.
Embodiment
The implementation of the invention was first realized and simulated as an algorithm on the Matlab platform.
In the model-building process, 240 images of 40 people from the imm_face_db face database, covering illumination, expression, and pose variations, were used as sample images to build the models. In the feature point calibration, the upper model comprises 37 calibration points and the lower model 23 calibration points. The resulting upper-model mean shape and eigenvectors are 74 × 1 vectors, of which 16 eigenvectors are needed; the resulting lower-model mean shape and eigenvectors are 46 × 1 vectors, of which 12 eigenvectors are needed. Taking 5 pixel values on each side of every calibrated feature point, the resulting local gray-level model is a 10 × 1 vector.
In the match-search process, the maximum number of iterations, both for the overall loop and for the independent matching of the upper and lower models, is set to 20. To choose the gray-level vectors under test, 8 points are sampled on each side of every feature point, so the group of vectors under test at each feature point contains 8 vectors. The optimal match point is obtained by sliding the vectors under test and computing their Mahalanobis distances to the local gray-level model; with the error constraint of the energy function, the feature positioning information of the whole face is finally obtained.

Claims (3)

1. A facial feature positioning method based on an improved ASM algorithm, comprising a model-building process and a match-search process, wherein the model-building process comprises the following steps:
Step 1: choose L pictures from a face database as the training sample set and manually calibrate their feature points.
Calibrate the feature points of each picture in the training sample set, partitioning the calibration work into two regions: the first part comprises the face outer contour and the mouth, with m feature points in total, giving the upper shape vector x_u = (x_1, y_1, …, x_m, y_m)^T; the second part comprises the eyes, eyebrows, and nose, with n feature points in total, giving the lower shape vector x_d = (x_1, y_1, …, x_n, y_n)^T. The upper shape vectors of all pictures in the training sample set form the upper shape vector set X_u = (x_u1, x_u2, …, x_uL), and the lower shape vectors of all pictures form the lower shape vector set X_d = (x_d1, x_d2, …, x_dL). The calibration points at the two ends of the face outer contour are subject to a constraint: they are required to lie roughly on the line connecting the two eye centers;
Step 2: apply the following normalization and alignment operations to the upper and lower shape vector sets X_u and X_d of step 1, respectively:
Step 2-1: convert each upper shape vector x_u and lower shape vector x_d in the sets X_u and X_d of step 1 into matrix form, of size m × 2 or n × 2 respectively, i.e. x_ui = [x_1, x_2, …, x_m; y_1, y_2, …, y_m]^T or x_di = [x_1, x_2, …, x_n; y_1, y_2, …, y_n]^T, i = 1, 2, …, L;
Step 2-2: select the upper and lower shape vectors of any one training sample as the initial mean shape x̄;
Step 2-3: align each remaining sample shape vector x_i to the mean shape x̄ by translation, rotation, and scaling. The translation factor is T = G_m − G_i, where G_m is the center of gravity of the mean shape x̄ and G_i is the center of gravity of the shape vector x_i, the center of gravity of a shape being the mean of its m (or n) calibration points. The rotation matrix R is obtained by singular value decomposition of the 2 × 2 matrix x_i^T x̄, both shapes having been translated so that their centers of gravity coincide: writing the decomposition as x_i^T x̄ = U Λ V^T, then R = U V^T. The scale factor S is obtained by a trace computation, S = tr(Λ) / tr(x_i^T x_i). After the translation vector T, rotation matrix R, and scale factor S of each sample shape vector x_i with respect to the mean shape x̄ have been computed, alignment is achieved by applying the corresponding transformation to x_i, which also yields a new training shape set.
Step 2-4: compute the new mean shape x̄ of the aligned sample set and compare it with the previous mean shape. If the percentage of changed points does not exceed the threshold T, execute step 3; if the percentage of changed points exceeds T, return to step 2-3.
Step 3: apply the PCA operation to the new sample shape vectors produced by the normalization and alignment of step 2, and build the statistical shape model.
Step 3-1: compute the mean vector of all shape vectors in the new sample shape set, x̄ = (1/L) Σ_{i=1}^{L} x_i, and the covariance matrix Cov = (1/(L−1)) Σ_{i=1}^{L} (x_i − x̄)(x_i − x̄)^T;
Step 3-2: perform eigenvalue decomposition of the covariance matrix Cov to obtain its eigenvalues λ_k and eigenvectors P_k, which are required to satisfy Cov P_k = λ_k P_k and P_k^T P_k = 1, where 1 ≤ k ≤ 2m (or 2n). Sort the eigenvalues in descending order, then choose the eigenvectors corresponding to the first l eigenvalues as the shape feature vectors of the statistical shape model, where l is required to satisfy Σ_{k=1}^{l} λ_k ≥ α Σ_k λ_k; the weight factor α is usually taken as 0.98 or 0.95;
Step 3-3: obtain the statistical shape model x = x̄ + P a, where P is the face shape eigenvector matrix P = (P_1, P_2, …, P_l) and a is the corresponding shape weight vector a = (a_1, a_2, …, a_l)^T.
Step 4: build the local gray-level model.
For the j-th calibration point of the i-th sample image, take k pixels evenly on each side along the contour normal, centered at the calibration point, with 2 ≤ k ≤ 7. This gives the (2k+1)-dimensional gray-level profile vector at that calibration point, written g_ij = (g_1, g_2, …, g_{2k+1})^T. To capture the distribution of its variation, a difference operation is applied to this profile vector, giving the 2k-dimensional difference vector d_ij = (g_2 − g_1, g_3 − g_2, …, g_{2k+1} − g_{2k})^T = (d_1, d_2, …, d_{2k})^T, which after normalization becomes u_ij = d_ij / Σ_{t=1}^{2k} |d_t|. Carrying out these operations at the corresponding calibration point of all training sample images and computing the mean vector u_j and covariance matrix C_j yields the local gray-level model of the j-th calibration point: u_j = (1/L) Σ_{i=1}^{L} u_ij, C_j = (1/(L−1)) Σ_{i=1}^{L} (u_ij − u_j)(u_ij − u_j)^T;
The match-search process comprises the following steps:
Step 5: perform feature point positioning on the face picture under test:
Step 5-1: locate the face region using a face detection method based on the Adaboost algorithm, set the initial shape weight vector a = (0, 0, …, 0)^T, and obtain the initial upper and lower shape model instances for the face picture under test;
Step 5-2: geometrically combine into one shape the initial upper and lower shape model instances of step 5-1, or the two new shape instances produced by the match search of step 5-4;
Step 5-3: measure the degree of match between the composite shape instance and the target face shape with the energy function F = ω_1 Σ_{i=1}^{m} D(v_ui, u_ui) + ω_2 Σ_{j=1}^{n} D(v_dj, u_dj), where v_ui is an upper-shape gray-level vector under test, v_dj is a lower-shape gray-level vector under test, u_ui is the mean vector of the upper-shape local gray-level model, u_dj is the mean vector of the lower-shape local gray-level model, D(v_ui, u_ui) is the Mahalanobis distance between v_ui and u_ui, and D(v_dj, u_dj) is the Mahalanobis distance between v_dj and u_dj. If F ≤ 0.1 or the preset maximum number of iterations is reached, the search ends and the feature point positioning result for the face picture under test is output; if F > 0.1 and the maximum number of iterations has not been reached, execute step 5-4;
Step 5-4: starting from the initial upper and lower shape model instances of step 5-1, independently carry out the feature point match search with the corresponding upper and lower local gray-level models built in the model-building process. The concrete steps are as follows:
Step 5-4-1: centered on each feature point of the current shape instance X, take 2 to 10 pixels on each side along the contour normal direction, forming the group of gray-level vectors under test v_j;
Step 5-4-2: compute the Mahalanobis distance D = (v_j − u_j)^T C_j^{-1} (v_j − u_j) between the local gray-level model and each gray-level vector in the group under test; the search yields the optimal position of each feature point. Record the position changes of all feature points, giving the change vector dX;
Step 5-4-3: the current shape instance X is obtained from the mean shape x̄ through a shape change and a pose change. The shape change produces the standard shape x by changing the shape weight vector a in the statistical shape model, i.e. x = x̄ + P a; the pose change produces X from the standard shape x through translation T, rotation θ, and scaling S: X = M(S, θ)[x] + T, where M(S, θ) denotes the rotation and scaling operation, S is the scale factor, θ the rotation factor, and T the translation factor. Hence the relation X + dX = M(S + dS, θ + dθ)[x + dx] + T + dT holds, from which the change of the standard shape x is derived as dx = M((S + dS)^{-1}, −(θ + dθ))[M(S, θ)[x] + dX − dT] − x;
Step 5-4-4: from the statistical shape model x = x̄ + P a we have x + dx = x̄ + P(a + da); approximating dx = P da gives the shape weight increment da = P^T dx (the columns of P are orthonormal, so P^T acts as the inverse of P). Substituting into the statistical shape model once more gives the new standard shape x' = x̄ + P(a + da);
Step 5-4-5: apply the translation T, rotation θ, and scaling S operations to the new standard shape, obtaining the new shape instance X' = M(S, θ)[x'] + T. Return to step 5-4-1 and iterate until the change in the Mahalanobis distance D between the local gray-level model and each gray-level vector in the group under test is less than 0.01 or the preset maximum number of iterations is reached, then go to step 5-2.
2. The facial feature positioning method based on an improved ASM algorithm according to claim 1, characterized in that the threshold T of step 2-4 takes values in the range [5%, 10%].
3. The facial feature positioning method based on an improved ASM algorithm according to claim 1, characterized in that the maximum number of iterations in step 5-3 and step 5-4-5 is 20.
CN2011101674084A 2011-06-21 2011-06-21 Method for positioning facial features based on improved ASM (Active Shape Model) algorithm Pending CN102214299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011101674084A CN102214299A (en) 2011-06-21 2011-06-21 Method for positioning facial features based on improved ASM (Active Shape Model) algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011101674084A CN102214299A (en) 2011-06-21 2011-06-21 Method for positioning facial features based on improved ASM (Active Shape Model) algorithm

Publications (1)

Publication Number Publication Date
CN102214299A true CN102214299A (en) 2011-10-12

Family

ID=44745599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011101674084A Pending CN102214299A (en) 2011-06-21 2011-06-21 Method for positioning facial features based on improved ASM (Active Shape Model) algorithm

Country Status (1)

Country Link
CN (1) CN102214299A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116764A (en) * 2013-03-02 2013-05-22 西安电子科技大学 Brain cognitive state judgment method based on polyteny principal component analysis
CN103136513A (en) * 2013-02-05 2013-06-05 山东神思电子技术股份有限公司 Improved automatic storage management (ASM) facial feature point locating method
CN104765739A (en) * 2014-01-06 2015-07-08 南京宜开数据分析技术有限公司 Large-scale face database searching method based on shape space
CN104992151A (en) * 2015-06-29 2015-10-21 华侨大学 Age estimation method based on TFIDF face image
CN106022215A (en) * 2016-05-05 2016-10-12 北京海鑫科金高科技股份有限公司 Face feature point positioning method and device
CN106682575A (en) * 2016-11-21 2017-05-17 广东工业大学 Human eye point cloud feature location with ELM (Eye Landmark Model) algorithm
CN106875442A (en) * 2016-12-26 2017-06-20 上海蔚来汽车有限公司 Vehicle positioning method based on image feature data
CN106980809A (en) * 2016-01-19 2017-07-25 深圳市朗驰欣创科技股份有限公司 A kind of facial feature points detection method based on ASM
CN108400972A (en) * 2018-01-30 2018-08-14 北京兰云科技有限公司 A kind of method for detecting abnormality and device
CN108510583A (en) * 2018-04-03 2018-09-07 北京华捷艾米科技有限公司 The generation method of facial image and the generating means of facial image
CN109284729A (en) * 2018-10-08 2019-01-29 北京影谱科技股份有限公司 Method, apparatus and medium based on video acquisition human face recognition model training data
CN110826534A (en) * 2019-11-30 2020-02-21 杭州趣维科技有限公司 Face key point detection method and system based on local principal component analysis

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731416A (en) * 2005-08-04 2006-02-08 上海交通大学 Method of quick and accurate human face feature point positioning
CN1794265A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and device for distinguishing face expression based on video frequency
CN101593272A (en) * 2009-06-18 2009-12-02 电子科技大学 A kind of human face characteristic positioning method based on the ASM algorithm

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731416A (en) * 2005-08-04 2006-02-08 上海交通大学 Method of quick and accurate human face feature point positioning
CN1794265A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and device for distinguishing face expression based on video frequency
CN101593272A (en) * 2009-06-18 2009-12-02 电子科技大学 A kind of human face characteristic positioning method based on the ASM algorithm

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136513B (en) * 2013-02-05 2015-11-11 山东神思电子技术股份有限公司 A kind of ASM man face characteristic point positioning method of improvement
CN103136513A (en) * 2013-02-05 2013-06-05 山东神思电子技术股份有限公司 Improved automatic storage management (ASM) facial feature point locating method
CN103116764A (en) * 2013-03-02 2013-05-22 西安电子科技大学 Brain cognitive state judgment method based on polyteny principal component analysis
CN103116764B (en) * 2013-03-02 2016-10-05 西安电子科技大学 A kind of brain cognitive state decision method based on polyteny pivot analysis
CN104765739B (en) * 2014-01-06 2018-11-02 南京宜开数据分析技术有限公司 Extensive face database search method based on shape space
CN104765739A (en) * 2014-01-06 2015-07-08 南京宜开数据分析技术有限公司 Large-scale face database searching method based on shape space
CN104992151A (en) * 2015-06-29 2015-10-21 华侨大学 Age estimation method based on TFIDF face image
CN106980809A (en) * 2016-01-19 2017-07-25 深圳市朗驰欣创科技股份有限公司 A kind of facial feature points detection method based on ASM
CN106022215A (en) * 2016-05-05 2016-10-12 北京海鑫科金高科技股份有限公司 Face feature point positioning method and device
CN106022215B (en) * 2016-05-05 2019-05-03 北京海鑫科金高科技股份有限公司 Man face characteristic point positioning method and device
CN106682575A (en) * 2016-11-21 2017-05-17 广东工业大学 Human eye point cloud feature location with ELM (Eye Landmark Model) algorithm
CN106875442A (en) * 2016-12-26 2017-06-20 上海蔚来汽车有限公司 Vehicle positioning method based on image feature data
CN108400972A (en) * 2018-01-30 2018-08-14 北京兰云科技有限公司 A kind of method for detecting abnormality and device
CN108510583A (en) * 2018-04-03 2018-09-07 北京华捷艾米科技有限公司 The generation method of facial image and the generating means of facial image
CN108510583B (en) * 2018-04-03 2019-10-11 北京华捷艾米科技有限公司 The generation method of facial image and the generating means of facial image
CN109284729A (en) * 2018-10-08 2019-01-29 北京影谱科技股份有限公司 Method, apparatus and medium based on video acquisition human face recognition model training data
CN110826534A (en) * 2019-11-30 2020-02-21 杭州趣维科技有限公司 Face key point detection method and system based on local principal component analysis
CN110826534B (en) * 2019-11-30 2022-04-05 杭州小影创新科技股份有限公司 Face key point detection method and system based on local principal component analysis

Similar Documents

Publication Publication Date Title
CN102214299A (en) Method for positioning facial features based on improved ASM (Active Shape Model) algorithm
CN106326886B (en) Finger vein image quality appraisal procedure based on convolutional neural networks
CN104517104B (en) A kind of face identification method and system based under monitoring scene
CN101593272B (en) Human face feature positioning method based on ASM algorithm
CN106778468B (en) 3D face identification method and equipment
CN100557624C (en) Face identification method based on the multicomponent and multiple characteristics fusion
CN105447441B (en) Face authentication method and device
CN100557625C (en) Face identification method and device thereof that face component feature and Gabor face characteristic merge
CN102332084B (en) Identity identification method based on palm print and human face feature extraction
CN105138972A (en) Face authentication method and device
CN103632132A (en) Face detection and recognition method based on skin color segmentation and template matching
CN102034288A (en) Multiple biological characteristic identification-based intelligent door control system
CN105138968A (en) Face authentication method and device
CN101630364A (en) Method for gait information processing and identity identification based on fusion feature
CN103268497A (en) Gesture detecting method for human face and application of gesture detecting method in human face identification
Bharadi et al. Off-line signature recognition systems
CN105117708A (en) Facial expression recognition method and apparatus
CN103839033A (en) Face identification method based on fuzzy rule
CN105760815A (en) Heterogeneous human face verification method based on portrait on second-generation identity card and video portrait
CN105354555A (en) Probabilistic graphical model-based three-dimensional face recognition method
Zuobin et al. Feature regrouping for cca-based feature fusion and extraction through normalized cut
CN106778491B (en) The acquisition methods and equipment of face 3D characteristic information
CN103714340A (en) Self-adaptation feature extracting method based on image partitioning
CN103942545A (en) Method and device for identifying faces based on bidirectional compressed data space dimension reduction
CN103942572A (en) Method and device for extracting facial expression features based on bidirectional compressed data space dimension reduction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20111012