CN102880866A - Method for extracting face features - Google Patents

Method for extracting face features

Info

Publication number
CN102880866A
Authority
CN
China
Prior art keywords
parameter
depth
image
aam
apparent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103767514A
Other languages
Chinese (zh)
Other versions
CN102880866B (en)
Inventor
赵杰煜
金秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201210376751.4A priority Critical patent/CN102880866B/en
Publication of CN102880866A publication Critical patent/CN102880866A/en
Application granted granted Critical
Publication of CN102880866B publication Critical patent/CN102880866B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting face features. Body posture analysis data and depth data provided by a Kinect camera are combined with a Depth-Active Appearance Model (Depth-AAM) algorithm, forming a method based on 2.5-dimensional images. The method comprises the steps of training the appearance model of the Depth-AAM algorithm with the principal component analysis method, and then extracting face features based on the trained appearance model of the Depth-AAM algorithm.

Description

A method for extracting face features
Technical field
The present invention relates to the field of image analysis technology, and specifically to a method for extracting face features.
Background technology
Facial feature extraction is the task of automatically locating each facial organ in a face image by computer, extracting feature points for the eyes, nose, mouth, the face contour and so on. The extracted features provide basic data for research work such as face recognition, expression and posture analysis, and face tracking. Many existing feature extraction algorithms, such as principal component analysis (PCA), local binary patterns (LBP), linear discriminant analysis (LDA) and the Gabor wavelet transform, can be used to extract facial features, but these methods only work well under restricted conditions (suitable lighting, posture, make-up and expression), and the information they obtain is low-level and complicated, making it difficult to achieve good results in face recognition and clustering.
The Active Appearance Model (AAM) has been successfully applied in many fields, including face modeling, human eye modeling, facial expression recognition, image segmentation and analysis, pose estimation, face tracking and gesture recognition. Facial feature localization algorithms can roughly be divided into two classes according to the data dimension they use: localization based on two-dimensional images and localization based on three-dimensional images. Because of the inherent limitations of existing face detection and segmentation techniques, localization based on two-dimensional images is strongly affected by illumination, background and the subject's pose. Localization based on three-dimensional images uses expensive 3D scanners to generate three-dimensional face images and relies on curvature computation and global registration algorithms; its demands on computing and processing equipment are too high for practical deployment. The AAM is one of the most widely used methods for facial feature localization based on two-dimensional images.
The Kinect camera went on the market in the United States in 2010. This compact, inexpensive depth camera can run at more than 200 frames per second on commodity hardware and can accurately track and segment face images under complex backgrounds and varying subject poses, bringing a series of revolutionary changes to fields such as computer vision, computer graphics and human-computer interaction. Although the Kinect camera has been widely used for human posture analysis and recognition, so far no method has used the body posture analysis data and depth data it provides to locate and extract facial features.
Summary of the invention
The technical problem to be solved by this invention is to provide a method that fuses the body posture analysis data and depth data provided by a Kinect camera into a Depth-AAM algorithm, forming a face feature extraction method based on 2.5-dimensional images.
The technical scheme of the present invention provides a face feature extraction method comprising the following steps.
1) Train the appearance model of the Depth-AAM algorithm with the principal component analysis method:
① Use the Kinect camera to collect the texture image and depth image of each training face image, compress the depth image from the 0–65535 pixel range to the 0–255 pixel range, substitute it into the α channel of a four-channel image and merge it with the texture image into an RGBD four-channel image, then manually calibrate a number of feature points on it;
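A minimal sketch of this packing step in Python (the linear 16-bit-to-8-bit rescaling and the array names are illustrative assumptions; the patent only specifies compressing the 0–65535 range to 0–255 and storing the result as the fourth channel):

```python
import numpy as np

def pack_rgbd(texture, depth):
    """Compress a 16-bit Kinect depth map to 8 bits and append it
    as a fourth (alpha) channel behind the RGB texture image."""
    # Linear rescale 0..65535 -> 0..255; the exact mapping is our choice.
    depth8 = (depth.astype(np.float32) * 255.0 / 65535.0).astype(np.uint8)
    # Stack into an H x W x 4 RGBD image with depth in the alpha slot.
    return np.dstack([texture, depth8])

# texture: H x W x 3 uint8, depth: H x W uint16 from the Kinect
# rgbd = pack_rgbd(texture, depth)
```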
② Define the face shape as the shape vector formed by the v vertex coordinates of the mesh, s = (x_1, y_1, …, x_v, y_v)^T. Build a two-dimensional linear model with the principal component analysis method, expressing the shape vector as a base shape s_0 plus a linear combination of m shape vectors s_i:

s = s_0 + Σ_{i=1}^{m} p_i·s_i

where p = (p_1, …, p_m)^T is the eigenvalue vector of the shape matrix, s_0 is the standard pose of the face image, and s_i is the eigenvector corresponding to eigenvalue p_i;
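A sketch of this shape model under the usual AAM assumptions (training shapes already aligned and stacked as rows; the function and variable names are ours):

```python
import numpy as np

def train_shape_model(shapes, m):
    """PCA shape model. shapes: (N, 2v) array whose rows are
    (x_1, y_1, ..., x_v, y_v). Returns the mean shape s_0 and
    the m leading shape vectors s_i (one per row)."""
    s0 = shapes.mean(axis=0)
    # SVD of the centered data gives the principal shape directions.
    _, _, vt = np.linalg.svd(shapes - s0, full_matrices=False)
    return s0, vt[:m]

def synthesize_shape(s0, shape_vectors, p):
    """s = s_0 + sum_i p_i * s_i for a parameter vector p."""
    return s0 + p @ shape_vectors
```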
③ Transform s_0, each RGBD four-channel image I_i and its corresponding manual markup s_i* into the RGBD four-channel face image under the standard pose using a piecewise affine transform, i.e. put the triangular meshes of s_0 and s_i' into one-to-one correspondence. The piecewise affine transform is x' = a_1·x + a_2·y + a_3 and y' = b_1·x + b_2·y + b_3, where (x, y) is a coordinate on s_0 and (x', y') is the corresponding coordinate on s_i'; a_1 and b_2 are the scaling factors in the X and Y directions, a_2 and b_1 are rotation factors, and a_3 and b_3 are the translations in the X and Y directions. The corresponding parameters (a_1, a_2, a_3, b_1, b_2, b_3) are obtained with the method of undetermined coefficients;
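The method of undetermined coefficients reduces to a 3×3 linear solve per triangle, since three vertex correspondences fix the six coefficients; a sketch (the triangle arrays are assumed to be (3, 2) vertex lists):

```python
import numpy as np

def affine_coeffs(src_tri, dst_tri):
    """Solve x' = a1*x + a2*y + a3 and y' = b1*x + b2*y + b3 from
    the three vertex correspondences of one mesh triangle."""
    A = np.hstack([src_tri, np.ones((3, 1))])   # rows (x, y, 1)
    a = np.linalg.solve(A, dst_tri[:, 0])       # (a1, a2, a3)
    b = np.linalg.solve(A, dst_tri[:, 1])       # (b1, b2, b3)
    return a, b
```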
④ Pass all training face images through the transformation of step ③ to obtain their face images I_i' under the standard pose, and apply the principal component analysis method:

A(x) = A_0(x) + Σ_{i=1}^{n} λ_i·A_i(x)

Here λ_i is the parameter of the i-th appearance vector; the appearance parameter vector λ = {λ_1, λ_2, …, λ_n} is the set of eigenvalues of the input image under the appearance parameters of this AAM model and represents the full information of the input image; the i-th appearance vector A_i(x) corresponds to the i-th largest eigenvalue in the appearance parameter vector.
Training through steps ①②③④ yields the eigenvalues of each appearance feature and completes the training of the appearance model of the Depth-AAM algorithm.
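The appearance side of the training is the same PCA applied to the shape-normalized RGBD images; a sketch, with the warped images flattened to row vectors (names are ours):

```python
import numpy as np

def train_appearance_model(warped_images, n):
    """warped_images: (N, d) rows are shape-normalized RGBD faces
    flattened to vectors. Returns the mean appearance A_0 and the
    n leading appearance vectors A_i, so that
    A(x) = A_0(x) + sum_i lambda_i * A_i(x)."""
    A0 = warped_images.mean(axis=0)
    _, _, vt = np.linalg.svd(warped_images - A0, full_matrices=False)
    return A0, vt[:n]

def appearance_params(A0, basis, image_vec):
    """Orthonormal basis rows => lambda is a simple projection."""
    return basis @ (image_vec - A0)
```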
2) Extract facial features based on the trained appearance model of the Depth-AAM algorithm:
⑤ Segment the human body image with the Kinect for Windows SDK API and obtain the head node position coordinates, the head joint rotation direction θ and its confidence Conf_θ from the body depth image. In the body depth image, set the body region to white and the region outside the body to black; starting from the head node position, continuously enlarge the search range upward, downward, leftward and rightward at the same time; when the upper, left and right edges have all reached the black border, stop the search of the lower boundary and take the resulting maximum region as the face region. Its top-left corner coordinates are denoted (x_HeadLU, y_HeadLU), and the region length and width are denoted (length_Head, width_Head);
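One way to realize the region growth of step ⑤, assuming a boolean body mask and the head joint pixel (cx, cy) = (column, row) as the seed (the growth order is our choice):

```python
import numpy as np

def head_box(mask, cx, cy):
    """Grow a box around the head joint on a binary body mask
    (True = body). The lower boundary stops being searched once the
    upper, left and right edges have all reached the background."""
    h, w = mask.shape
    top, bottom, left, right = cy, cy, cx, cx
    while True:
        can_up = top > 0 and mask[top - 1, left:right + 1].any()
        can_left = left > 0 and mask[top:bottom + 1, left - 1].any()
        can_right = right < w - 1 and mask[top:bottom + 1, right + 1].any()
        can_down = bottom < h - 1 and mask[bottom + 1, left:right + 1].any()
        if not (can_up or can_left or can_right):
            break                      # sides hit black: stop the search
        if can_up: top -= 1
        if can_left: left -= 1
        if can_right: right += 1
        if can_down: bottom += 1
    # (x_HeadLU, y_HeadLU) and (length_Head, width_Head)
    return (left, top), (bottom - top + 1, right - left + 1)
```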
⑥ The global shape parameter q is defined by

N(x; q) = [1+a, −b; b, 1+a]·(x, y)^T + (t_x, t_y)^T

where the parameters (a, b) are given by a = k·cosθ − 1 and b = k·sinθ, and (t_x, t_y) is the translation in the X and Y directions; for convenience of writing, (a, b, t_x, t_y) is denoted (q_1, q_2, q_3, q_4) and constitutes the global shape parameter q;
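A sketch of N(x; q) and its initialization from the Kinect head pose (defaulting the scale k to 1 is our assumption):

```python
import math
import numpy as np

def global_transform(points, q):
    """Apply N(x;q) with q = (a, b, tx, ty): a similarity transform
    with a = k*cos(theta) - 1 and b = k*sin(theta)."""
    a, b, tx, ty = q
    M = np.array([[1 + a, -b],
                  [b, 1 + a]])
    return points @ M.T + np.array([tx, ty])   # points: (N, 2)

def init_q(theta, tx, ty, k=1.0):
    """Initialize q_1, q_2 from the head rotation direction and
    q_3, q_4 from the face position found in step 5."""
    return (k * math.cos(theta) - 1.0, k * math.sin(theta), tx, ty)
```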
⑦ The objective function of the Depth-AAM fit is the squared difference between the input image and the model-synthesized image,

E = Σ_{x∈s_0} [A_0(x) + Σ_{i=1}^{n} λ_i·A_i(x) − I(N(W(x;p);q))]^2

The first Depth-AAM fit uses the rotation direction θ to initialize q_1 and q_2, and the face position information obtained in step ⑤ to initialize q_3 and q_4; p represents the standard pose after initialization. Solving for the parameters p and q that minimize the image energy difference yields the coordinate values of the calibration points and the appearance parameter vector, which completes the facial feature extraction. Specifically, differentiate with respect to p and q to obtain the parameter increments Δp and Δq, and iterate to minimize the image energy difference:

Δq = H^{-1} Σ_{x∈s_0} SD_i(x)^T [I(N(W(x;p);q)) − A_0(x)], i = 1, …, 4,

Δp = H^{-1} Σ_{x∈s_0} SD_{j+4}(x)^T [I(N(W(x;p);q)) − A_0(x)], j = 1, …, 68,

where the steepest-descent images are SD_k(x) = ∇A_0(x)·∂N/∂q_k for the global parameters, k = 1, …, 4, and SD_{l+4}(x) = ∇A_0(x)·∂W/∂p_l for the shape parameters, l = 1, …, 68, and H is the Hessian matrix,

H = Σ_{x∈s_0} [SD_k(x)]^T [SD_k(x)].
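A schematic Gauss-Newton loop for these updates. The steepest-descent images and the Hessian are precomputed, as in the inverse compositional algorithm the patent cites; for readability this sketch applies the increment as a plain decrement rather than the inverse composition, and `sample(params)` (our name) must return the input image sampled at I(N(W(x;p);q)) as a flat vector:

```python
import numpy as np

def fit_depth_aam(sample, sd, A0, params, max_iter=30, tol=1e-3):
    """Iterate delta = H^-1 * SD * [I(N(W(x;p);q)) - A_0] until the
    parameter change drops below tol (the 0.001 criterion). sd holds
    one steepest-descent image per row: 4 global rows, then shape rows."""
    H = sd @ sd.T                       # Hessian: sums SD^T SD over x
    H_inv = np.linalg.inv(H)
    for _ in range(max_iter):
        error = sample(params) - A0     # image difference over s_0
        delta = H_inv @ (sd @ error)    # stacked (dq, dp) increments
        params = params - delta         # simplified composition rule
        if np.linalg.norm(delta) < tol:
            break
    return params
```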
A layered pyramid algorithm is adopted: the segmented face image is reduced to 1/4 and 1/2 of its original size. A first Depth-AAM fit is run on the 1/4-size target face image, giving a rough shape parameter p_1 and a global deformation parameter q; p_1 is magnified 2 times and substituted into the Depth-AAM for a second fit, giving shape parameter p_2 and global deformation parameter q_2; p_2 is again magnified 2 times and substituted into the Depth-AAM for a third fit, giving p_3, which contains the 68 point coordinates obtained by the fit, while the vector λ is the appearance parameter. The pyramid algorithm has 3 layers, the maximum number of iterations per layer is 30, and the iteration is considered converged when the modulus of the difference between two successive values of p is less than 0.001.
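A coarse-to-fine driver matching this three-layer scheme, with any single-level fitter plugged in (the `fit` signature and the treatment of p as a coordinate array are illustrative assumptions):

```python
import cv2

def pyramid_fit(face_rgbd, fit, p, q, levels=3):
    """Fit at 1/4 and 1/2 scale before full resolution, doubling the
    shape estimate between levels as described above."""
    for level in reversed(range(levels)):     # level 2 -> 1/4 size
        scale = 1.0 / (2 ** level)
        small = cv2.resize(face_rgbd, None, fx=scale, fy=scale)
        p, q = fit(small, p, q)               # up to 30 iterations each
        if level > 0:
            p = p * 2                         # propagate to next level
    return p, q                               # p: 68 points, plus q
```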
Compared with the prior art, the present invention has the following notable advantages and beneficial effects. The invention is based on the Kinect camera and adopts the latest human posture recognition algorithms, which run at more than 200 frames per second on commodity hardware and accurately track and segment face images under complex backgrounds and varying subject poses; the estimated 3D head pose and its confidence guide the global shape transform of the Depth-AAM. The Depth-AAM algorithm combines the texture image with the error-prone depth information into four-channel RGBD data, all of which is used as input for training the Depth-AAM appearance model, and uses an image pyramid algorithm and the inverse compositional algorithm to accelerate the iterative process, so that facial feature calibration is accurate and robust. The invention thus fuses the body posture analysis data and depth data provided by the Kinect camera into the Depth-AAM algorithm, forming a face feature extraction method based on 2.5-dimensional images.
As an improvement, there are 68 calibration points, and the manual calibration follows the face image contour as a standard: the 1st point is the right eye corner, and the other 67 point positions are likewise uniquely determined along the face contour. Calibrated this way, the computation load is small, which helps the invention run efficiently.
As an improvement, the resolution of the face image I_i' under the standard pose in step ④ is 42×43, and the resolution of each appearance vector A_i is also 42×43; this keeps the computation load small and helps the invention run efficiently.
As an improvement, only the first Depth-AAM fit uses the default shape parameter p and global shape parameter q obtained in steps ⑤ and ⑥; every later fit is initialized with the shape parameter p and global shape parameter q obtained after the previous Depth-AAM fit converged, so that the invention extracts facial features faster and more accurately.
As an improvement, a judgment is made before step ⑦: if the head joint rotation direction θ is greater than 30 degrees and the direction confidence Conf_θ is greater than 0.8, step ⑦ is carried out, i.e. the segmented human head RGBD four-channel image is substituted into the iterative Depth-AAM fit; otherwise step ⑦ is skipped and the feature extraction ends. In this way, whether the target image is a valid, recognizable image is judged before the iterative fit; for example, if the segmented target image shows the back of a person's head, it obviously contains no face and cannot be recognized. This arrangement lets the invention run effectively and avoids invalid cases.
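A sketch of this gate (the thresholds are taken from the text; the fitter callable is ours):

```python
def extract_features(rgbd_head, theta, conf_theta, fit):
    """Run the iterative Depth-AAM fit only when the Kinect head-pose
    check passes; otherwise skip, e.g. for a back-of-head view."""
    if theta > 30.0 and conf_theta > 0.8:   # judgment before step 7
        return fit(rgbd_head)               # 68 landmarks and lambda
    return None                             # no recognizable face
```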
Description of drawings
Fig. 1 is a flow diagram of training the appearance model of the Depth-AAM algorithm with the principal component analysis method (the Depth-AAM training algorithm).
Fig. 2 is a flow diagram of extracting facial features based on the trained appearance model of the Depth-AAM algorithm (the Depth-AAM fitting algorithm).
Fig. 3 shows the locations of the 68 mark points on the face.
Fig. 4 is the Depth-AAM triangular mesh diagram.
Embodiment
The invention is further described below in conjunction with a specific embodiment.
Fig. 3 shows the locations of the 68 mark points on the face. The overlapping part of the mouth is marked 60, 61, 62, 63, 64, 65 counterclockwise, and the mouth center is mark number 66.
The face feature extraction method proposed by the present invention is based on the Depth-AAM algorithm. The Depth-AAM algorithm is an improvement of the AAM algorithm, a facial feature localization algorithm for two-dimensional images: it makes full use of the body posture analysis data and depth data provided by the Kinect camera and fuses them into the AAM algorithm, forming a facial feature localization method based on 2.5-dimensional images.
The face feature extraction method comprises the following steps.
1) Train the appearance model of the Depth-AAM algorithm with the principal component analysis method:
① Use the Kinect camera to collect the texture image and depth image of each training face image, compress the depth image from the 0–65535 pixel range to the 0–255 pixel range, substitute it into the α channel of a four-channel image and merge it with the texture image into an RGBD four-channel image, then manually calibrate a number of feature points on it;
② Define the face shape as the shape vector formed by the v vertex coordinates of the mesh, s = (x_1, y_1, …, x_v, y_v)^T. Build a two-dimensional linear model with the principal component analysis method, expressing the shape vector as a base shape s_0 plus a linear combination of m shape vectors s_i, s = s_0 + Σ_{i=1}^{m} p_i·s_i, where p = (p_1, …, p_m)^T is the eigenvalue vector of the shape matrix, s_0 is the standard pose of the face image, and s_i is the eigenvector corresponding to eigenvalue p_i;
③ Transform s_0, each RGBD four-channel image I_i and its corresponding manual markup s_i* into the RGBD four-channel face image under the standard pose using a piecewise affine transform, i.e. put the triangular meshes of s_0 and s_i' into one-to-one correspondence, as shown in Figure 4. The piecewise affine transform is x' = a_1·x + a_2·y + a_3 and y' = b_1·x + b_2·y + b_3, where (x, y) is a coordinate on s_0 and (x', y') is the corresponding coordinate on s_i'; a_1 and b_2 are the scaling factors in the X and Y directions, a_2 and b_1 are rotation factors, and a_3 and b_3 are the translations in the X and Y directions. Each triangle only needs its own three vertices substituted into the affine expression; no per-pixel computation is required. The corresponding parameters (a_1, a_2, a_3, b_1, b_2, b_3) are obtained with the method of undetermined coefficients;
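This per-triangle shortcut maps directly onto OpenCV: the three vertex pairs fix the affine matrix, and the pixels inside the triangle are then warped in bulk (a sketch; `src_tri`/`dst_tri` are assumed to be (3, 2) vertex arrays):

```python
import cv2
import numpy as np

def warp_triangle(src, dst, src_tri, dst_tri):
    """Warp one mesh triangle of src into dst in place. Only the three
    vertices enter the affine solve; no per-pixel coefficients needed."""
    M = cv2.getAffineTransform(src_tri.astype(np.float32),
                               dst_tri.astype(np.float32))
    warped = cv2.warpAffine(src, M, (dst.shape[1], dst.shape[0]))
    # Copy only the pixels inside the destination triangle.
    mask = np.zeros(dst.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, dst_tri.astype(np.int32), 1)
    dst[mask == 1] = warped[mask == 1]
```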
④ Pass all training face images through the transformation of step ③ to obtain their face images I_i' under the standard pose, and apply the principal component analysis method:

A(x) = A_0(x) + Σ_{i=1}^{n} λ_i·A_i(x)

Here λ_i is the parameter of the i-th appearance vector; the appearance parameter vector λ = {λ_1, λ_2, …, λ_n} is the set of eigenvalues of the input image under the appearance parameters of this AAM model and represents the full information of the input image; the i-th appearance vector A_i(x) corresponds to the i-th largest eigenvalue in the appearance parameter vector.
Training through steps ①②③④ yields the eigenvalues A_0, A_1, A_2, etc. of the appearance features and completes the training of the appearance model of the Depth-AAM algorithm.
2) Extract facial features based on the trained appearance model of the Depth-AAM algorithm:
⑤ Segment the human body image with the Kinect for Windows SDK API and obtain the head node position coordinates, the head joint rotation direction θ and its confidence Conf_θ from the body depth image. In the body depth image, set the body region to white and the region outside the body to black; starting from the head node position, continuously enlarge the search range upward, downward, leftward and rightward at the same time; when the upper, left and right edges have all reached the black border, stop the search of the lower boundary and take the resulting maximum region as the face region. Its top-left corner coordinates are denoted (x_HeadLU, y_HeadLU), and the region length and width are denoted (length_Head, width_Head). In this specific embodiment, a probe search algorithm uses the head node position and nearby depth information to expand to the whole head region, recording the region position (t_x, t_y) and its lengths in the X and Y directions, which are used to initialize the global shape function;
⑥ The global shape parameter q is defined by

N(x; q) = [1+a, −b; b, 1+a]·(x, y)^T + (t_x, t_y)^T

where the parameters (a, b) are given by a = k·cosθ − 1 and b = k·sinθ, and (t_x, t_y) is the translation in the X and Y directions; for convenience of writing, (a, b, t_x, t_y) is denoted (q_1, q_2, q_3, q_4) and constitutes the global shape parameter q. The purpose here is to zoom and translate the target face image obtained in step ⑤ so that it can be compared with the standard pose of the appearance model obtained in steps ①②③④;
⑦ The objective function of the Depth-AAM fit is the squared difference between the input image and the model-synthesized image,

E = Σ_{x∈s_0} [A_0(x) + Σ_{i=1}^{n} λ_i·A_i(x) − I(N(W(x;p);q))]^2

The first Depth-AAM fit uses the rotation direction θ to initialize q_1 and q_2, and the face position information obtained in step ⑤ to initialize q_3 and q_4; p represents the standard pose after initialization. Solving for the parameters p and q that minimize the image energy difference yields the coordinate values of the calibration points and the appearance parameter vector, which completes the facial feature extraction. Specifically, differentiate with respect to p and q to obtain the parameter increments Δp and Δq, and iterate to minimize the image energy difference:

Δq = H^{-1} Σ_{x∈s_0} SD_i(x)^T [I(N(W(x;p);q)) − A_0(x)], i = 1, …, 4,

Δp = H^{-1} Σ_{x∈s_0} SD_{j+4}(x)^T [I(N(W(x;p);q)) − A_0(x)], j = 1, …, 68,

where the steepest-descent images are SD_k(x) = ∇A_0(x)·∂N/∂q_k for the global parameters, k = 1, …, 4, and SD_{l+4}(x) = ∇A_0(x)·∂W/∂p_l for the shape parameters, l = 1, …, 68, and H is the Hessian matrix,

H = Σ_{x∈s_0} [SD_k(x)]^T [SD_k(x)].
There are 68 calibration points, and the manual calibration follows the face image contour as a standard: the 1st point is the right eye corner, and the other 67 point positions are likewise uniquely determined along the face contour. The calibration point positions are shown in Fig. 3.
Because directly fitting the segmented face image with a random shape parameter q gives low precision and makes fast convergence difficult, a layered pyramid algorithm is adopted: the segmented face image is reduced to 1/4 and 1/2 of its original size. A first Depth-AAM fit is run on the 1/4-size target face image, giving a rough shape parameter p_1 and a global deformation parameter q; p_1 is magnified 2 times and substituted into the Depth-AAM for a second fit, giving shape parameter p_2 and global deformation parameter q_2; p_2 is again magnified 2 times and substituted into the Depth-AAM for a third fit, giving p_3, which contains the 68 point coordinates obtained by the fit, while the vector λ is the appearance parameter. The pyramid algorithm has 3 layers, the maximum number of iterations per layer is 30, and the iteration is considered converged when the modulus of the difference between two successive values of p is less than 0.001. After the third Depth-AAM fit converges, the appearance parameters are computed by projecting the warped input image onto the appearance vectors:

λ_i = Σ_{x∈s_0} A_i(x)·[I(N(W(x;p);q)) − A_0(x)]
The resolution of the face image I_i' under the standard pose in step ④ is 42×43, and the resolution of each appearance vector A_i is also 42×43.
Only the first Depth-AAM fit uses the default shape parameter p and global shape parameter q obtained in steps ⑤ and ⑥; every later fit is initialized with the shape parameter p and global shape parameter q obtained after the previous Depth-AAM fit converged.
A judgment is made before step ⑦: if the head joint rotation direction θ is greater than 30 degrees and the direction confidence Conf_θ is greater than 0.8, step ⑦ is carried out, i.e. the segmented human head RGBD four-channel image is substituted into the iterative Depth-AAM fit; otherwise step ⑦ is skipped and the feature extraction ends.

Claims (6)

1. A face feature extraction method, characterized in that it comprises the following steps:
1) Train the appearance model of the Depth-AAM algorithm with the principal component analysis method:
① Use the Kinect camera to collect the texture image and depth image of each training face image, compress the depth image from the 0–65535 pixel range to the 0–255 pixel range, substitute it into the α channel of a four-channel image and merge it with the texture image into an RGBD four-channel image, then manually calibrate a number of feature points on it;
② Define the face shape as the shape vector formed by the v vertex coordinates of the mesh, s = (x_1, y_1, …, x_v, y_v)^T; build a two-dimensional linear model with the principal component analysis method, expressing the shape vector as a base shape s_0 plus a linear combination of m shape vectors s_i, s = s_0 + Σ_{i=1}^{m} p_i·s_i, where p = (p_1, …, p_m)^T is the eigenvalue vector of the shape matrix, s_0 is the standard pose of the face image, and s_i is the eigenvector corresponding to eigenvalue p_i;
③ Transform s_0, each RGBD four-channel image I_i and its corresponding manual markup s_i* into the RGBD four-channel face image under the standard pose using a piecewise affine transform, i.e. put the triangular meshes of s_0 and s_i' into one-to-one correspondence; the piecewise affine transform is x' = a_1·x + a_2·y + a_3 and y' = b_1·x + b_2·y + b_3, where (x, y) is a coordinate on s_0, (x', y') is the corresponding coordinate on s_i', a_1 and b_2 are the scaling factors in the X and Y directions, a_2 and b_1 are rotation factors, and a_3 and b_3 are the translations in the X and Y directions; the corresponding parameters (a_1, a_2, a_3, b_1, b_2, b_3) are obtained with the method of undetermined coefficients;
④ Pass all training face images through the transformation of step ③ to obtain their face images I_i' under the standard pose, and apply the principal component analysis method: A(x) = A_0(x) + Σ_{i=1}^{n} λ_i·A_i(x), where λ_i is the parameter of the i-th appearance vector, the appearance parameter vector λ = {λ_1, λ_2, …, λ_n} is the set of eigenvalues of the input image under the appearance parameters of this AAM model and represents the full information of the input image, and the i-th appearance vector A_i(x) corresponds to the i-th largest eigenvalue in the appearance parameter vector;
training through steps ①②③④ yields the eigenvalues of each appearance feature and completes the training of the appearance model of the Depth-AAM algorithm;
2) Extract facial features based on the trained appearance model of the Depth-AAM algorithm:
⑤ Segment the human body image with the Kinect for Windows SDK API and obtain the head node position coordinates, the head joint rotation direction θ and its confidence Conf_θ from the body depth image; in the body depth image, set the body region to white and the region outside the body to black; starting from the head node position, continuously enlarge the search range upward, downward, leftward and rightward at the same time; when the upper, left and right edges have all reached the black border, stop the search of the lower boundary and take the resulting maximum region as the face region, whose top-left corner coordinates are denoted (x_HeadLU, y_HeadLU) and whose length and width are denoted (length_Head, width_Head);
⑥ Define the global shape parameter q by N(x; q) = [1+a, −b; b, 1+a]·(x, y)^T + (t_x, t_y)^T, where the parameters (a, b) are given by a = k·cosθ − 1 and b = k·sinθ, and (t_x, t_y) is the translation in the X and Y directions; for convenience of writing, (a, b, t_x, t_y) is denoted (q_1, q_2, q_3, q_4) and constitutes the global shape parameter q;
⑦ The objective function of the Depth-AAM fit is the squared difference between the input image and the model-synthesized image,

E = Σ_{x∈s_0} [A_0(x) + Σ_{i=1}^{n} λ_i·A_i(x) − I(N(W(x;p);q))]^2

the first Depth-AAM fit uses the rotation direction θ to initialize q_1 and q_2, and the face position information obtained in step ⑤ to initialize q_3 and q_4; solving for the parameters p and q that minimize the image energy difference yields the coordinate values of the calibration points and the appearance parameter vector, which completes the facial feature extraction; specifically, differentiate with respect to p and q to obtain the parameter increments Δp and Δq, and iterate to minimize the image energy difference:

Δq = H^{-1} Σ_{x∈s_0} SD_i(x)^T [I(N(W(x;p);q)) − A_0(x)], i = 1, …, 4,

Δp = H^{-1} Σ_{x∈s_0} SD_{j+4}(x)^T [I(N(W(x;p);q)) − A_0(x)], j = 1, …, 68,

where the steepest-descent images are SD_k(x) = ∇A_0(x)·∂N/∂q_k for the global parameters, k = 1, …, 4, and SD_{l+4}(x) = ∇A_0(x)·∂W/∂p_l for the shape parameters, l = 1, …, 68, and H is the Hessian matrix, H = Σ_{x∈s_0} [SD_k(x)]^T [SD_k(x)].
2. The face feature extraction method according to claim 1, characterized in that there are 68 calibration points and the manual calibration follows the face image contour as a standard: the 1st point is the right eye corner, and the other 67 point positions are likewise uniquely determined along the face contour.
3. The face feature extraction method according to claim 1, characterized in that a layered pyramid algorithm is adopted: the segmented face image is reduced to 1/4 and 1/2 of its original size; a first Depth-AAM fit is run on the 1/4-size target face image, giving a rough shape parameter p_1 and a global deformation parameter q; p_1 is magnified 2 times and substituted into the Depth-AAM for a second fit, giving shape parameter p_2 and global deformation parameter q_2; p_2 is again magnified 2 times and substituted into the Depth-AAM for a third fit, giving p_3, which contains the 68 point coordinates obtained by the fit, while the vector λ is the appearance parameter; the pyramid algorithm has 3 layers, the maximum number of iterations per layer is 30, and the iteration is considered converged when the modulus of the difference between two successive values of p is less than 0.001.
4. The face feature extraction method according to claim 1, characterized in that the resolution of the face image I_i' under the standard pose in step ④ is 42×43, and the resolution of each appearance vector A_i is also 42×43.
5. The face feature extraction method according to claim 1, characterized in that only the first Depth-AAM fit uses the default shape parameter p and global shape parameter q obtained in steps ⑤ and ⑥; every later fit is initialized with the shape parameter p and global shape parameter q obtained after the previous Depth-AAM fit converged.
6. The face feature extraction method according to claim 1, characterized in that a judgment is made before step ⑦: if the head joint rotation direction θ is greater than 30 degrees and the direction confidence Conf_θ is greater than 0.8, step ⑦ is carried out, i.e. the segmented human head RGBD four-channel image is substituted into the iterative Depth-AAM fit; otherwise step ⑦ is skipped and the feature extraction ends.
CN201210376751.4A 2012-09-29 2012-09-29 Method for extracting face features Active CN102880866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210376751.4A CN102880866B (en) 2012-09-29 2012-09-29 Method for extracting face features


Publications (2)

Publication Number Publication Date
CN102880866A true CN102880866A (en) 2013-01-16
CN102880866B CN102880866B (en) 2014-12-17

Family

ID=47482183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210376751.4A Active CN102880866B (en) 2012-09-29 2012-09-29 Method for extracting face features

Country Status (1)

Country Link
CN (1) CN102880866B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980809B (en) * 2016-01-19 2020-08-21 深圳市朗驰欣创科技股份有限公司 Human face characteristic point detection method based on ASM

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1072018A1 (en) * 1998-04-13 2001-01-31 Eyematic Interfaces, Inc. Wavelet-based facial motion capture for avatar animation
CN101819628A (en) * 2010-04-02 2010-09-01 清华大学 Method for performing face recognition by combining rarefaction of shape characteristic
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking gestures and actions of human face

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413352A (en) * 2013-07-29 2013-11-27 西北工业大学 Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion
CN103679193A (en) * 2013-11-12 2014-03-26 华南理工大学 FREAK-based high-speed high-density packaging component rapid location method
CN105096377A (en) * 2014-05-14 2015-11-25 华为技术有限公司 Image processing method and apparatus
US10043308B2 (en) 2014-05-14 2018-08-07 Huawei Technologies Co., Ltd. Image processing method and apparatus for three-dimensional reconstruction
CN104123545A (en) * 2014-07-24 2014-10-29 江苏大学 Real-time expression feature extraction and identification method
CN104123545B (en) * 2014-07-24 2017-06-16 江苏大学 A kind of real-time human facial feature extraction and expression recognition method
CN104504856A (en) * 2014-12-30 2015-04-08 天津大学 Fatigue driving detection method based on Kinect and face recognition
CN107851299A (en) * 2015-07-21 2018-03-27 索尼公司 Information processor, information processing method and program
CN107851299B (en) * 2015-07-21 2021-11-30 索尼公司 Information processing apparatus, information processing method, and program
CN105228033A (en) * 2015-08-27 2016-01-06 联想(北京)有限公司 A kind of method for processing video frequency and electronic equipment
CN105228033B (en) * 2015-08-27 2018-11-09 联想(北京)有限公司 A kind of method for processing video frequency and electronic equipment
CN105184278A (en) * 2015-09-30 2015-12-23 深圳市商汤科技有限公司 Human face detection method and device
CN106815547A (en) * 2015-12-02 2017-06-09 掌赢信息科技(上海)有限公司 It is a kind of that method and the electronic equipment that standardized model is moved are obtained by multi-fit
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN107045618B (en) * 2016-02-05 2020-07-03 北京陌上花科技有限公司 Facial expression recognition method and device
CN106022214A (en) * 2016-05-04 2016-10-12 南京工程学院 Effective human face feature extraction method in unconstrained environment
CN106022214B (en) * 2016-05-04 2019-10-08 南京工程学院 Effective face feature extraction method under unconstrained condition
CN106778506A (en) * 2016-11-24 2017-05-31 重庆邮电大学 A kind of expression recognition method for merging depth image and multi-channel feature
CN106897675B (en) * 2017-01-24 2021-08-17 上海交通大学 Face living body detection method combining binocular vision depth characteristic and apparent characteristic
CN106897675A (en) * 2017-01-24 2017-06-27 上海交通大学 The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features
CN107462204A (en) * 2017-09-21 2017-12-12 武汉武大卓越科技有限责任公司 A kind of three-dimensional pavement nominal contour extracting method and system
CN108595600B (en) * 2018-04-18 2023-12-15 努比亚技术有限公司 Photo classification method, mobile terminal and readable storage medium
CN108595600A (en) * 2018-04-18 2018-09-28 努比亚技术有限公司 Photo classification method, mobile terminal and readable storage medium storing program for executing
CN108805889B (en) * 2018-05-07 2021-01-08 中国科学院自动化研究所 Edge-guided segmentation method, system and equipment for refined salient objects
CN108805889A (en) * 2018-05-07 2018-11-13 中国科学院自动化研究所 The fining conspicuousness method for segmenting objects of margin guide and system, equipment
CN108734144A (en) * 2018-05-28 2018-11-02 北京文香信息技术有限公司 A kind of speaker's identity identifying method based on recognition of face
CN109584347A (en) * 2018-12-18 2019-04-05 重庆邮电大学 A kind of augmented reality mutual occlusion processing method based on active apparent model
CN109584347B (en) * 2018-12-18 2023-02-21 重庆邮电大学 Augmented reality virtual and real occlusion processing method based on active appearance model
CN109703465A (en) * 2018-12-28 2019-05-03 百度在线网络技术(北京)有限公司 The control method and device of vehicle-mounted imaging sensor
CN110580680B (en) * 2019-09-09 2022-07-05 武汉工程大学 Face super-resolution method and device based on combined learning
CN110580680A (en) * 2019-09-09 2019-12-17 武汉工程大学 face super-resolution method and device based on combined learning
CN112617758A (en) * 2020-12-31 2021-04-09 厦门越人健康技术研发有限公司 Traditional Chinese medicine health state identification method based on artificial intelligence
CN112990348A (en) * 2021-04-12 2021-06-18 华南理工大学 Small target detection method for self-adjustment feature fusion
CN112990348B (en) * 2021-04-12 2023-08-22 华南理工大学 Small target detection method based on self-adjusting feature fusion
CN113361382A (en) * 2021-05-14 2021-09-07 沈阳工业大学 Hand shape recognition method based on compressed relative contour feature points
CN113361382B (en) * 2021-05-14 2024-02-02 沈阳工业大学 Hand shape recognition method based on compressed relative contour feature points

Also Published As

Publication number Publication date
CN102880866B (en) 2014-12-17

Similar Documents

Publication Publication Date Title
CN102880866B (en) Method for extracting face features
Szeptycki et al. A coarse-to-fine curvature analysis-based rotation invariant 3D face landmarking
CN101398886B (en) Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN101777116B (en) Method for analyzing facial expressions on basis of motion tracking
CN102697508B (en) Method for performing gait recognition by adopting three-dimensional reconstruction of monocular vision
CN108932475A (en) A kind of Three-dimensional target recognition system and method based on laser radar and monocular vision
CN106469465A (en) A kind of three-dimensional facial reconstruction method based on gray scale and depth information
CN100389430C (en) AAM-based head pose real-time estimating method and system
CN101739719B (en) Three-dimensional gridding method of two-dimensional front view human face image
CN101894254B (en) Contouring method-based three-dimensional face recognition method
CN108564616A (en) Method for reconstructing three-dimensional scene in the rooms RGB-D of fast robust
CN104408462B (en) Face feature point method for rapidly positioning
CN102800126A (en) Method for recovering real-time three-dimensional body posture based on multimodal fusion
CN104115192A (en) Improvements in or relating to three dimensional close interactions
CN110008913A (en) The pedestrian's recognition methods again merged based on Attitude estimation with viewpoint mechanism
CN104091155A (en) Rapid iris positioning method with illumination robustness
CN103646416A (en) Three-dimensional cartoon face texture generation method and device
CN105159452B (en) A kind of control method and system based on human face modeling
CN106874850A (en) One kind is based on three-dimensional face point cloud characteristic point positioning method
CN101794459A (en) Seamless integration method of stereoscopic vision image and three-dimensional virtual object
CN102081733A (en) Multi-modal information combined pose-varied three-dimensional human face five-sense organ marking point positioning method
CN107357426A (en) A kind of motion sensing control method for virtual reality device
CN112329723A (en) Binocular camera-based multi-person human body 3D skeleton key point positioning method
CN108694348B (en) Tracking registration method and device based on natural features
Jiménez et al. Face tracking and pose estimation with automatic three-dimensional model construction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant