CN104657713A - Three-dimensional face calibrating method capable of resisting posture and facial expression changes - Google Patents


Info

Publication number
CN104657713A
CN104657713A (application CN201510067374.XA)
Authority
CN
China
Prior art keywords: face, dimensional, shape, depth map, represent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510067374.XA
Other languages: Chinese (zh)
Other versions: CN104657713B (en)
Inventor
胡浩基
刘蓉
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201510067374.XA
Publication of CN104657713A
Application granted
Publication of CN104657713B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a three-dimensional face calibration method robust to pose and expression changes, comprising an active appearance model construction stage and a face calibration stage. In the model construction stage, three-dimensional faces are acquired with a three-dimensional imaging device, important landmarks of the face are manually annotated, and an active appearance model based on the depth map is built from the mesh shape and appearance information of the face. In the calibration stage, the face is first coarsely calibrated with an average nose model, and the test face is then matched to the depth-map-based active appearance model for fine calibration. This coarse-to-fine scheme withstands pose and expression changes, so faces can be calibrated accurately under natural conditions; converting the three-dimensional face to a depth map improves calibration efficiency. The method is therefore significant for promoting the practical application of three-dimensional faces to identity authentication.

Description

A three-dimensional face calibration method robust to pose and expression changes
Technical field
The present invention relates to the technical field of image processing and pattern recognition, and in particular to pose-robust face calibration in three-dimensional face recognition.
Background technology
In today's information age, effective identification and authentication of personal identity is a crucial problem in security systems such as airport security and access control. Traditional authentication methods suffer from inconvenience and low security; biometric recognition, as a novel approach to personal identification, offers security, ease of maintenance and ubiquity, providing a solution to these problems.
Biometric recognition refers to technology that uses intrinsic physiological and behavioral characteristics of the human body as features to identify and verify identity by intelligent computation. Face recognition is a branch of biometrics with advantages such as easy acquisition, friendliness and high user acceptance. Three-dimensional face recognition uses three-dimensional images for face recognition; because a three-dimensional face carries the original geometric information of the face, it is expected to overcome the pose-variation problems encountered in current face recognition.
Most three-dimensional face recognition methods proposed so far target faces with no pose or only very small poses, but practical applications require recognizing faces under arbitrary poses, which remains a difficult problem. The difficulty of pose-invariant three-dimensional face recognition is that, when a face undergoes a large pose change, the face data not only changes considerably with the rotation but also loses data in some regions due to self-occlusion, so the face becomes incomplete and loses its symmetry. Expression changes deform the face, altering its geometry and hence the three-dimensional data. In this situation, how to calibrate the face accurately, bring its pose to frontal, and fill in the lost data so that faces in the gallery can be matched better, is exactly the problem to be solved. Many existing methods are based on three-dimensional surface curvature or two-dimensional contours, and such features are easily affected by the loss of three-dimensional face data; face calibration therefore remains a difficult problem, and no sufficiently robust method has solved it so far.
Summary of the invention
The object of the invention is to address the deficiencies of the prior art in face calibration for current three-dimensional face recognition by proposing a three-dimensional face calibration method robust to pose and expression changes; the method remains robust even when the face undergoes large pose changes.
This object is achieved through the following technical solution: a three-dimensional face calibration method robust to pose and expression changes, comprising the following steps:
(1) Active appearance model construction stage, comprising the following sub-steps:
(1.1) Collect training faces: acquire faces in a neutral pose (i.e., without any pose variation) with a three-dimensional imaging device to form the face training library; manually locate the key feature points, and obtain the contour points of the face by scanning the boundary between face and background, yielding a sufficient set of landmark points;
(1.2) Generate face depth maps: in the coordinate system of the three-dimensional face, build a grid along the x-y axes at a fixed resolution; interpolate the z coordinates onto the grid by bicubic interpolation, filling holes, to obtain a three-dimensional face mesh; project the mesh onto the x-y plane with the z coordinate as the pixel value to obtain the depth map; project the corresponding three-dimensional landmark points onto the x-y plane to obtain their two-dimensional coordinates on the depth map;
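A minimal sketch of this step, assuming the face is given as an N x 3 NumPy point cloud; SciPy's `griddata` with `method='cubic'` stands in for the bicubic interpolation named in the text, and nearest-neighbour values fill the remaining holes:

```python
import numpy as np
from scipy.interpolate import griddata

def point_cloud_to_depth_map(points, resolution=1.0):
    """Project a 3-D face point cloud onto an x-y grid and interpolate
    z values to obtain a depth map (sketch of step 1.2; the 1 mm
    resolution default follows the embodiment)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xs = np.arange(x.min(), x.max() + resolution, resolution)
    ys = np.arange(y.min(), y.max() + resolution, resolution)
    gx, gy = np.meshgrid(xs, ys)
    # Interpolate z onto the grid; cells outside the data stay NaN for now
    depth = griddata((x, y), z, (gx, gy), method='cubic')
    # Fill the remaining cavities with nearest-neighbour values
    holes = np.isnan(depth)
    depth[holes] = griddata((x, y), z, (gx[holes], gy[holes]), method='nearest')
    return depth
```

The landmark projection is then just dropping the z coordinate of each 3-D landmark and quantizing by the same grid resolution.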
(1.3) Preprocess the faces: apply median filtering to the depth map from step 1.2 to remove spike points, apply Gaussian filtering to smooth the face and suppress noise, and downsample the face to within 100,000 pixels;
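The preprocessing chain can be sketched as follows; the 3x3 median window and sigma = 1 Gaussian are assumptions, since the patent only names the filters and the 100,000-pixel budget:

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def preprocess_depth_map(depth, max_pixels=100_000):
    """Step 1.3 sketch: median filtering removes spike outliers,
    Gaussian smoothing suppresses noise, and the map is decimated
    until it contains at most ~100,000 pixels."""
    out = median_filter(depth, size=3)
    out = gaussian_filter(out, sigma=1.0)
    while out.size > max_pixels:
        out = out[::2, ::2]   # simple decimation by 2 along each axis
    return out
```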
(1.4) Calibrate the face shapes: represent each face shape by the mesh formed from all landmark points of a training face, and align the face shapes to a common shape by generalized Procrustes analysis;
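Generalized Procrustes analysis removes translation, scale and rotation from each landmark shape; the pairwise similarity alignment it iterates against the running mean can be sketched in plain NumPy:

```python
import numpy as np

def align_similarity(shape, ref):
    """Align `shape` (v x 2 landmarks) to `ref` with a similarity transform
    (translation, scale, rotation) -- the pairwise building block that
    generalized Procrustes analysis iterates against the mean shape."""
    sc, rc = shape - shape.mean(axis=0), ref - ref.mean(axis=0)
    # Optimal rotation via the SVD of the cross-covariance (orthogonal Procrustes)
    U, S, Vt = np.linalg.svd(sc.T @ rc)
    R = U @ Vt
    scale = S.sum() / (sc ** 2).sum()
    return scale * sc @ R + ref.mean(axis=0)
```

A full GPA loop would align every training shape to the current mean, recompute the mean, and repeat until the mean stabilizes.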
(1.5) Generate the shape model: subtract the common shape from every training face shape and extract the principal components by principal component analysis as the shape vectors; the shape model is expressed as:
s = s_0 + Σ_{i=1}^{n} p_i s_i,   p_i = s_i^T (s - s_0)    (1)
where p_i denotes the shape parameters, s_0 the base shape, s an instance of the shape model, s = (x_1, y_1, x_2, y_2, ..., x_v, y_v)^T, s_i (i = 1, 2, ..., n) the shape vectors, n the dimension of the shape model, and v the number of landmark points;
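Equation (1) can be checked numerically: because the PCA shape vectors are orthonormal, the parameters of any shape are recovered by projection, p_i = s_i^T (s - s_0). The synthetic landmark data below is illustrative only:

```python
import numpy as np

# Stack aligned training shapes as rows; the mean is the base shape s0 and
# the right singular vectors of the centered data are the shape vectors s_i.
rng = np.random.default_rng(1)
shapes = rng.normal(size=(20, 8))        # 20 training shapes, v = 4 points -> 8 coords
s0 = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - s0, full_matrices=False)
n = 3                                    # keep n principal components
basis = Vt[:n]                           # rows are the shape vectors s_i
s = shapes[0]
p = basis @ (s - s0)                     # p_i = s_i^T (s - s0), as in Eq. (1)
recon = s0 + p @ basis                   # s ~= s0 + sum_i p_i s_i
```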
(1.6) Normalize the training face shapes: triangulate the base shape s_0, and warp the appearance of every training face from its own shape to the base shape s_0, producing faces with the base shape; the warp is piecewise linear, realized as an affine transformation between corresponding triangles of the two mesh shapes;
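Each triangle-to-triangle affine map in the piecewise-linear warp is fixed by the three vertex correspondences. A sketch (in the full warp this is solved for every triangle of the triangulated base shape; Delaunay triangulation is an assumption, as the patent only says "triangulate"):

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve the 3x2 affine matrix A with [x y 1] @ A = [x' y'] that maps
    one triangle's vertices onto another's (step 1.6 building block)."""
    src = np.hstack([src_tri, np.ones((3, 1))])   # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src, dst_tri, rcond=None)
    return A

def warp_point(pt, A):
    """Apply the affine map to a single 2-D point."""
    return np.append(pt, 1.0) @ A

src = np.array([[0., 0.], [1., 0.], [0., 1.]])
dst = np.array([[2., 2.], [4., 2.], [2., 5.]])
A = triangle_affine(src, dst)
```

Affine maps preserve barycenters, so the centroid of `src` lands on the centroid of `dst`.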
(1.7) Generate the active appearance model: from the shape-normalized training faces of step 1.6, extract the principal components by principal component analysis as the appearance vectors; the appearance model is expressed as:
M(x) = A_0(x) + Σ_{i=1}^{m} λ_i A_i(x),   λ_i = A_i(x)^T (M(x) - A_0(x))    (2)
where λ_i denotes the appearance-model parameters, M(x) an instance of the appearance model, A_0(x) the standard appearance template, and A_i(x) (i = 1, 2, ..., m) the appearance vectors; M(x), A_0(x) and A_i(x) are two-dimensional images, and m is the number of appearance vectors;
(2) Test face calibration stage, comprising the following sub-steps:
(2.1) Collect the face test library: acquire three-dimensional faces with various pose and expression changes using a three-dimensional imaging device to form the face test library;
(2.2) Generate face depth maps: for every three-dimensional face in the test library, build a grid along the x-y axes at a fixed resolution in the coordinate system of the face; interpolate the z coordinates onto the grid by bicubic interpolation, filling holes, to obtain a three-dimensional face mesh; project the mesh onto the x-y plane with the z coordinate as the pixel value to obtain the depth map;
(2.3) Preprocess the faces: apply median filtering to the depth map from step 2.2 to remove spike points, apply Gaussian filtering to smooth the face and suppress noise, and downsample the face to within 100,000 pixels;
(2.4) Coarsely calibrate the faces: the active appearance model search requires an accurate initialization to avoid converging to a local minimum, which is especially important when the face carries a large pose; the invention therefore first applies a nose-tip detection and coarse calibration method based on a depth-map average nose model, detecting the nose tip of the face and coarsely calibrating it, with the following sub-steps:
(2.4.1) Select the faces with neutral pose and neutral expression in the training library, superpose and average them with the nose tip as the reference to obtain an average face, and segment the nose region from the average face as the average nose model;
(2.4.2) Rotate every three-dimensional face in the test library around the Y axis by an angle β to obtain a series of rotated faces, where β ∈ [-90°, 90°] in steps of 6°, so every face in the test library yields R = 31 rotation angles; the rotation formula is:
[ x_i^β ]   [  cos β   0   sin β ] [ x_i ]
[ y_i^β ] = [    0     1     0   ] [ y_i ]    (3)
[ z_i^β ]   [ -sin β   0   cos β ] [ z_i ]
where a three-dimensional point is written (x_i, y_i, z_i) (i = 1, 2, ..., N) and the corresponding rotated point is (x_i^β, y_i^β, z_i^β); following the method of step 2.2, every rotated three-dimensional face is converted into a depth map;
(2.4.3) Perform template matching between every face depth map from step 2.4.2 and the average nose model from step 2.4.1, with normalized cross-correlation as the criterion, obtaining a normalized cross-correlation map;
(2.4.4) Compute the maximum correlation coefficient of the normalized cross-correlation map; its position in the map is the nose-region position, which gives the corresponding nose tip and rotation angle β_t;
(2.4.5) Rotate the test face around the Y axis by β_t according to formula (3) to complete the coarse calibration; after coarse calibration the pose of the face is close to neutral, and the face is converted into a near-frontal depth map, i.e., the angles around the X, Y and Z axes all lie within [-20°, 20°], which meets the face-angle calibration range of the active appearance model;
(2.4.6) Draw a circle centered at the detected nose tip with the face width as diameter to segment the valid face region, further reducing the initialization image size and ensuring good convergence of the active appearance model during iteration;
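The rotation of formula (3) and the matching criterion of steps 2.4.2-2.4.4 can be sketched as follows; the NCC function here scores a single aligned position, whereas the full search slides the nose template over every candidate depth map:

```python
import numpy as np

def rotate_y(points, beta_deg):
    """Rotate an N x 3 point cloud about the Y axis by beta degrees (formula (3))."""
    b = np.deg2rad(beta_deg)
    R = np.array([[ np.cos(b), 0.0, np.sin(b)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(b), 0.0, np.cos(b)]])
    return points @ R.T

def ncc(patch, template):
    """Normalized cross-correlation between two equal-size depth patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    return float((p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12))

# Candidate angles of step 2.4.2: beta in [-90, 90] with a 6-degree step
candidate_angles = np.arange(-90, 91, 6)   # R = 31 angles
```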
(2.5) Match the coarsely calibrated face to the active appearance model: set up an objective function and compute the model parameters iteratively; the minimized objective function can be expressed as:
arg min_{p,λ} ||I(W(x; p)) - A_0 - A λ||^2    (4)
where W(x; p) denotes the piecewise-linear warp function, I the test face depth map, A = [A_1, A_2, ..., A_m] the matrix of appearance vectors, and λ = [λ_1, λ_2, ..., λ_m] the appearance parameters; the objective ||I(W(x; p)) - A_0 - A λ||^2 is linear in λ, so λ can be updated additively, λ ← λ + Δλ, but it is not linear in p, so p is updated by inverse composition, W(x; p) ← W(x; p) ∘ W(x; Δp)^{-1};
(2.6) Update the parameters by iterative search with the inverse compositional scheme until the objective converges, with the following sub-steps:
(2.6.1) Precomputation: compute the gradients of the standard appearance template A_0 and of the appearance vectors A_i (i = 1, 2, ..., m); compute the Jacobian ∂W/∂p of W(x; p) at p = 0; compute the gradient images; and compute the projection onto the orthogonal complement of A, P = E - AA^T;
(2.6.2) Warp the face image I by W(x; p) to obtain I(W(x; p));
(2.6.3) Compute the joint gradient image J_fsic, where J = [J_0, J_1, ..., J_m] denotes the collection of gradient images J_i (i = 0, 1, ..., m), and compute the Hessian matrix H_fsic = J_fsic^T J_fsic;
(2.6.4) Compute the iteration update Δp of the shape parameters p from H_fsic and the joint gradient image, and update the warp W(x; p) ← W(x; p) ∘ W(x; Δp)^{-1}, where ∘ denotes warp composition;
(2.6.5) Compute the iteration update of the appearance parameters, Δλ = A^T (I - A_0 - A λ - J Δp), update λ ← λ + Δλ, and return to step 2.6.2 until the objective converges; at that point the input test face has been warped to the base shape, achieving three-dimensional face calibration robust to pose and expression changes.
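The linearity in λ noted in step 2.5 can be verified on synthetic data: with orthonormal appearance vectors (as PCA yields), one application of the update from step 2.6.5 recovers λ exactly when the warp is held fixed (Δp = 0 here; the shape update and warp composition are omitted from this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A0 = rng.normal(size=50)                       # standard appearance template (flattened)
A, _ = np.linalg.qr(rng.normal(size=(50, 4)))  # orthonormal appearance vectors (columns)
true_lam = np.array([1.0, -2.0, 0.5, 3.0])
img = A0 + A @ true_lam                        # synthetic shape-normalized test image
lam = np.zeros(4)
# One appearance update, dlam = A^T (I - A0 - A lam), with dp = 0:
lam = lam + A.T @ (img - A0 - A @ lam)
```

Because A^T A = E, the residual projection lands exactly on the true parameters in a single step; the nonlinear part of the search lies entirely in p.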
The beneficial effects of the invention are:
1. A coarse-to-fine three-dimensional face calibration method is proposed that calibrates faces to a common shape. The coarse calibration is simple and fast and brings the face close to frontal, ensuring good convergence of the active appearance model during iteration; the coarse-to-fine scheme improves the accuracy of three-dimensional face calibration.
2. During fine calibration, a fast inverse compositional scheme is used for the iterative search over the active appearance model parameters; it produces more quantities that do not vary across iterations, so more computation can be moved into the precomputation stage, speeding up the iterative search.
3. Converting the three-dimensional face into a depth map and performing calibration on the depth map is faster than operating directly on the three-dimensional face.
4. The depth-map-based active appearance model not only calibrates the face but also attenuates expression changes during the iterative search, improving the accuracy of subsequent face recognition.
Brief description of the drawings
Fig. 1 is the overall flow chart of the method of the invention;
Fig. 2 is a schematic diagram of the 22 manually labeled face landmark points, where (a) shows the two-dimensional landmarks and (b) the corresponding three-dimensional landmarks.
Detailed description
The invention is described in further detail below with reference to the drawings and a specific embodiment.
As shown in Fig. 1, the invention is a three-dimensional face calibration method robust to pose and expression changes, comprising the following steps:
(1) Active appearance model construction stage, comprising the following sub-steps:
(1.1) Collect training faces: acquire faces in a neutral pose (i.e., without any pose variation) with a second-generation high-precision 3D scanner based on structured light to form the face training library; manually locate the key feature points (including the eyes, nose and mouth corners), 22 in total, as shown in Fig. 2, where (a) shows the two-dimensional landmarks and (b) the corresponding three-dimensional landmarks; obtain the contour points of the face by scanning the boundary between face and background, for a total of 47 landmark points;
(1.2) Generate face depth maps: in the coordinate system of the three-dimensional face, build a grid along the x-y axes at a fixed resolution; interpolate the z coordinates onto the grid by bicubic interpolation, filling holes, to obtain a three-dimensional face mesh; project the mesh onto the x-y plane with the z coordinate as the pixel value to obtain the depth map; project the corresponding three-dimensional landmark points onto the x-y plane to obtain their two-dimensional coordinates on the depth map; experiments show that a resolution of 1 mm gives effective calibration at high speed;
(1.3) Preprocess the faces: apply median filtering to the depth map from step 1.2 to remove spike points, apply Gaussian filtering to smooth the face and suppress noise, and downsample the face to within 100,000 pixels;
(1.4) Calibrate the face shapes: represent each face shape by the mesh formed from all landmark points of a training face, and align the face shapes to a common shape by generalized Procrustes analysis;
(1.5) Generate the shape model: subtract the common shape from every training face shape and extract the principal components by principal component analysis as the shape vectors; the shape model is expressed as:
s = s_0 + Σ_{i=1}^{n} p_i s_i,   p_i = s_i^T (s - s_0)    (1)
where p_i denotes the shape parameters, s_0 the base shape, s an instance of the shape model, s = (x_1, y_1, x_2, y_2, ..., x_v, y_v)^T, s_i (i = 1, 2, ..., n) the shape vectors, n the dimension of the shape model, and v the number of landmark points;
(1.6) Normalize the training face shapes: triangulate the base shape s_0, and warp the appearance of every training face from its own shape to the base shape s_0, producing faces with the base shape; the warp is piecewise linear, realized as an affine transformation between corresponding triangles of the two mesh shapes;
(1.7) Generate the active appearance model: from the shape-normalized training faces of step 1.6, extract the principal components by principal component analysis as the appearance vectors; the appearance model is expressed as:
M(x) = A_0(x) + Σ_{i=1}^{m} λ_i A_i(x),   λ_i = A_i(x)^T (M(x) - A_0(x))    (2)
where λ_i denotes the appearance-model parameters, M(x) an instance of the appearance model, A_0(x) the standard appearance template, and A_i(x) (i = 1, 2, ..., m) the appearance vectors; M(x), A_0(x) and A_i(x) are two-dimensional images, and m is the number of appearance vectors;
(2) Test face calibration stage, comprising the following sub-steps:
(2.1) Collect the face test library: acquire three-dimensional faces with various pose and expression changes using a three-dimensional imaging device to form the face test library;
(2.2) Generate face depth maps: for every three-dimensional face in the test library, build a grid along the x-y axes at a fixed resolution in the coordinate system of the face; interpolate the z coordinates onto the grid by bicubic interpolation, filling holes, to obtain a three-dimensional face mesh; project the mesh onto the x-y plane with the z coordinate as the pixel value to obtain the depth map;
(2.3) Preprocess the faces: apply median filtering to the depth map from step 2.2 to remove spike points, apply Gaussian filtering to smooth the face and suppress noise, and downsample the face to within 100,000 pixels;
(2.4) Coarsely calibrate the faces: the active appearance model search requires an accurate initialization to avoid converging to a local minimum, which is especially important when the face carries a large pose; the invention therefore first applies a nose-tip detection and coarse calibration method based on a depth-map average nose model, detecting the nose tip of the face and coarsely calibrating it, with the following sub-steps:
(2.4.1) Select the faces with neutral pose and neutral expression in the training library, superpose and average them with the nose tip as the reference to obtain an average face, and segment the nose region from the average face as the average nose model;
(2.4.2) Rotate every three-dimensional face in the test library around the Y axis by an angle β to obtain a series of rotated faces, where β ∈ [-90°, 90°] in steps of 6°, so every face in the test library yields R = 31 rotation angles; the rotation formula is:
[ x_i^β ]   [  cos β   0   sin β ] [ x_i ]
[ y_i^β ] = [    0     1     0   ] [ y_i ]    (3)
[ z_i^β ]   [ -sin β   0   cos β ] [ z_i ]
where a three-dimensional point is written (x_i, y_i, z_i) (i = 1, 2, ..., N) and the corresponding rotated point is (x_i^β, y_i^β, z_i^β); following the method of step 2.2, every rotated three-dimensional face is converted into a depth map;
(2.4.3) Perform template matching between every face depth map from step 2.4.2 and the average nose model from step 2.4.1, with normalized cross-correlation as the criterion, obtaining a normalized cross-correlation map;
(2.4.4) Compute the maximum correlation coefficient of the normalized cross-correlation map; its position in the map is the nose-region position, which gives the corresponding nose tip and rotation angle β_t;
(2.4.5) Rotate the test face around the Y axis by β_t according to formula (3) to complete the coarse calibration; after coarse calibration the pose of the face is close to neutral, and the face is converted into a near-frontal depth map, i.e., the angles around the X, Y and Z axes all lie within [-20°, 20°], which meets the face-angle calibration range of the active appearance model;
(2.4.6) Draw a circle centered at the detected nose tip with the face width as diameter to segment the valid face region, further reducing the initialization image size and ensuring good convergence of the active appearance model during iteration;
(2.5) Match the coarsely calibrated face to the active appearance model: set up an objective function and compute the model parameters iteratively; the minimized objective function can be expressed as:
arg min_{p,λ} ||I(W(x; p)) - A_0 - A λ||^2    (4)
where W(x; p) denotes the piecewise-linear warp function, I the test face depth map, A = [A_1, A_2, ..., A_m] the matrix of appearance vectors, and λ = [λ_1, λ_2, ..., λ_m] the appearance parameters; the objective ||I(W(x; p)) - A_0 - A λ||^2 is linear in λ, so λ can be updated additively, λ ← λ + Δλ, but it is not linear in p, so p is updated by inverse composition, W(x; p) ← W(x; p) ∘ W(x; Δp)^{-1};
(2.6) Update the parameters by iterative search with the inverse compositional scheme until the objective converges; in the inverse compositional scheme the iteration update of the parameters p is applied on the side of the appearance instance M(x); the concrete sub-steps are:
(2.6.1) Precomputation: compute the gradients of the standard appearance template A_0 and of the appearance vectors A_i (i = 1, 2, ..., m); compute the Jacobian ∂W/∂p of W(x; p) at p = 0; compute the gradient images; and compute the projection onto the orthogonal complement of A, P = E - AA^T;
(2.6.2) Warp the face image I by W(x; p) to obtain I(W(x; p));
(2.6.3) Compute the joint gradient image J_fsic, where J = [J_0, J_1, ..., J_m] denotes the collection of gradient images J_i (i = 0, 1, ..., m), and compute the Hessian matrix H_fsic = J_fsic^T J_fsic;
(2.6.4) Compute the iteration update Δp of the shape parameters p from H_fsic and the joint gradient image, and update the warp W(x; p) ← W(x; p) ∘ W(x; Δp)^{-1}, where ∘ denotes warp composition;
(2.6.5) Compute the iteration update of the appearance parameters, Δλ = A^T (I - A_0 - A λ - J Δp), update λ ← λ + Δλ, and return to step 2.6.2 until the objective converges; at that point the input test face has been warped to the base shape, achieving three-dimensional face calibration robust to pose and expression changes.

Claims (1)

1. A three-dimensional face calibration method robust to pose and expression changes, characterized in that the method comprises the following steps:
(1) Active appearance model construction stage, comprising the following sub-steps:
(1.1) Collect training faces: acquire faces in a neutral pose with a three-dimensional imaging device to form the face training library; manually locate the key feature points, and obtain the contour points of the face by scanning the boundary between face and background, yielding a sufficient set of landmark points;
(1.2) Generate face depth maps: in the coordinate system of the three-dimensional face, build a grid along the x-y axes at a fixed resolution; interpolate the z coordinates onto the grid by bicubic interpolation, filling holes, to obtain a three-dimensional face mesh; project the mesh onto the x-y plane with the z coordinate as the pixel value to obtain the depth map; project the corresponding three-dimensional landmark points onto the x-y plane to obtain their two-dimensional coordinates on the depth map;
(1.3) Preprocess the faces: apply median filtering to the depth map from step 1.2 to remove spike points, apply Gaussian filtering to smooth the face and suppress noise, and downsample the face to within 100,000 pixels;
(1.4) Calibrate the face shapes: represent each face shape by the mesh formed from all landmark points of a training face, and align the face shapes to a common shape by generalized Procrustes analysis;
(1.5) Generate the shape model: subtract the common shape from every training face shape and extract the principal components by principal component analysis as the shape vectors; the shape model is expressed as:
s = s_0 + Σ_{i=1}^{n} p_i s_i,   p_i = s_i^T (s - s_0)    (1)
where p_i denotes the shape parameters, s_0 the base shape, s an instance of the shape model, s = (x_1, y_1, x_2, y_2, ..., x_v, y_v)^T, s_i (i = 1, 2, ..., n) the shape vectors, n the dimension of the shape model, and v the number of landmark points;
(1.6) Normalize the training face shapes: triangulate the base shape s_0, and warp the appearance of every training face from its own shape to the base shape s_0, producing faces with the base shape; the warp is piecewise linear, realized as an affine transformation between corresponding triangles of the two mesh shapes;
(1.7) Generate the active appearance model: from the shape-normalized training faces of step 1.6, extract the principal components by principal component analysis as the appearance vectors; the appearance model is expressed as:
M(x) = A_0(x) + Σ_{i=1}^{m} λ_i A_i(x),   λ_i = A_i(x)^T (M(x) - A_0(x))    (2)
where λ_i denotes the appearance-model parameters, M(x) an instance of the appearance model, A_0(x) the standard appearance template, and A_i(x) (i = 1, 2, ..., m) the appearance vectors; M(x), A_0(x) and A_i(x) are two-dimensional images, and m is the number of appearance vectors;
(2) Test face calibration stage, comprising the following sub-steps:
(2.1) Collect the face test library: acquire three-dimensional faces with various pose and expression changes using a three-dimensional imaging device to form the face test library;
(2.2) Generate face depth maps: for every three-dimensional face in the test library, build a grid along the x-y axes at a fixed resolution in the coordinate system of the face; interpolate the z coordinates onto the grid by bicubic interpolation, filling holes, to obtain a three-dimensional face mesh; project the mesh onto the x-y plane with the z coordinate as the pixel value to obtain the depth map;
(2.3) Preprocess the faces: apply median filtering to the depth map from step 2.2 to remove spike points, apply Gaussian filtering to smooth the face and suppress noise, and downsample the face to within 100,000 pixels;
(2.4) Coarsely calibrate the faces, with the following sub-steps:
(2.4.1) Select the faces with neutral pose and neutral expression in the training library, superpose and average them with the nose tip as the reference to obtain an average face, and segment the nose region from the average face as the average nose model;
(2.4.2) Rotate every three-dimensional face in the test library around the Y axis by an angle β to obtain a series of rotated faces, where β ∈ [-90°, 90°] in steps of 6°, so every face in the test library yields R = 31 rotation angles; the rotation formula is:
$$\begin{bmatrix} x_i^{\beta} \\ y_i^{\beta} \\ z_i^{\beta} \end{bmatrix} = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} \qquad (3)$$
where a three-dimensional coordinate point is written (x_i, y_i, z_i) (i = 1, 2, …, N) and the corresponding rotated point is (x_i^β, y_i^β, z_i^β); each rotated three-dimensional face is then converted into a depth map by the method of step 2.2;
(2.4.3) Perform template matching, with normalized cross-correlation as the criterion, between the face depth maps from step 2.4.2 and the average nose model from step 2.4.1, obtaining a normalized cross-correlation map;
(2.4.4) Find the maximum correlation coefficient of the normalized cross-correlation map; its position in the map is the nasal region, giving the corresponding nose tip and rotation angle β_t;
(2.4.5) Rotate the test face about the Y axis by angle β_t according to formula (3) to complete the coarse calibration;
(2.4.6) With the detected nose tip as the center, draw a circle whose diameter is the face width and segment the effective face region;
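The core of steps 2.4.2–2.4.4 can be sketched as follows: equation (3) rotates the point cloud about the Y axis, and a brute-force normalized cross-correlation locates the nose template in a depth map. In practice one would run the matcher over all R = 31 candidate angles and keep the β with the highest peak; function names here are illustrative:

```python
import numpy as np

def rotate_y(points, beta_deg):
    """Equation (3): rotate an (N, 3) point cloud about the Y axis."""
    b = np.deg2rad(beta_deg)
    R = np.array([[ np.cos(b), 0.0, np.sin(b)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(b), 0.0, np.cos(b)]])
    return points @ R.T

def ncc_match(depth, nose_template):
    """Normalized cross-correlation of a depth map against the nose
    template; returns the best coefficient and the top-left position
    of the best-matching window."""
    th, tw = nose_template.shape
    t = nose_template - nose_template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    H, W = depth.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            w = depth[r:r + th, c:c + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * tn
            if denom > 0:
                score = float((wc * t).sum() / denom)
                if score > best:
                    best, best_pos = score, (r, c)
    return best, best_pos
```

The double loop is for clarity; an FFT-based correlation (or OpenCV's `matchTemplate` with `TM_CCOEFF_NORMED`) computes the same map far faster on full-size depth images.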
(2.5) Match the coarsely calibrated face against the active appearance model: set up an objective function and compute the model parameters iteratively; the minimization objective can be expressed as:
$$\arg\min_{p,\lambda}\; \left\| I(W(x; p)) - A_0 - A\lambda \right\|^2 \qquad (4)$$
where W(x; p) is the piecewise-linear warping function, I is the test face depth map, A = [A_1, A_2, …, A_m] is the matrix of appearance vectors, and λ = [λ_1, λ_2, …, λ_m] is the vector of appearance parameters;
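Equation (4) evaluated directly, with all images flattened to vectors (an illustrative helper, not from the text; here the columns of A are the appearance vectors A_i):

```python
import numpy as np

def aam_objective(I_warped, A0, A, lam):
    """Squared residual of equation (4) between the warped test depth
    image I(W(x; p)) and the appearance model A0 + A @ lam.
    I_warped, A0: (n_pixels,) vectors; A: (n_pixels, m); lam: (m,)."""
    r = I_warped - A0 - A @ lam
    return float(r @ r)
```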
(2.6) Update the parameters by iterative search in the inverse compositional manner until the objective function converges; the sub-steps are as follows:
(2.6.1) Precomputation: compute the gradients ∇A_0 and ∇A_i of the mean appearance template A_0 and of the appearance vectors A_i (i = 1, 2, …, m); compute the Jacobian ∂W/∂p of W(x; p) at p = 0; compute the gradient images J_i = ∇A_i · (∂W/∂p) (i = 0, 1, …, m); and compute the orthogonal-complement projection of A, P = E − AA^T;
(2.6.2) Warp the face image I by W(x; p) to obtain I(W(x; p));
(2.6.3) Compute the projected gradient image J_fsic = P·J, where J = [J_0, J_1, …, J_m] collects the gradient images J_i (i = 0, 1, …, m), and compute the Hessian matrix H_fsic = J_fsic^T J_fsic;
(2.6.4) Compute the iterative update of the shape parameter p, Δp = H_fsic^{−1} J_fsic^T (I(W(x; p)) − A_0 − Aλ), and update the warp by composition, W(x; p) ← W(x; p) ∘ W(x; Δp)^{−1}, where ∘ denotes warp composition;
(2.6.5) Compute the iterative update of the appearance parameter λ, Δλ = A^T (I − A_0 − Aλ − J·Δp), update λ ← λ + Δλ, and return to step 2.6.2 to iterate until the objective function converges; at this point the input test face has been warped to the base shape, achieving three-dimensional face calibration robust to pose and expression changes.
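The linear-algebra core of one iteration of steps 2.6.1–2.6.5 can be sketched as below, treating the warp abstractly (warping the image and composing W(x; p) ∘ W(x; Δp)^(−1) are mesh-specific and omitted); the function name and array shapes are assumptions:

```python
import numpy as np

def fast_sic_updates(I_w, A0, A, lam, J):
    """One round of the parameter updates, with the warp abstracted
    away: I_w is the current warped test image I(W(x; p)) flattened
    to (n_pixels,), A (n_pixels, m) holds the appearance vectors as
    columns, and J (n_pixels, n_p) holds the gradient images already
    multiplied by the warp Jacobian.  Returns (dp, updated lam)."""
    n = A.shape[0]
    P = np.eye(n) - A @ A.T                 # project-out matrix (2.6.1)
    J_fsic = P @ J                          # projected gradient (2.6.3)
    H = J_fsic.T @ J_fsic                   # Hessian (2.6.3)
    r = I_w - A0 - A @ lam                  # current residual
    dp = np.linalg.solve(H, J_fsic.T @ r)   # shape update (2.6.4)
    dlam = A.T @ (r - J @ dp)               # appearance update (2.6.5)
    return dp, lam + dlam
```

Forming the dense n×n matrix P is for clarity only; in practice one applies the projection as J − A(AᵀJ), which never materializes P.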
CN201510067374.XA 2015-02-09 2015-02-09 A three-dimensional face calibration method robust to pose and expression changes Active CN104657713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510067374.XA CN104657713B (en) 2015-02-09 2015-02-09 A three-dimensional face calibration method robust to pose and expression changes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510067374.XA CN104657713B (en) 2015-02-09 2015-02-09 A three-dimensional face calibration method robust to pose and expression changes

Publications (2)

Publication Number Publication Date
CN104657713A true CN104657713A (en) 2015-05-27
CN104657713B CN104657713B (en) 2017-11-24

Family

ID=53248814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510067374.XA Active CN104657713B (en) 2015-02-09 2015-02-09 A three-dimensional face calibration method robust to pose and expression changes

Country Status (1)

Country Link
CN (1) CN104657713B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106667496A (en) * 2017-02-10 2017-05-17 广州帕克西软件开发有限公司 Face data measuring method and device
CN107707451A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Instant communicating method and device
CN108062545A (en) * 2018-01-30 2018-05-22 北京搜狐新媒体信息技术有限公司 A kind of method and device of face alignment
CN108446672A (en) * 2018-04-20 2018-08-24 武汉大学 A kind of face alignment method based on the estimation of facial contours from thick to thin
CN108875335A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 The method and authenticating device and non-volatile memory medium of face unlock and typing expression and facial expressions and acts
CN110096999A (en) * 2019-04-29 2019-08-06 达闼科技(北京)有限公司 Chessboard recognition methods, chessboard recognition device, electronic equipment and can storage medium
CN110188688A (en) * 2019-05-30 2019-08-30 网易(杭州)网络有限公司 Postural assessment method and device
CN110826045A (en) * 2018-08-13 2020-02-21 深圳市商汤科技有限公司 Authentication method and device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101930537A (en) * 2010-08-18 2010-12-29 北京交通大学 Method and system for identifying three-dimensional face based on bending invariant related features
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101930537A (en) * 2010-08-18 2010-12-29 北京交通大学 Method and system for identifying three-dimensional face based on bending invariant related features
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face

Non-Patent Citations (3)

Title
GEORGIOS TZIMIROPOULOS et al.: "Optimization problems for fast AAM fitting in-the-wild", 2013 IEEE International Conference on Computer Vision *
IAIN MATTHEWS et al.: "Active Appearance Models Revisited", International Journal of Computer Vision *
RONG LIU et al.: "3D face registration by depth-based template matching and active appearance model", 2014 Sixth International Conference on Wireless Communications and Signal Processing (WCSP) *

Cited By (14)

Publication number Priority date Publication date Assignee Title
CN106667496B (en) * 2017-02-10 2020-05-05 广州帕克西软件开发有限公司 Face data measuring method and device
CN106667496A (en) * 2017-02-10 2017-05-17 广州帕克西软件开发有限公司 Face data measuring method and device
CN107707451A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Instant communicating method and device
CN108875335A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 The method and authenticating device and non-volatile memory medium of face unlock and typing expression and facial expressions and acts
US10922533B2 (en) 2017-10-23 2021-02-16 Beijing Kuangshi Technology Co., Ltd. Method for face-to-unlock, authentication device, and non-volatile storage medium
CN108875335B (en) * 2017-10-23 2020-10-09 北京旷视科技有限公司 Method for unlocking human face and inputting expression and expression action, authentication equipment and nonvolatile storage medium
CN108062545A (en) * 2018-01-30 2018-05-22 北京搜狐新媒体信息技术有限公司 A kind of method and device of face alignment
CN108062545B (en) * 2018-01-30 2020-08-28 北京搜狐新媒体信息技术有限公司 Face alignment method and device
CN108446672A (en) * 2018-04-20 2018-08-24 武汉大学 A kind of face alignment method based on the estimation of facial contours from thick to thin
CN108446672B (en) * 2018-04-20 2021-12-17 武汉大学 Face alignment method based on shape estimation of coarse face to fine face
CN110826045A (en) * 2018-08-13 2020-02-21 深圳市商汤科技有限公司 Authentication method and device, electronic equipment and storage medium
CN110096999A (en) * 2019-04-29 2019-08-06 达闼科技(北京)有限公司 Chessboard recognition methods, chessboard recognition device, electronic equipment and can storage medium
CN110188688A (en) * 2019-05-30 2019-08-30 网易(杭州)网络有限公司 Postural assessment method and device
CN110188688B (en) * 2019-05-30 2021-12-14 网易(杭州)网络有限公司 Posture evaluation method and device

Also Published As

Publication number Publication date
CN104657713B (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN104657713A (en) Three-dimensional face calibrating method capable of resisting posture and facial expression changes
Ye et al. Accurate 3d pose estimation from a single depth image
JP4785880B2 (en) System and method for 3D object recognition
CN100559398C (en) Automatic deepness image registration method
CN102567703B (en) Hand motion identification information processing method based on classification characteristic
Yuan et al. 3D point cloud matching based on principal component analysis and iterative closest point algorithm
CN101339669A (en) Three-dimensional human face modelling approach based on front side image
CN104680135A (en) Three-dimensional human face mark point detection method capable of resisting expression, posture and shielding changes
CN101916445A (en) Affine parameter estimation-based image registration method
CN103826032A (en) Depth map post-processing method
CN106803094A (en) Threedimensional model shape similarity analysis method based on multi-feature fusion
CN111259739B (en) Human face pose estimation method based on 3D human face key points and geometric projection
CN112101247B (en) Face pose estimation method, device, equipment and storage medium
CN104700452A (en) Three-dimensional body posture model matching method for any posture
CN107730587A (en) One kind is based on picture quick three-dimensional Interactive Modeling method
CN103994765A (en) Positioning method of inertial sensor
CN104677347A (en) Indoor mobile robot capable of producing 3D navigation map based on Kinect
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
CN106547724A (en) Theorem in Euclid space coordinate transformation parameter acquisition methods based on minimum point set
CN104318552A (en) Convex hull projection graph matching based model registration method
CN106408654B (en) A kind of creation method and system of three-dimensional map
Zhang et al. Pose optimization based on integral of the distance between line segments
An et al. Self-adaptive polygon mesh reconstruction based on ball-pivoting algorithm
CN102930586A (en) Interactive geometry deformation method based on linear rotation invariant differential coordinates
CN109344710B (en) Image feature point positioning method and device, storage medium and processor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant