Summary of the Invention
In view of the problems of the prior art, the present invention provides a three-dimensional human face mesh model processing method and device, to overcome the defect of the prior art that a three-dimensional human face mesh model established by overall deformation based on a facial expression database has a relatively low matching degree with the corresponding two-dimensional human face image.
A first aspect of the present invention provides a three-dimensional human face mesh model processing method, including:
obtaining an initial three-dimensional human face mesh model corresponding to an original two-dimensional human face image, where the initial three-dimensional human face mesh model includes second expression feature points corresponding to first expression feature points of the original two-dimensional human face image;
calculating a camera parameter matrix of the initial three-dimensional human face mesh model according to formula (1):

    P = argmin_P Σ_{i=1}^{N} ||P·X_i - x_i||^2    (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional human face mesh model, x_i is the i-th first expression feature point, corresponding to the second expression feature point X_i, on the original two-dimensional human face image, and N is the number of the first expression feature points and of the second expression feature points; and
mapping the second expression feature points on the initial three-dimensional human face mesh model onto the original two-dimensional human face image according to the calculated camera parameter matrix, to judge the matching degree between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional human face mesh model according to the judgment result.
In a first possible implementation of the first aspect, the mapping the second expression feature points on the initial three-dimensional human face mesh model onto the original two-dimensional human face image according to the calculated camera parameter matrix, to judge the matching degree between the second expression feature points and the first expression feature points, and adjusting the model to be processed according to the judgment result includes:
calculating a matching error between the second expression feature points and the first expression feature points according to formula (2):

    Err = Σ_{i=1}^{N} w_i·||P·X_i - x_i||^2    (2)

where Err is the matching error, and w_i is the weight coefficient of the i-th feature point pair X_i and x_i;
judging whether the matching error is greater than or equal to a preset threshold; and
if the matching error is greater than or equal to the preset threshold, adjusting the initial three-dimensional human face mesh model, so that the matching error between the second expression feature points and the first expression feature points on the adjusted three-dimensional human face mesh model is less than the preset threshold.
According to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the adjusting the initial three-dimensional human face mesh model includes:
calculating a geodesic distance from the second expression feature point X_i to each grid vertex X_j on the initial three-dimensional human face mesh model, where i is not equal to j;
fixing the z coordinate of the second expression feature point X_i on the initial three-dimensional human face mesh model, and modifying the x and y coordinates of the second expression feature point X_i using a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
determining, using a second preset algorithm and with the geodesic distance as a constraint, each grid vertex X_j' corresponding to the third expression feature point X_i'; and
adjusting the initial three-dimensional human face mesh model according to the third expression feature point X_i' and each grid vertex X_j' corresponding to the third expression feature point X_i'.
According to the first aspect, or the first or second possible implementation of the first aspect, in a third possible implementation of the first aspect, the original two-dimensional human face image includes a target two-dimensional human face image and a reference two-dimensional human face image; and
the obtaining an initial three-dimensional human face mesh model corresponding to an original two-dimensional human face image includes:
extracting human face expression feature points of the target two-dimensional human face image and human face expression feature points of the reference two-dimensional human face image, where the human face expression feature points include face contour feature points and the first expression feature points;
determining a near-frontal face image according to the face contour feature points of the target two-dimensional human face image and the face contour feature points of the reference two-dimensional human face image, where the near-frontal face image is the target two-dimensional human face image or the reference two-dimensional human face image;
deforming a target neutral face model determined from a neutral face library according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
deforming each preset expression model included in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
determining a first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional human face image, and determining a second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional human face image; and
merging each expression model according to the first weight coefficients, to obtain a three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and merging each expression model according to the second weight coefficients, to obtain a three-dimensional human face mesh model corresponding to the reference two-dimensional human face image.
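The merging of expression models by weight coefficients described above can be sketched as a weighted sum of per-expression vertex offsets over a neutral mesh. The array shapes and the helper name merge_blendshapes are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def merge_blendshapes(neutral, deltas, weights):
    """Blend a neutral mesh with weighted expression offsets.

    neutral : (V, 3) vertex positions of the neutral face model
    deltas  : (K, V, 3) per-expression offsets from the neutral model
    weights : (K,) weight coefficients, one per expression model
    """
    neutral = np.asarray(neutral, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Weighted sum of expression offsets added to the neutral geometry.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Tiny example: one vertex, two expression models.
neutral = np.array([[0.0, 0.0, 0.0]])
deltas = np.array([[[1.0, 0.0, 0.0]], [[0.0, 2.0, 0.0]]])
merged = merge_blendshapes(neutral, deltas, [0.5, 0.25])
# merged -> [[0.5, 0.5, 0.0]]
```

The same routine serves both merges, called once with the first weight coefficients and once with the second.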
According to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the preset expression library includes general blendshape models.
According to the third or fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the determining a near-frontal face image according to the face contour feature points of the target two-dimensional human face image and the face contour feature points of the reference two-dimensional human face image includes:
calculating a face contour curvature of the target two-dimensional human face image according to the face contour feature points of the target two-dimensional human face image, and calculating a face contour curvature of the reference two-dimensional human face image according to the face contour feature points of the reference two-dimensional human face image; and
determining the image with the smaller face contour curvature as the near-frontal face image.
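The curvature comparison above can be sketched with a standard discrete curvature estimate on the contour polyline (turning angle divided by local arc length). The patent does not fix a curvature formula, so this estimator and the function names are assumptions for illustration:

```python
import numpy as np

def mean_contour_curvature(points):
    """Approximate the mean curvature of a 2D contour polyline.

    points : (N, 2) ordered face contour feature points.
    Uses the turning angle at each interior point divided by the
    local arc length -- a common discrete curvature estimate.
    """
    pts = np.asarray(points, dtype=float)
    curvatures = []
    for i in range(1, len(pts) - 1):
        a, b, c = pts[i - 1], pts[i], pts[i + 1]
        v1, v2 = b - a, c - b
        n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
        # Turning angle between consecutive contour segments.
        cos_t = np.clip(np.dot(v1, v2) / (n1 * n2), -1.0, 1.0)
        curvatures.append(np.arccos(cos_t) / (0.5 * (n1 + n2)))
    return float(np.mean(curvatures))

def pick_near_frontal(contour_a, contour_b):
    """Return 0 if contour A has the smaller curvature, else 1."""
    if mean_contour_curvature(contour_a) <= mean_contour_curvature(contour_b):
        return 0
    return 1

# A straight contour (frontal-like) versus a sharply bent one.
flat = [(0, 0), (1, 0), (2, 0), (3, 0)]
bent = [(0, 0), (1, 0), (1, 1), (0, 1)]
# pick_near_frontal(flat, bent) -> 0
```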
According to the third, fourth or fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, after the adjusting the initial three-dimensional human face mesh model according to the judgment result, the method further includes:
deforming the target two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and deforming the reference two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the reference two-dimensional human face image; and
merging the deformed target two-dimensional human face image and the deformed reference two-dimensional human face image, to transfer the expression on the reference two-dimensional human face image onto the target two-dimensional human face image.
A second aspect of the present invention provides a three-dimensional human face mesh model processing device, including:
an acquisition module, configured to obtain an initial three-dimensional human face mesh model corresponding to an original two-dimensional human face image, where the initial three-dimensional human face mesh model includes second expression feature points corresponding to first expression feature points of the original two-dimensional human face image;
a computing module, configured to calculate a camera parameter matrix of the initial three-dimensional human face mesh model according to formula (1):

    P = argmin_P Σ_{i=1}^{N} ||P·X_i - x_i||^2    (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional human face mesh model, x_i is the i-th first expression feature point, corresponding to the second expression feature point X_i, on the original two-dimensional human face image, and N is the number of the first expression feature points and of the second expression feature points; and
a judging module, configured to map the second expression feature points on the initial three-dimensional human face mesh model onto the original two-dimensional human face image according to the calculated camera parameter matrix, to judge the matching degree between the second expression feature points and the first expression feature points, and to adjust the initial three-dimensional human face mesh model according to the judgment result.
In a first possible implementation of the second aspect, the judging module includes:
a computing unit, configured to calculate a matching error between the second expression feature points and the first expression feature points according to formula (2):

    Err = Σ_{i=1}^{N} w_i·||P·X_i - x_i||^2    (2)

where Err is the matching error, and w_i is the weight coefficient of the i-th feature point pair X_i and x_i;
a judging unit, configured to judge whether the matching error is greater than or equal to a preset threshold; and
an adjustment unit, configured to, if the matching error is greater than or equal to the preset threshold, adjust the initial three-dimensional human face mesh model, so that the matching error between the second expression feature points and the first expression feature points on the adjusted three-dimensional human face mesh model is less than the preset threshold.
According to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the adjustment unit includes:
a computing subunit, configured to calculate a geodesic distance from the second expression feature point X_i to each grid vertex X_j on the initial three-dimensional human face mesh model, where i is not equal to j;
a first adjustment subunit, configured to fix the z coordinate of the second expression feature point X_i on the initial three-dimensional human face mesh model, and modify the x and y coordinates of the second expression feature point X_i using a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
a determining subunit, configured to determine, using a second preset algorithm and with the geodesic distance as a constraint, each grid vertex X_j' corresponding to the third expression feature point X_i'; and
a second adjustment subunit, configured to adjust the initial three-dimensional human face mesh model according to the third expression feature point X_i' and each grid vertex X_j' corresponding to the third expression feature point X_i'.
According to the second aspect, or the first or second possible implementation of the second aspect, in a third possible implementation of the second aspect, the original two-dimensional human face image includes a target two-dimensional human face image and a reference two-dimensional human face image; and
the acquisition module includes:
an extraction unit, configured to extract human face expression feature points of the target two-dimensional human face image and human face expression feature points of the reference two-dimensional human face image, where the human face expression feature points include face contour feature points and the first expression feature points;
a first determining unit, configured to determine a near-frontal face image according to the face contour feature points of the target two-dimensional human face image and the face contour feature points of the reference two-dimensional human face image, where the near-frontal face image is the target two-dimensional human face image or the reference two-dimensional human face image;
a first deformation unit, configured to deform a target neutral face model determined from a neutral face library according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
a second deformation unit, configured to deform each preset expression model included in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
a second determining unit, configured to determine a first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional human face image, and determine a second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional human face image; and
a merging unit, configured to merge each expression model according to the first weight coefficients, to obtain a three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and merge each expression model according to the second weight coefficients, to obtain a three-dimensional human face mesh model corresponding to the reference two-dimensional human face image.
According to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the preset expression library includes general blendshape models.
According to the third or fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the first determining unit is specifically configured to:
calculate a face contour curvature of the target two-dimensional human face image according to the face contour feature points of the target two-dimensional human face image, and calculate a face contour curvature of the reference two-dimensional human face image according to the face contour feature points of the reference two-dimensional human face image; and
determine the image with the smaller face contour curvature as the near-frontal face image.
According to the third, fourth or fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the device further includes:
a deformation module, configured to deform the target two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and deform the reference two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the reference two-dimensional human face image; and
a merging module, configured to merge the deformed target two-dimensional human face image and the deformed reference two-dimensional human face image, to transfer the expression on the reference two-dimensional human face image onto the target two-dimensional human face image.
With the three-dimensional human face mesh model processing method and device provided by the present invention, after the initial three-dimensional human face mesh model corresponding to the original two-dimensional human face image is obtained, the second expression feature points on the initial three-dimensional human face mesh model are mapped onto the original two-dimensional human face image according to the camera parameters of the initial three-dimensional human face mesh model, the matching degree between the second expression feature points and the first expression feature points is judged, and the initial three-dimensional human face mesh model is adjusted according to the judgment result. The matching degree between the initial three-dimensional human face mesh model and the original two-dimensional human face image is judged according to the camera parameters, so that the initial three-dimensional human face mesh model is adjusted when the matching degree is low, thereby ensuring that the adjusted three-dimensional human face mesh model has a better matching degree with the original two-dimensional human face image.
Description of Embodiments
Fig. 1 is a flowchart of the three-dimensional human face mesh model processing method provided by Embodiment 1 of the present invention. As shown in Fig. 1, the method includes:
Step 101: obtain an initial three-dimensional human face mesh model corresponding to an original two-dimensional human face image, where the initial three-dimensional human face mesh model includes second expression feature points corresponding to first expression feature points of the original two-dimensional human face image.
In this embodiment, the above three-dimensional human face mesh model processing method is performed by a processing apparatus. The processing apparatus is preferably integrated in a terminal device such as a PC or a notebook computer, and can be used to perform human face expression transfer processing on two input images. The method provided by this embodiment is applicable to adjusting a three-dimensional human face mesh model obtained in a prior-art manner, and is also applicable to adjusting a three-dimensional human face mesh model obtained by the method provided by the embodiment shown in Fig. 3; this embodiment imposes no limitation in this respect.
For simplicity of description, no matter which of the above manners is used, the three-dimensional human face mesh model obtained is referred to in this embodiment as the initial three-dimensional human face mesh model. The initial three-dimensional human face mesh model corresponds to an original two-dimensional human face image. The method provided by this embodiment is preferably applied in the application scenario of human face expression transfer; in the processing of human face expression transfer, the human face expression on a reference two-dimensional human face image needs to be transferred onto a target two-dimensional human face image. To perform human face expression transfer, the three-dimensional human face mesh models of the target two-dimensional human face image and of the reference two-dimensional human face image first need to be reconstructed respectively. Therefore, the original two-dimensional human face image described in this embodiment may be, for example, the reference two-dimensional human face image or the target two-dimensional human face image; correspondingly, the initial three-dimensional human face mesh model may be, for example, the three-dimensional human face mesh model corresponding to the reference two-dimensional human face image or the three-dimensional human face mesh model corresponding to the target two-dimensional human face image. Since the method provided by this embodiment is applicable to both three-dimensional human face mesh models, no distinction is made below.
The processing apparatus first obtains the initial three-dimensional human face mesh model corresponding to the original two-dimensional human face image, where the initial three-dimensional human face mesh model includes the second expression feature points corresponding to the first expression feature points of the original two-dimensional human face image. In this embodiment, for example, a three-dimensional human face mesh model obtained in the prior art may be used as the initial three-dimensional human face mesh model and input into the processing apparatus, so that the processing apparatus performs subsequent adjustment processing according to the second expression feature points included in the initial three-dimensional human face mesh model.
The first expression feature points of the original two-dimensional human face image mainly refer to the facial features whose motion forms change when the face shows different expressions; the motion forms of these facial features, such as the forms of the nose, mouth, eyebrows and eyes, constitute the first expression feature points of the human face expression. The second expression feature points corresponding to the first expression feature points may be marked on the initial three-dimensional human face mesh model manually or automatically.
Step 102: calculate a camera parameter matrix of the initial three-dimensional human face mesh model according to formula (1):

    P = argmin_P Σ_{i=1}^{N} ||P·X_i - x_i||^2    (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional human face mesh model, x_i is the i-th first expression feature point, corresponding to the second expression feature point X_i, on the original two-dimensional human face image, and N is the number of the first expression feature points and of the second expression feature points.
Step 103: map the second expression feature points on the initial three-dimensional human face mesh model onto the original two-dimensional human face image according to the calculated camera parameter matrix, to judge the matching degree between the second expression feature points and the first expression feature points, and adjust the initial three-dimensional human face mesh model according to the judgment result.
In this embodiment, to judge whether the initial three-dimensional human face mesh model matches the corresponding original two-dimensional human face image, the second expression feature points on the initial three-dimensional human face mesh model first need to be mapped onto the corresponding original two-dimensional human face image; the matching error between the second expression feature points and the first expression feature points on the corresponding original two-dimensional human face image is then judged, and the initial three-dimensional human face mesh model is adjusted afterwards according to the judgment result.
To map the second expression feature points on the initial three-dimensional human face mesh model onto the corresponding original two-dimensional human face image, a parameter needs to be used, namely the camera parameter, which is typically represented in the form of a parameter matrix. Specifically, the camera parameter matrix can be obtained by solving formula (1), which means that the camera parameter matrix should, as far as possible, minimize the distances between the mapped second expression feature points and the first expression feature points. After the camera parameter matrix is obtained, the second expression feature points on the initial three-dimensional human face mesh model are mapped onto the corresponding two-dimensional human face image using this matrix, to judge the matching degree between the second expression feature points and the first expression feature points, and the initial three-dimensional human face mesh model is adjusted according to the judgment result.
In this embodiment, the matching degree between the initial three-dimensional human face mesh model and the original two-dimensional human face image is judged according to the camera parameters, so that the initial three-dimensional human face mesh model is adjusted when the matching degree is low, thereby ensuring that the adjusted three-dimensional human face mesh model has a better matching degree with the original two-dimensional human face image.
Further, Fig. 2 is a flowchart of the processing procedure of step 103 of the embodiment shown in Fig. 1. As shown in Fig. 2, in step 103 of Fig. 1, the mapping the second expression feature points on the initial three-dimensional human face mesh model onto the original two-dimensional human face image according to the calculated camera parameter matrix, to judge the matching degree between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional human face mesh model according to the judgment result includes:
Step 201: calculate a matching error between the second expression feature points and the first expression feature points according to formula (2):

    Err = Σ_{i=1}^{N} w_i·||P·X_i - x_i||^2    (2)

where Err is the matching error, and w_i is the weight coefficient of the i-th feature point pair X_i and x_i.
Step 202: judge whether the matching error is greater than or equal to a preset threshold; if so, perform step 203, and otherwise end.
After the camera parameters of the initial three-dimensional human face mesh model are obtained, the second expression feature points on the initial three-dimensional human face mesh model are mapped onto the original two-dimensional human face image according to the calculated camera parameter matrix, so as to judge, according to formula (2), the matching error between the second expression feature points on the initial three-dimensional human face mesh model and the first expression feature points on the original two-dimensional human face image. In formula (2), because the depth and pixel grayscale of each pair of expression feature points differ, each pair of expression feature points has a different weight coefficient.
It is then judged whether the matching error is greater than or equal to the preset threshold; if so, the initial three-dimensional human face mesh model needs to be adjusted according to steps 203 to 206, and otherwise no adjustment is needed.
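The weighted error of formula (2) and the threshold check of step 202 can be sketched as follows; the squared-distance form and the affine 2x4 camera matrix are assumptions carried over from the reconstruction of formula (1):

```python
import numpy as np

def matching_error(P, X3d, x2d, weights):
    """Weighted matching error of formula (2):
    Err = sum_i w_i * ||P [X_i; 1] - x_i||^2 (squared-distance form
    assumed; the text only states a weighted distance minimization).
    """
    X3d = np.asarray(X3d, dtype=float)
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
    proj = Xh @ np.asarray(P, dtype=float).T
    residuals = proj - np.asarray(x2d, dtype=float)
    # Per-pair squared distances, scaled by the pair's weight w_i.
    return float(np.sum(np.asarray(weights, dtype=float) *
                        np.sum(residuals ** 2, axis=1)))

def needs_adjustment(err, threshold):
    """Step 202: adjust the model only when the error reaches the threshold."""
    return err >= threshold

P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
X = np.array([[1.0, 2.0, 5.0], [3.0, 4.0, 5.0]])
x = np.array([[1.0, 2.0], [3.0, 5.0]])   # second pair is off by 1 in y
err = matching_error(P, X, x, weights=[1.0, 0.5])
# err == 0.5, so needs_adjustment(err, 0.1) -> True
```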
Step 203: calculate a geodesic distance from the second expression feature point X_i to each grid vertex X_j on the initial three-dimensional human face mesh model, where i is not equal to j.
Step 204: fix the z coordinate of the second expression feature point X_i on the initial three-dimensional human face mesh model, and modify the x and y coordinates of the second expression feature point X_i using a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i.
Step 205: determine, using a second preset algorithm and with the geodesic distance as a constraint, each grid vertex X_j' corresponding to the third expression feature point X_i'.
Step 206: adjust the initial three-dimensional human face mesh model according to the third expression feature point X_i' and each grid vertex X_j' corresponding to the third expression feature point X_i'.
When the matching error is judged to be greater than or equal to the preset threshold, the initial three-dimensional human face mesh model needs to be adjusted. Specifically, for each second expression feature point on the initial three-dimensional human face mesh model, the geodesic distance to every other grid vertex on the initial three-dimensional human face mesh model, excluding the current second expression feature point itself, is first calculated. Since the initial three-dimensional human face mesh model is a three-dimensional mesh model composed of individual grids, the geodesic distance can be understood as the shortest total length of grid lines, i.e. the path distance, along which the current second expression feature point reaches a given grid vertex along different grid lines.
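Under the "shortest length of grid lines" reading above, the geodesic distance can be sketched as a Dijkstra shortest path over the mesh edge graph; exact surface geodesics would need a dedicated algorithm, which the patent does not specify:

```python
import heapq
import math

def edge_graph(vertices, faces):
    """Build an adjacency map with Euclidean edge lengths from triangles."""
    adj = {i: {} for i in range(len(vertices))}
    for tri in faces:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[0], tri[2])):
            d = math.dist(vertices[a], vertices[b])
            adj[a][b] = d
            adj[b][a] = d
    return adj

def geodesic_distances(vertices, faces, source):
    """Dijkstra shortest-path distances along mesh edges from `source`
    (steps 203/206: distance from a feature point to every grid vertex)."""
    adj = edge_graph(vertices, faces)
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A unit square split into two triangles sharing the 0-2 diagonal.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
d = geodesic_distances(verts, faces, source=0)
# d[1] == 1.0, d[3] == 1.0, d[2] == sqrt(2) via the diagonal edge
```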
Afterwards, the z coordinate of the second expression feature point X_i on the initial three-dimensional human face mesh model is fixed, and the x and y coordinates of the second expression feature point X_i are modified using the first preset algorithm, to obtain the third expression feature point X_i' corresponding to the second expression feature point X_i, where the first preset algorithm is, for example, the Nelder-Mead simplex algorithm.
Then, with the geodesic distance as a constraint, each grid vertex X_j' corresponding to the third expression feature point X_i' is determined using the second preset algorithm, and the initial three-dimensional human face mesh model is adjusted according to the third expression feature point X_i' and each grid vertex X_j' corresponding to the third expression feature point X_i'. The second preset algorithm is, for example, a radial basis function algorithm or a Laplacian mesh deformation algorithm.
It can be understood that the geodesic distance is used as a constraint so that, after the second expression feature point has been changed to the third expression feature point, the grid vertices of the surrounding non-feature points still maintain, according to the geodesic distance, their relative positional relation with the third expression feature point as far as possible.
In this embodiment, when the matching degree between the initial three-dimensional human face mesh model and the corresponding two-dimensional human face image is relatively low, the initial three-dimensional human face mesh model is adjusted with the above geodesic distance as a constraint. While the expression feature points are adjusted, this helps ensure that the other non-expression feature points keep their relative positional relation with the corresponding expression feature points before and after the adjustment, so that the adjusted three-dimensional human face mesh model has a better matching degree with the corresponding two-dimensional human face image.
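One reading of the radial basis function variant of the second preset algorithm can be sketched as follows: RBF weights are solved so the control points (the feature points X_i mapped to X_i') reproduce their displacements, and the interpolated displacement field is then applied to the remaining vertices. The Gaussian kernel and its width are illustrative assumptions:

```python
import numpy as np

def rbf_deform(control_src, control_dst, vertices, sigma=1.0):
    """Propagate control-point displacements to all vertices with
    Gaussian radial basis functions (one possible instance of the
    'second preset algorithm'; Laplacian deformation is another).

    control_src : (K, 3) feature points before adjustment (X_i)
    control_dst : (K, 3) feature points after adjustment (X_i')
    vertices    : (V, 3) remaining mesh vertices to move (X_j -> X_j')
    """
    S = np.asarray(control_src, dtype=float)
    D = np.asarray(control_dst, dtype=float)
    V = np.asarray(vertices, dtype=float)

    def kernel(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve for RBF weights that reproduce the control displacements.
    W = np.linalg.solve(kernel(S, S), D - S)
    # Apply the interpolated displacement field to the other vertices.
    return V + kernel(V, S) @ W

# One control point moved by +1 in x drags a coincident vertex along,
# while a far-away vertex is left essentially unchanged.
src = np.array([[0.0, 0.0, 0.0]])
dst = np.array([[1.0, 0.0, 0.0]])
verts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
moved = rbf_deform(src, dst, verts)
# moved[0] ~= [1, 0, 0]; moved[1] ~= [10, 0, 0]
```

A distance-based kernel of this kind is what lets nearby vertices follow the feature point while distant ones stay put, which matches the relative-position-preserving intent described above.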
Fig. 3 is a flowchart of the three-dimensional human face mesh model processing method provided by Embodiment 2 of the present invention. As shown in Fig. 3, the processing method is an improvement over the prior-art process of obtaining the initial three-dimensional human face mesh model. In the prior-art scheme of establishing, based on a facial expression database, a three-dimensional human face mesh model that matches the original two-dimensional human face image, each human face expression model in the facial expression database is established according to statistics on the age, sex, face shape, mood, expression and the like of different individuals, and therefore exhibits obvious individual differences; if the expression in the original two-dimensional human face image is beyond the scope of the facial expression database, a matching three-dimensional human face mesh model cannot be obtained through the facial expression database. Therefore, the method provided by this embodiment is used to establish the three-dimensional human face mesh model corresponding to the original two-dimensional human face image. The original two-dimensional human face image described in the embodiment shown in Fig. 1 or Fig. 2 specifically includes, in this embodiment, a target two-dimensional human face image and a reference two-dimensional human face image; in the application scenario of human face expression transfer, the human face expression in the reference two-dimensional human face image needs to be transferred into the target two-dimensional human face image.
The method provided by this embodiment includes:
Step 301: extract human face expression feature points of the target two-dimensional human face image and human face expression feature points of the reference two-dimensional human face image, where the human face expression feature points include face contour feature points and the first expression feature points.
Step 302: determine a near-frontal face image according to the face contour feature points of the target two-dimensional human face image and the face contour feature points of the reference two-dimensional human face image, where the near-frontal face image is the target two-dimensional human face image or the reference two-dimensional human face image.
Step 303: deform a target neutral face model determined from a neutral face library according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image.
Step 304: deform each preset expression model included in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image.
Step 305: determine a first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional human face image, and determine a second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional human face image.
Step 306: merge each expression model according to the first weight coefficients to obtain a three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and merge each expression model according to the second weight coefficients to obtain a three-dimensional human face mesh model corresponding to the reference two-dimensional human face image.
This method may still be performed by the above-described processing device. The two images input to the processing device are here referred to as the target two-dimensional human face image and the reference two-dimensional human face image, because in the processing procedure of facial expression transfer the facial expression in the reference two-dimensional human face image needs to be transferred to the target two-dimensional human face image.
First, the facial expression feature points of the target two-dimensional human face image and of the reference two-dimensional human face image are extracted respectively. A mature algorithm such as the Active Shape Model (hereinafter referred to as ASM) may be used to detect the facial expression feature points accurately. The facial expression feature points include face contour feature points and first expression feature points. The face contour feature points are feature points from which the facial contour can be clearly identified. The first expression feature points mainly capture how the facial organs change when different expressions are shown: the motion forms of these organs, such as the shapes of the nose, mouth, eyebrows and eyes, constitute the first expression feature points of the facial expression.
In this embodiment, because the face contour feature points can reflect the orientation of the face in the corresponding image, one of the two images can be selected as the near-frontal human face image according to the face contour feature points in the facial expression feature points of the target two-dimensional human face image and the face contour feature points in the facial expression feature points of the reference two-dimensional human face image. Specifically, in this embodiment the face contour curvature of the target two-dimensional human face image is calculated according to its face contour feature points, the face contour curvature of the reference two-dimensional human face image is calculated according to its face contour feature points, and the image with the smaller face contour curvature is selected as the near-frontal human face image. A smaller face contour curvature means the face is oriented closer to the front.
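As a minimal sketch of this selection step, assuming the contour feature points are given as an ordered 2D polyline: the measure below sums discrete turning angles, which is one plausible reading of "face contour curvature" (the patent does not fix the exact curvature measure).

```python
import math

def contour_curvature(points):
    """Total discrete curvature of an ordered contour:
    sum of absolute turning angles between successive segments."""
    total = 0.0
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        turn = math.atan2(y2 - y1, x2 - x1) - math.atan2(y1 - y0, x1 - x0)
        # wrap the angle difference into (-pi, pi]
        while turn <= -math.pi:
            turn += 2 * math.pi
        while turn > math.pi:
            turn -= 2 * math.pi
        total += abs(turn)
    return total

def pick_near_frontal(target_contour, reference_contour):
    """Return the label of the image whose contour bends less."""
    if contour_curvature(target_contour) <= contour_curvature(reference_contour):
        return "target"
    return "reference"
```

A nearly straight contour (profile view flattens the visible jaw line less symmetrically than a frontal view) accumulates a smaller total turning angle, so the comparison selects the more frontal image.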
Then, according to the face contour feature points and the first expression feature points of the near-frontal human face image, the target neutral face model determined from the neutral face database is deformed to obtain the neutral face model of the near-frontal human face image. For example, if the reference two-dimensional human face image is selected as the near-frontal human face image, the target neutral face model determined from the neutral face database is deformed, for example scaled and rotated, according to the face contour feature points and the first expression feature points of the reference two-dimensional human face image, to obtain the neutral face model corresponding to the reference two-dimensional human face image. The neutral face database includes a plurality of three-dimensional neutral face models covering individual differences such as sex, age and race. The target neutral face model determined from the neutral face database may be a randomly selected three-dimensional neutral face model, or a three-dimensional neutral face model obtained by weighted fusion of all or part of the three-dimensional neutral face models included in the neutral face database.
In this embodiment, the neutral face model obtained for the near-frontal human face image has a face contour basically consistent with that of the near-frontal human face image, but carries no detailed expression features. This neutral face model therefore serves as an intermediary for the subsequent expression model processing.
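The "scale and rotate" deformation can be sketched under the assumption that it reduces to a least-squares 2D similarity transform aligning the model's landmark points to the image's feature points; the patent leaves the exact deformation open, so this closed-form fit is illustrative only.

```python
def fit_similarity_2d(src, dst):
    """Closed-form least-squares 2D similarity (scale + rotation +
    translation) mapping src landmark points onto dst landmark points."""
    n = len(src)
    sx = sum(p[0] for p in src) / n
    sy = sum(p[1] for p in src) / n
    dx = sum(q[0] for q in dst) / n
    dy = sum(q[1] for q in dst) / n
    num_re = num_im = den = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - sx, py - sy
        bx, by = qx - dx, qy - dy
        num_re += ax * bx + ay * by
        num_im += ax * by - ay * bx
        den += ax * ax + ay * ay
    a, b = num_re / den, num_im / den  # a = s*cos(theta), b = s*sin(theta)
    tx = dx - (a * sx - b * sy)
    ty = dy - (b * sx + a * sy)
    return a, b, tx, ty

def apply_similarity(params, p):
    a, b, tx, ty = params
    return (a * p[0] - b * p[1] + tx, b * p[0] + a * p[1] + ty)
```

Applying the fitted transform to every vertex of the target neutral face model aligns its contour with the near-frontal image's contour while leaving the model expression-neutral.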
Next, according to the neutral face model of the near-frontal human face image, each preset expression model included in the preset expression database is deformed respectively, to obtain each expression model corresponding to the near-frontal human face image. Specifically, the preset expression database is preferably a set of generic blendshape models, which includes a variety of different expression models. In this embodiment, the generic blendshape models are used to add expression features to the above neutral face model: each expression model is deformed according to the neutral face model of the near-frontal human face image, to obtain each blendshape expression model corresponding to the near-frontal human face image. Each blendshape expression model thus contains both its own expression features and the face contour features of the near-frontal human face image.
Then, the first weight coefficient of each blendshape expression model corresponding to the target two-dimensional human face image needs to be determined according to the first expression feature points of the target two-dimensional human face image, and the second weight coefficient of each blendshape expression model corresponding to the reference two-dimensional human face image needs to be determined according to the first expression feature points of the reference two-dimensional human face image. That is, because the expression features on the blendshape expression models differ, the proportion of each blendshape expression model must be determined separately for the target two-dimensional human face image and for the reference two-dimensional human face image. The blendshape expression models are then merged according to the first weight coefficients to obtain the three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and merged according to the second weight coefficients to obtain the three-dimensional human face mesh model corresponding to the reference two-dimensional human face image. Merging means superimposing the blendshape expression models according to their respective weight coefficients; that is, the organs on the blendshape expression models corresponding to the first expression feature points are superimposed according to the weight coefficients of the corresponding blendshape expression models.
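Read as a per-vertex weighted superposition of topologically aligned blendshape meshes, the merging step can be sketched as follows; the vertex lists and weights are illustrative, since the patent does not mandate this exact formula.

```python
def merge_blendshapes(models, weights):
    """Weighted per-vertex superposition of blendshape meshes.
    Each model is a list of (x, y, z) vertices sharing one topology."""
    assert len(models) == len(weights)
    vertex_count = len(models[0])
    merged = []
    for v in range(vertex_count):
        merged.append(tuple(
            sum(w * m[v][axis] for m, w in zip(models, weights))
            for axis in range(3)
        ))
    return merged
```

Calling this once with the first weight coefficients and once with the second weight coefficients yields the two three-dimensional human face mesh models of step 306.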
Taking the determination of the first weight coefficients as an example: for a given first expression feature point in the target two-dimensional human face image, the feature points of the organ corresponding to that first expression feature point are traversed on each blendshape expression model in turn; these organ feature points may be labeled manually or delimited in advance, for example on the eyebrows. The weight coefficient of each blendshape expression model is then determined according to how close its organ feature points are to the first expression feature point.
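One possible concretisation of this closeness-based weighting is an inverse-distance score normalised so the weights sum to one; the patent leaves the exact closeness measure open, so this is only a sketch.

```python
import math

def closeness_weights(image_point, organ_points_per_model):
    """One weight per blendshape model: larger when that model's organ
    feature point lies closer to the image's first expression feature point."""
    eps = 1e-6  # avoids division by zero for an exact match
    scores = [
        1.0 / (math.hypot(image_point[0] - px, image_point[1] - py) + eps)
        for (px, py) in organ_points_per_model
    ]
    total = sum(scores)
    return [s / total for s in scores]
```

The normalisation makes the merged mesh a convex combination of the blendshape expression models, so the result stays within the span of the preset expressions.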
After the three-dimensional human face mesh model corresponding to the target two-dimensional human face image and the three-dimensional human face mesh model corresponding to the reference two-dimensional human face image are obtained in step 306, optionally, the method shown in Fig. 1 or Fig. 2 may also be performed to adjust the obtained three-dimensional human face mesh models.
Optionally, after step 306 is performed, or after the obtained three-dimensional human face mesh models are adjusted according to the method shown in Fig. 1 or Fig. 2, the following steps may also be performed to achieve facial expression transfer.
Step 307: deforming the target two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and deforming the reference two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the reference two-dimensional human face image;
Step 308: merging the deformed target two-dimensional human face image and the deformed reference two-dimensional human face image, to transfer the expression in the reference two-dimensional human face image to the target two-dimensional human face image.
In this embodiment, the facial expression feature points of the target two-dimensional human face image and of the reference two-dimensional human face image are extracted respectively, and a near-frontal human face image is selected from the two images according to the face contour feature points among their facial expression feature points. The target neutral face model determined from the neutral face database is then deformed according to the near-frontal human face image to obtain a neutral face model; the neutral face database does not depend on the features of any specific individual. Afterwards, each blendshape expression model included in the generic blendshape models is deformed according to the neutral face model, the first and second weight coefficients of each blendshape expression model are determined according to the first expression feature points of the target two-dimensional human face image and of the reference two-dimensional human face image respectively, and the blendshape expression models are merged according to the different weight coefficients, finally obtaining the three-dimensional human face mesh models corresponding to the target two-dimensional human face image and to the reference two-dimensional human face image respectively. Because both the neutral face database and the generic blendshape models avoid the individual differences of people, the prior-art defect that establishing a three-dimensional human face mesh model based on a facial expression database easily fails is overcome.
Fig. 4 is a schematic structural diagram of the three-dimensional human face mesh model processing device provided by Embodiment 3 of the present invention. As shown in Fig. 4, the processing device includes:
an acquisition module 11, configured to obtain an initial three-dimensional human face mesh model corresponding to an original two-dimensional human face image, where the initial three-dimensional human face mesh model includes second expression feature points corresponding to the first expression feature points of the original two-dimensional human face image;
a computing module 12, configured to calculate the camera parameter matrix of the initial three-dimensional human face mesh model according to formula (1),
where P is the camera parameter matrix, Xi is the i-th second expression feature point on the initial three-dimensional human face mesh model, xi is the i-th first expression feature point on the original two-dimensional human face image corresponding to the second expression feature point Xi, and N is the number of first expression feature points and second expression feature points; and
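Formula (1) is reproduced in the patent only as an image. Reading it, from the surrounding definitions, as the least-squares fit of a 2x4 affine camera matrix P minimising the sum over the N point pairs of the squared distance between P applied to the homogeneous Xi and xi, a sketch under that assumption is:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        pivot = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[pivot] = M[pivot], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_affine_camera(points_3d, points_2d):
    """Least-squares 2x4 affine camera P minimising
    sum_i ||P * [Xi; 1] - xi||^2, via the normal equations."""
    H = [[X[0], X[1], X[2], 1.0] for X in points_3d]
    n = len(H)
    AtA = [[sum(H[i][r] * H[i][c] for i in range(n)) for c in range(4)]
           for r in range(4)]
    P = []
    for axis in range(2):  # one camera row each for u and v
        Atb = [sum(H[i][r] * points_2d[i][axis] for i in range(n))
               for r in range(4)]
        P.append(solve_linear([row[:] for row in AtA], Atb))
    return P
```

With exact correspondences the fit recovers the generating matrix; with noisy feature points it returns the matrix minimising the total squared reprojection error.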
a judging module 13, configured to map, according to the calculated camera parameter matrix, the second expression feature points on the initial three-dimensional human face mesh model onto the original two-dimensional human face image, to judge the matching degree between the second expression feature points and the first expression feature points, and to adjust the initial three-dimensional human face mesh model according to the judgment result.
The processing device of this embodiment can be used to perform the technical solution of the method embodiment shown in Fig. 1; its implementation principle and technical effect are similar and are not repeated here.
Fig. 5 is a schematic structural diagram of the three-dimensional human face mesh model processing device provided by Embodiment 4 of the present invention. As shown in Fig. 5, on the basis of the embodiment shown in Fig. 4, the judging module 13 of the processing device includes:
a computing unit 131, configured to calculate the matching error between the second expression feature points and the first expression feature points according to formula (2),
where Err is the matching error and wi is the weight coefficient of the i-th pair of feature points Xi and xi;
a judging unit 132, configured to judge whether the matching error is greater than or equal to a preset threshold; and
an adjustment unit 133, configured to, if the matching error is greater than or equal to the preset threshold, adjust the initial three-dimensional human face mesh model so that the matching error between the second expression feature points and the first expression feature points on the adjusted three-dimensional human face mesh model is less than the preset threshold.
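Formula (2) is likewise shown only as an image. Assuming it is the weighted sum of squared reprojection distances between the projected Xi and the image points xi, the units 131 and 132 could operate as in this sketch (the 2x4 affine camera form of P is also an assumption):

```python
def project(P, X):
    """Project a 3D point with a 2x4 affine camera matrix."""
    h = (X[0], X[1], X[2], 1.0)
    return (sum(P[0][k] * h[k] for k in range(4)),
            sum(P[1][k] * h[k] for k in range(4)))

def matching_error(P, points_3d, points_2d, weights):
    """Assumed form of formula (2): weighted sum of squared
    distances between each projected Xi and its image point xi."""
    err = 0.0
    for Xi, xi, wi in zip(points_3d, points_2d, weights):
        u, v = project(P, Xi)
        err += wi * ((u - xi[0]) ** 2 + (v - xi[1]) ** 2)
    return err

def needs_adjustment(err, threshold):
    """Judging unit 132: adjust when the error reaches the threshold."""
    return err >= threshold
```

The per-pair weights wi allow reliable feature pairs (for example eye corners) to dominate the error while noisier pairs contribute less.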
Further, the adjustment unit 133 includes:
a computing subunit 1331, configured to calculate the geodesic distance from the second expression feature point Xi to each mesh vertex Xj on the initial three-dimensional human face mesh model, where i is not equal to j;
a first adjustment subunit 1332, configured to fix the z coordinate of the second expression feature point Xi on the initial three-dimensional human face mesh model and modify the x, y coordinates of the second expression feature point Xi using a first preset algorithm, to obtain a third expression feature point Xi' corresponding to the second expression feature point Xi;
a determination subunit 1333, configured to determine, with the geodesic distances as constraints, each mesh vertex Xj' corresponding to the third expression feature point Xi' using a second preset algorithm; and
a second adjustment subunit 1334, configured to adjust the initial three-dimensional human face mesh model according to the third expression feature point Xi' and each mesh vertex Xj' corresponding to the third expression feature point Xi'.
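The second preset algorithm is left unspecified by the patent; one common choice, shown purely as an illustrative sketch, attenuates the feature point's (x, y) displacement over the surrounding vertices with a linear falloff in geodesic distance, keeping the z coordinates fixed as in subunit 1332 (the `radius` parameter is an assumption of this sketch, not a patent term):

```python
def propagate_displacement(feature_old, feature_new, vertices,
                           geodesic_dist, radius):
    """Move each mesh vertex by the feature point's (x, y) shift,
    scaled by 1 - d/radius (clamped at 0); z coordinates stay fixed."""
    dx = feature_new[0] - feature_old[0]
    dy = feature_new[1] - feature_old[1]
    adjusted = []
    for (x, y, z), d in zip(vertices, geodesic_dist):
        f = max(0.0, 1.0 - d / radius)
        adjusted.append((x + f * dx, y + f * dy, z))
    return adjusted
```

Vertices geodesically close to the moved feature point follow it almost fully, while vertices beyond the falloff radius are untouched, which keeps the adjustment local to the mismatched facial region.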
Further, the original two-dimensional human face image includes a target two-dimensional human face image and a reference two-dimensional human face image, and the acquisition module 11 includes:
an extraction unit 111, configured to extract the facial expression feature points of the target two-dimensional human face image and the facial expression feature points of the reference two-dimensional human face image, where the facial expression feature points include face contour feature points and the first expression feature points;
a first determining unit 112, configured to determine a near-frontal human face image according to the face contour feature points of the target two-dimensional human face image and the face contour feature points of the reference two-dimensional human face image, where the near-frontal human face image is the target two-dimensional human face image or the reference two-dimensional human face image;
a first deformation unit 113, configured to deform, according to the face contour feature points and the first expression feature points of the near-frontal human face image, a target neutral face model determined from a neutral face database, to obtain a neutral face model corresponding to the near-frontal human face image;
a second deformation unit 114, configured to deform, according to the neutral face model of the near-frontal human face image, each preset expression model included in a preset expression database, to obtain each expression model corresponding to the near-frontal human face image;
a second determining unit 115, configured to determine the first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional human face image, and determine the second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional human face image; and
a merging unit 116, configured to merge the expression models according to the first weight coefficients to obtain a three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and merge the expression models according to the second weight coefficients to obtain a three-dimensional human face mesh model corresponding to the reference two-dimensional human face image.
Specifically, the preset expression database includes generic blendshape models.
Further, the first determining unit 112 is specifically configured to:
calculate the face contour curvature of the target two-dimensional human face image according to the face contour feature points of the target two-dimensional human face image, and calculate the face contour curvature of the reference two-dimensional human face image according to the face contour feature points of the reference two-dimensional human face image; and
determine the image with the smaller face contour curvature as the near-frontal human face image.
Further, the processing device also includes:
a deformation module 21, configured to deform the target two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and deform the reference two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the reference two-dimensional human face image; and
a merging module 22, configured to merge the deformed target two-dimensional human face image and the deformed reference two-dimensional human face image, so that the expression in the reference two-dimensional human face image is transferred to the target two-dimensional human face image.
The processing device of this embodiment can be used to perform the technical solution of the method embodiment shown in Fig. 2 or Fig. 3; its implementation principle and technical effect are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of the processing device provided by Embodiment 5 of the present invention. As shown in Fig. 6, the processing device includes:
a memory 31 and a processor 32 connected with the memory 31, where the memory 31 is configured to store a set of program codes, and the processor 32 is configured to call the program codes stored in the memory 31 to perform the three-dimensional human face mesh model processing method shown in Fig. 1: obtaining an initial three-dimensional human face mesh model corresponding to an original two-dimensional human face image, where the initial three-dimensional human face mesh model includes second expression feature points corresponding to the first expression feature points of the original two-dimensional human face image; calculating the camera parameter matrix of the initial three-dimensional human face mesh model according to formula (1),
where P is the camera parameter matrix, Xi is the i-th second expression feature point on the initial three-dimensional human face mesh model, xi is the i-th first expression feature point on the original two-dimensional human face image corresponding to the second expression feature point Xi, and N is the number of first expression feature points and second expression feature points; and mapping, according to the calculated camera parameter matrix, the second expression feature points on the initial three-dimensional human face mesh model onto the original two-dimensional human face image, to judge the matching degree between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional human face mesh model according to the judgment result.
Further, the processor 32 is also configured to calculate the matching error between the second expression feature points and the first expression feature points according to formula (2),
where Err is the matching error and wi is the weight coefficient of the i-th pair of feature points Xi and xi;
judge whether the matching error is greater than or equal to a preset threshold; and, if it is greater than or equal to the preset threshold, adjust the initial three-dimensional human face mesh model so that the matching error between the second expression feature points and the first expression feature points on the adjusted three-dimensional human face mesh model is less than the preset threshold.
Further, the processor 32 is also configured to calculate the geodesic distance from the second expression feature point Xi to each mesh vertex Xj on the initial three-dimensional human face mesh model, where i is not equal to j; fix the z coordinate of the second expression feature point Xi on the initial three-dimensional human face mesh model and modify the x, y coordinates of the second expression feature point Xi using a first preset algorithm, to obtain a third expression feature point Xi' corresponding to the second expression feature point Xi; determine, with the geodesic distances as constraints, each mesh vertex Xj' corresponding to the third expression feature point Xi' using a second preset algorithm; and adjust the initial three-dimensional human face mesh model according to the third expression feature point Xi' and each mesh vertex Xj' corresponding to the third expression feature point Xi'.
Further, the original two-dimensional human face image includes a target two-dimensional human face image and a reference two-dimensional human face image, and the processor 32 is also configured to: extract the facial expression feature points of the target two-dimensional human face image and the facial expression feature points of the reference two-dimensional human face image, where the facial expression feature points include face contour feature points and the first expression feature points; determine a near-frontal human face image according to the face contour feature points of the target two-dimensional human face image and the face contour feature points of the reference two-dimensional human face image, where the near-frontal human face image is the target two-dimensional human face image or the reference two-dimensional human face image; deform, according to the face contour feature points and the first expression feature points of the near-frontal human face image, a target neutral face model determined from a neutral face database, to obtain a neutral face model corresponding to the near-frontal human face image; deform, according to the neutral face model of the near-frontal human face image, each preset expression model included in a preset expression database, to obtain each expression model corresponding to the near-frontal human face image; determine the first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional human face image, and determine the second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional human face image; and merge the expression models according to the first weight coefficients to obtain a three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and merge the expression models according to the second weight coefficients to obtain a three-dimensional human face mesh model corresponding to the reference two-dimensional human face image.
Further, the processor 32 is also configured to calculate the face contour curvature of the target two-dimensional human face image according to the face contour feature points of the target two-dimensional human face image, and calculate the face contour curvature of the reference two-dimensional human face image according to the face contour feature points of the reference two-dimensional human face image; and determine the image with the smaller face contour curvature as the near-frontal human face image.
Further, the processor 32 is also configured to deform the target two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the target two-dimensional human face image, and deform the reference two-dimensional human face image according to the three-dimensional human face mesh model corresponding to the reference two-dimensional human face image; and merge the deformed target two-dimensional human face image and the deformed reference two-dimensional human face image, to transfer the expression in the reference two-dimensional human face image to the target two-dimensional human face image.
One of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The foregoing storage medium includes various media that can store program codes, such as a ROM, a RAM, a magnetic disk or an optical disk.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some or all of the technical features therein; and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.