Summary of the invention
To address the problems of the prior art, the present invention provides a three-dimensional (3D) human face mesh model processing method and device, in order to overcome the defect of the prior art that a 3D face mesh model established on the basis of a facial expression database has a low matching degree with the corresponding two-dimensional (2D) face image because of bulk deformation.
A first aspect of the present invention provides a 3D human face mesh model processing method, comprising:
Obtaining an initial 3D face mesh model corresponding to an original 2D face image, where the initial 3D face mesh model comprises second expression feature points corresponding to first expression feature points of the original 2D face image;
Calculating a camera parameter matrix of the initial 3D face mesh model according to formula (1):

P = argmin_P Σ_{i=1}^{N} ‖x_i − P·X_i‖²   (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial 3D face mesh model, x_i is the i-th first expression feature point, on the original 2D face image, corresponding to the second expression feature point X_i, and N is the number of first expression feature points and of second expression feature points;
Mapping the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, so as to judge the matching degree of the second expression feature points and the first expression feature points, and adjusting the initial 3D face mesh model according to the judgment result.
In a first possible implementation of the first aspect, the mapping of the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, so as to judge the matching degree of the second expression feature points and the first expression feature points, and the adjusting of the initial 3D face mesh model according to the judgment result, comprise:
Calculating the matching error of the second expression feature points and the first expression feature points according to formula (2):

Err = Σ_{i=1}^{N} w_i·‖x_i − P·X_i‖²   (2)

where Err is the matching error and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
Judging whether the matching error is greater than or equal to a preset threshold;
If it is greater than or equal to the preset threshold, adjusting the initial 3D face mesh model so that the matching error of the second expression feature points on the adjusted 3D face mesh model and the first expression feature points is less than the preset threshold.
According to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the adjusting of the initial 3D face mesh model comprises:
Calculating the geodesic distance from each second expression feature point X_i to each mesh vertex X_j of the initial 3D face mesh model, where i is not equal to j;
Fixing the z coordinate of the second expression feature point X_i on the initial 3D face mesh model, and changing the x and y coordinates of the second expression feature point X_i with a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
Using the geodesic distance as a constraint, determining, with a second preset algorithm, each mesh vertex X_j' corresponding to the third expression feature point X_i';
Adjusting the initial 3D face mesh model according to the third expression feature point X_i' and each mesh vertex X_j' corresponding to the third expression feature point X_i'.
According to the first aspect, or the first or second possible implementation of the first aspect, in a third possible implementation of the first aspect, the original 2D face image comprises a target 2D face image and a reference 2D face image;
The obtaining of the initial 3D face mesh model corresponding to the original 2D face image comprises:
Extracting the facial expression feature points of the target 2D face image and the facial expression feature points of the reference 2D face image, where the facial expression feature points comprise face contour feature points and the first expression feature points;
Determining a near-frontal face image according to the face contour feature points of the target 2D face image and the face contour feature points of the reference 2D face image, where the near-frontal face image is the target 2D face image or the reference 2D face image;
Deforming a target neutral face model, determined from a neutral face database, according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
Deforming each preset expression model comprised in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
Determining a first weight coefficient of each expression model according to the first expression feature points of the target 2D face image, and determining a second weight coefficient of each expression model according to the first expression feature points of the reference 2D face image;
Merging each expression model according to the first weight coefficients to obtain the 3D face mesh model corresponding to the target 2D face image, and merging each expression model according to the second weight coefficients to obtain the 3D face mesh model corresponding to the reference 2D face image.
According to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the preset expression library comprises generic blendshape models.
According to the third or fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the determining of the near-frontal face image according to the face contour feature points of the target 2D face image and the face contour feature points of the reference 2D face image comprises:
Calculating the face contour curvature of the target 2D face image according to the face contour feature points of the target 2D face image, and calculating the face contour curvature of the reference 2D face image according to the face contour feature points of the reference 2D face image;
Determining that the image whose face contour curvature is smaller is the near-frontal face image.
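As a hedged illustration of this comparison, the contour curvature can be sketched with a discrete (Menger) three-point curvature over the contour feature points; the function name, the toy polylines, and the "target"/"reference" labels below are hypothetical, since the text does not fix a particular curvature formula:

```python
import numpy as np

def contour_curvature(points):
    """Mean discrete (Menger) curvature of a face contour polyline.

    For each interior point, the curvature of the triple (p0, p1, p2) is
    4 * triangle_area / (|p1-p0| * |p2-p1| * |p2-p0|); a straight,
    near-frontal contour segment contributes 0.
    """
    curvatures = []
    for k in range(1, len(points) - 1):
        p0, p1, p2 = (np.asarray(points[k - 1], float),
                      np.asarray(points[k], float),
                      np.asarray(points[k + 1], float))
        a, b, c = p1 - p0, p2 - p1, p2 - p0
        area2 = abs(a[0] * b[1] - a[1] * b[0])          # twice the triangle area
        denom = np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(c)
        curvatures.append(2.0 * area2 / denom if denom > 0 else 0.0)
    return float(np.mean(curvatures))

# The image whose contour curvature is smaller is taken as near-frontal.
straight = [(0, 0), (1, 0), (2, 0), (3, 0)]   # hypothetical contour points
bent = [(0, 0), (1, 1), (2, 0), (3, 1)]
near_frontal = "target" if contour_curvature(straight) < contour_curvature(bent) else "reference"
```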
According to the third, fourth or fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, after the adjusting of the initial 3D face mesh model according to the judgment result, the method further comprises:
Deforming the target 2D face image according to the 3D face mesh model corresponding to the target 2D face image, and deforming the reference 2D face image according to the 3D face mesh model corresponding to the reference 2D face image;
Merging the deformed target 2D face image and the deformed reference 2D face image, to transfer the expression on the reference 2D face image onto the target 2D face image.
A second aspect of the present invention provides a 3D human face mesh model processing device, comprising:
An acquisition module, configured to obtain an initial 3D face mesh model corresponding to an original 2D face image, where the initial 3D face mesh model comprises second expression feature points corresponding to first expression feature points of the original 2D face image;
A computing module, configured to calculate a camera parameter matrix of the initial 3D face mesh model according to formula (1):

P = argmin_P Σ_{i=1}^{N} ‖x_i − P·X_i‖²   (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial 3D face mesh model, x_i is the i-th first expression feature point, on the original 2D face image, corresponding to the second expression feature point X_i, and N is the number of first expression feature points and of second expression feature points;
A judging module, configured to map the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, so as to judge the matching degree of the second expression feature points and the first expression feature points, and to adjust the initial 3D face mesh model according to the judgment result.
In a first possible implementation of the second aspect, the judging module comprises:
A computing unit, configured to calculate the matching error of the second expression feature points and the first expression feature points according to formula (2):

Err = Σ_{i=1}^{N} w_i·‖x_i − P·X_i‖²   (2)

where Err is the matching error and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
A judging unit, configured to judge whether the matching error is greater than or equal to a preset threshold;
An adjusting unit, configured to, if the matching error is greater than or equal to the preset threshold, adjust the initial 3D face mesh model so that the matching error of the second expression feature points on the adjusted 3D face mesh model and the first expression feature points is less than the preset threshold.
According to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the adjusting unit comprises:
A computing subunit, configured to calculate the geodesic distance from each second expression feature point X_i to each mesh vertex X_j of the initial 3D face mesh model, where i is not equal to j;
A first adjusting subunit, configured to fix the z coordinate of the second expression feature point X_i on the initial 3D face mesh model and change the x and y coordinates of the second expression feature point X_i with a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
A determining subunit, configured to determine, with a second preset algorithm and with the geodesic distance as a constraint, each mesh vertex X_j' corresponding to the third expression feature point X_i';
A second adjusting subunit, configured to adjust the initial 3D face mesh model according to the third expression feature point X_i' and each mesh vertex X_j' corresponding to the third expression feature point X_i'.
According to the second aspect, or the first or second possible implementation of the second aspect, in a third possible implementation of the second aspect, the original 2D face image comprises a target 2D face image and a reference 2D face image;
The acquisition module comprises:
An extraction unit, configured to extract the facial expression feature points of the target 2D face image and the facial expression feature points of the reference 2D face image, where the facial expression feature points comprise face contour feature points and the first expression feature points;
A first determining unit, configured to determine a near-frontal face image according to the face contour feature points of the target 2D face image and the face contour feature points of the reference 2D face image, where the near-frontal face image is the target 2D face image or the reference 2D face image;
A first deformation unit, configured to deform a target neutral face model, determined from a neutral face database, according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
A second deformation unit, configured to deform each preset expression model comprised in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
A second determining unit, configured to determine a first weight coefficient of each expression model according to the first expression feature points of the target 2D face image, and to determine a second weight coefficient of each expression model according to the first expression feature points of the reference 2D face image;
A merging unit, configured to merge each expression model according to the first weight coefficients to obtain the 3D face mesh model corresponding to the target 2D face image, and to merge each expression model according to the second weight coefficients to obtain the 3D face mesh model corresponding to the reference 2D face image.
According to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the preset expression library comprises generic blendshape models.
According to the third or fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the first determining unit is specifically configured to:
Calculate the face contour curvature of the target 2D face image according to the face contour feature points of the target 2D face image, and calculate the face contour curvature of the reference 2D face image according to the face contour feature points of the reference 2D face image;
Determine that the image whose face contour curvature is smaller is the near-frontal face image.
According to the third, fourth or fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the device further comprises:
A deformation module, configured to deform the target 2D face image according to the 3D face mesh model corresponding to the target 2D face image, and to deform the reference 2D face image according to the 3D face mesh model corresponding to the reference 2D face image;
A merging module, configured to merge the deformed target 2D face image and the deformed reference 2D face image, to transfer the expression on the reference 2D face image onto the target 2D face image.
With the 3D human face mesh model processing method and device provided by the invention, after the initial 3D face mesh model corresponding to the original 2D face image is obtained, the second expression feature points on the initial 3D face mesh model are mapped onto the original 2D face image according to the camera parameters of the initial 3D face mesh model, so as to judge the matching degree of the second expression feature points and the first expression feature points, and the initial 3D face mesh model is adjusted according to the judgment result. Because the matching degree of the initial 3D face mesh model and the original 2D face image is judged according to the camera parameters, the initial 3D face mesh model is adjusted whenever the matching degree is low, which ensures that the adjusted 3D face mesh model has a better matching degree with the original 2D face image.
Embodiment
Fig. 1 is a flowchart of the 3D human face mesh model processing method provided by Embodiment One of the present invention. As shown in Fig. 1, the method comprises:
Step 101: obtain an initial 3D face mesh model corresponding to an original 2D face image, where the initial 3D face mesh model comprises second expression feature points corresponding to first expression feature points of the original 2D face image.
In this embodiment, the above 3D face mesh model processing method is performed by a processing apparatus. The processing apparatus is preferably integrated in a terminal device such as a PC or a notebook computer, and may be used to perform human face expression transfer on two input images. The method provided by this embodiment is applicable to adjusting a 3D face mesh model obtained in the manner of the prior art, and is likewise applicable to adjusting a 3D face mesh model obtained by the method provided by the embodiment shown in Fig. 3; it is not limited to either case.
For simplicity of description, a 3D face mesh model obtained by either of the above methods is referred to in this embodiment as an initial 3D face mesh model. The initial 3D face mesh model corresponds to one original 2D face image. The method provided by this embodiment is preferably applicable to the application scenario of human face expression transfer, in which the human face expression on a reference 2D face image needs to be transferred onto a target 2D face image. To perform expression transfer, the 3D face mesh models of the target 2D face image and of the reference 2D face image must first be reconstructed separately. Therefore, the original 2D face image described in this embodiment may be, for example, the reference 2D face image or the target 2D face image; correspondingly, the initial 3D face mesh model may be the 3D face mesh model corresponding to the reference 2D face image or the one corresponding to the target 2D face image. Since the method provided by this embodiment is applicable to both 3D face mesh models, they are not distinguished below.
The processing apparatus first obtains the initial 3D face mesh model corresponding to the original 2D face image, where the initial 3D face mesh model comprises the second expression feature points corresponding to the first expression feature points of the original 2D face image. For example, a 3D face mesh model obtained in the prior art is input into the processing apparatus as the initial 3D face mesh model, so that the processing apparatus performs subsequent adjustment according to the second expression feature points comprised in this initial 3D face mesh model.
The first expression feature points of the original 2D face image mainly refer to the facial organs that change when the face shows different expressions; the motion forms of these organs, such as the forms of the nose, mouth, eyebrows and eyes, constitute the first expression feature points of the facial expression. The second expression feature points corresponding to the first expression feature points may be marked manually or automatically on the initial 3D face mesh model.
Step 102: calculate the camera parameter matrix of the initial 3D face mesh model according to formula (1):

P = argmin_P Σ_{i=1}^{N} ‖x_i − P·X_i‖²   (1)

where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial 3D face mesh model, x_i is the i-th first expression feature point, on the original 2D face image, corresponding to the second expression feature point X_i, and N is the number of first expression feature points and of second expression feature points.
Step 103: map the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, so as to judge the matching degree of the second expression feature points and the first expression feature points, and adjust the initial 3D face mesh model according to the judgment result.
In this embodiment, in order to judge whether the initial 3D face mesh model matches the original 2D face image corresponding to it, the second expression feature points on the initial 3D face mesh model first need to be mapped onto the corresponding original 2D face image; the matching error between these second expression feature points and the first expression feature points on the corresponding original 2D face image is then judged, and the initial 3D face mesh model is adjusted according to the judgment result.
When the second expression feature points on the initial 3D face mesh model are mapped onto the corresponding original 2D face image, a parameter needs to be used, namely the camera parameter, which is generally represented in the form of a parameter matrix. Specifically, the camera parameter matrix can be obtained by solving formula (1), which means that the camera parameter matrix should make the distance between the mapped second expression feature points and the first expression feature points as small as possible. After the camera parameter matrix is obtained, it is used to map the second expression feature points on the initial 3D face mesh model onto the corresponding 2D face image, so as to judge the matching degree of the second expression feature points and the first expression feature points, and the initial 3D face mesh model is adjusted according to the judgment result.
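As a non-authoritative sketch of this step, the least-squares objective of formula (1) has a closed-form solution when an affine camera model is assumed; the affine assumption and the function name below are ours, not the text's:

```python
import numpy as np

def estimate_camera_matrix(X3d, x2d):
    """Estimate an affine camera matrix P (2x4) minimizing
    sum_i ||x_i - P @ [X_i; 1]||^2, in the spirit of formula (1).

    X3d: (N, 3) second expression feature points on the 3D mesh.
    x2d: (N, 2) first expression feature points on the 2D image.
    """
    N = X3d.shape[0]
    Xh = np.hstack([X3d, np.ones((N, 1))])      # homogeneous coordinates (N, 4)
    # Each row of P is an independent linear least-squares problem.
    B, *_ = np.linalg.lstsq(Xh, x2d, rcond=None)
    return B.T                                   # (2, 4)

# Toy check: points generated by a known affine camera are recovered exactly.
rng = np.random.default_rng(0)
X3d = rng.normal(size=(10, 3))
P_true = np.array([[2.0, 0.1, 0.0, 5.0],
                   [0.0, 1.8, 0.3, -2.0]])
x2d = np.hstack([X3d, np.ones((10, 1))]) @ P_true.T
P_est = estimate_camera_matrix(X3d, x2d)
```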
In this embodiment, the matching degree of the initial 3D face mesh model and the original 2D face image is judged according to the camera parameters, so that the initial 3D face mesh model is adjusted when the matching degree is low, which ensures that the adjusted 3D face mesh model has a better matching degree with the original 2D face image.
Further, Fig. 2 is a flowchart of the processing procedure of step 103 of the embodiment shown in Fig. 1. As shown in Fig. 2, the mapping, in step 103 of Fig. 1, of the second expression feature points on the initial 3D face mesh model onto the original 2D face image according to the calculated camera parameter matrix, so as to judge the matching degree of the second expression feature points and the first expression feature points, and the adjusting of the initial 3D face mesh model according to the judgment result, comprise:
Step 201: calculate the matching error of the second expression feature points and the first expression feature points according to formula (2):

Err = Σ_{i=1}^{N} w_i·‖x_i − P·X_i‖²   (2)

where Err is the matching error and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i.
Step 202: judge whether the matching error is greater than or equal to a preset threshold; if it is greater than or equal to the threshold, perform step 203, otherwise end.
After the camera parameters of the initial 3D face mesh model are obtained, the second expression feature points on the initial 3D face mesh model are mapped onto the original 2D face image according to the calculated camera parameter matrix, so as to judge, according to formula (2), the matching error of the second expression feature points on the initial 3D face mesh model and the first expression feature points on the original 2D face image. In formula (2), because the expression depth and pixel grayscale of each pair of feature points differ, each pair of expression feature points has a different weight coefficient.
Then, whether the matching error is greater than or equal to the preset threshold is judged; if it is, the initial 3D face mesh model needs to be adjusted according to steps 203 to 206, otherwise no adjustment is needed.
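A minimal sketch of steps 201 and 202, assuming the same affine camera convention as above; the weights and the threshold are toy values, not values prescribed by the text:

```python
import numpy as np

def matching_error(P, X3d, x2d, w):
    """Weighted matching error of formula (2):
    Err = sum_i w_i * ||x_i - P @ [X_i; 1]||^2."""
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
    proj = Xh @ P.T                      # mapped second expression feature points
    residual = x2d - proj
    return float(np.sum(w * np.sum(residual ** 2, axis=1)))

# The model is adjusted only when the error reaches the preset threshold.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])    # orthographic projection onto x-y
X3d = np.array([[0.0, 0.0, 1.0], [1.0, 2.0, 3.0]])
x2d = np.array([[0.0, 0.0], [1.0, 1.0]])   # second point is off by 1 in y
w = np.array([1.0, 0.5])
err = matching_error(P, X3d, x2d, w)       # 0.5 * 1^2 = 0.5
threshold = 0.1
needs_adjustment = err >= threshold
```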
Step 203: calculate the geodesic distance from each second expression feature point X_i to each mesh vertex X_j of the initial 3D face mesh model, where i is not equal to j.
Step 204: fix the z coordinate of the second expression feature point X_i on the initial 3D face mesh model, and change the x and y coordinates of the second expression feature point X_i with a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i.
Step 205: using the geodesic distance as a constraint, determine, with a second preset algorithm, each mesh vertex X_j' corresponding to the third expression feature point X_i'.
Step 206: adjust the initial 3D face mesh model according to the third expression feature point X_i' and each mesh vertex X_j' corresponding to the third expression feature point X_i'.
When the matching error is judged to be greater than or equal to the preset threshold, the initial 3D face mesh model needs to be adjusted. Specifically, the geodesic distance from each second expression feature point to every other mesh vertex of the initial 3D face mesh model (excluding the current second expression feature point itself) is first calculated. Because the initial 3D face mesh model is a three-dimensional mesh model made up of individual mesh cells, the geodesic distance can be understood as the length of the shortest mesh path from the current second expression feature point to a given mesh vertex along the mesh edges, that is, the path distance.
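The path-distance reading above can be sketched as Dijkstra's shortest path over the mesh edge graph; this is an edge-path approximation of the surface geodesic, and the function name and toy mesh are illustrative:

```python
import heapq
from collections import defaultdict

def geodesic_distances(vertices, edges, source):
    """Approximate geodesic distance from a source vertex to every mesh
    vertex as the shortest path along mesh edges (Dijkstra)."""
    graph = defaultdict(list)
    for a, b in edges:
        d = sum((va - vb) ** 2 for va, vb in zip(vertices[a], vertices[b])) ** 0.5
        graph[a].append((b, d))
        graph[b].append((a, d))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Unit square of edges: vertex 2 is reached along two edges (path length 2).
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
dist = geodesic_distances(verts, edges, source=0)
```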
Afterwards, the z coordinate of the second expression feature point X_i on the initial 3D face mesh model is fixed, and the x and y coordinates of the second expression feature point X_i are changed with a first preset algorithm, to obtain the third expression feature point X_i' corresponding to this second expression feature point X_i. The first preset algorithm is, for example, the Nelder-Mead simplex algorithm.
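A minimal sketch of this step, assuming SciPy's Nelder-Mead implementation and an affine camera matrix P; using the single point's reprojection error as the objective is our illustrative choice, not spelled out by the text:

```python
import numpy as np
from scipy.optimize import minimize

def adjust_feature_point(P, X_i, x_i):
    """Fix the z coordinate of the second expression feature point X_i and
    search its x, y coordinates with the Nelder-Mead simplex algorithm, so
    that its projection under the camera matrix P approaches the first
    expression feature point x_i; the result is the third point X_i'."""
    z = X_i[2]

    def reprojection_error(xy):
        X_h = np.array([xy[0], xy[1], z, 1.0])   # homogeneous, z held fixed
        return float(np.sum((x_i - P @ X_h) ** 2))

    res = minimize(reprojection_error, X_i[:2], method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-14})
    return np.array([res.x[0], res.x[1], z])

# Orthographic camera: the optimum simply moves x, y onto x_i.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
X_prime = adjust_feature_point(P, np.array([0.0, 0.0, 2.0]), np.array([1.0, 3.0]))
```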
Then, with the geodesic distance as a constraint, a second preset algorithm is adopted to determine each mesh vertex X_j' corresponding to the third expression feature point X_i', and the initial 3D face mesh model is adjusted according to the third expression feature point X_i' and each mesh vertex X_j' corresponding to the third expression feature point X_i'. The second preset algorithm is, for example, a radial basis function or a Laplacian mesh deformation algorithm.
It can be understood that the geodesic distance is used as a constraint because the mesh vertices of the non-feature points around a second expression feature point should, as far as possible, still keep their relative positional relationship with the third expression feature point, according to the geodesic distance, after the second expression feature point has been changed into the third expression feature point.
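One hedged way to realize this constraint is a Gaussian radial basis function of the geodesic distance (a radial basis function being one of the example second preset algorithms), so that a vertex's displacement decays with its on-surface distance from the moved feature point; the decay profile and the sigma parameter are illustrative assumptions:

```python
import numpy as np

def propagate_displacement(vertices, geo_dist, i_feat, new_feat, sigma=1.0):
    """Move surrounding mesh vertices along with a displaced feature point.

    Each vertex X_j receives the feature point's displacement X_i' - X_i
    scaled by a Gaussian radial basis function of its geodesic distance, so
    vertices close (on the surface) to the feature point follow it closely
    while distant ones stay put, preserving relative positions.
    """
    disp = new_feat - vertices[i_feat]               # X_i' - X_i
    weights = np.exp(-(geo_dist / sigma) ** 2)       # RBF of geodesic distance
    weights[i_feat] = 1.0                            # the feature point itself
    return vertices + weights[:, None] * disp

verts = np.array([[0.0, 0.0, 0.0],    # feature point X_0
                  [1.0, 0.0, 0.0],
                  [5.0, 0.0, 0.0]])
geo = np.array([0.0, 1.0, 5.0])       # geodesic distances from X_0
moved = propagate_displacement(verts, geo, 0, np.array([0.0, 1.0, 0.0]))
```

The nearby vertex follows the feature point by a factor exp(-1), while the far vertex is left essentially unchanged.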
In this embodiment, when the matching degree of the initial 3D face mesh model and the corresponding 2D face image is low, the initial 3D face mesh model is adjusted with the above geodesic distance as a constraint. While the expression feature points are adjusted, this helps ensure that the other, non-expression feature points keep their relative positional relationship with the corresponding expression feature points after the adjustment, so that the adjusted 3D face mesh model has a better matching degree with the corresponding 2D face image.
Fig. 3 is a flowchart of the 3D human face mesh model processing method provided by Embodiment Two of the present invention. As shown in Fig. 3, this processing method is an improvement on the prior-art process of obtaining the initial 3D face mesh model. In the prior-art scheme of establishing, on the basis of a facial expression database, a 3D face mesh model that matches the original 2D face image, each facial expression model in the facial expression database is established from statistics on the age, sex, face shape, mood, expression and so on of different individuals, and therefore shows obvious individual differences; if the expression in the original 2D face image is beyond the scope of the facial expression database, a matching 3D face mesh model cannot be obtained from the facial expression database. For this reason, the method provided by this embodiment is used to establish the 3D face mesh model corresponding to the original 2D face image. The original 2D face image described in the embodiments shown in Fig. 1 and Fig. 2 specifically comprises, in this embodiment, a target 2D face image and a reference 2D face image; in the application scenario of human face expression transfer, the human face expression in the reference 2D face image needs to be transferred into the target 2D face image.
The described method that the present embodiment provides comprises:
Step 301, extracting the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points;
Step 302, determining a near-frontal face image according to the face contour feature points of the target two-dimensional face image and of the reference two-dimensional face image, the near-frontal face image being the target two-dimensional face image or the reference two-dimensional face image;
Step 303, deforming a target neutral face model determined from a neutral face database according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
Step 304, deforming each preset expression model comprised in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
Step 305, determining a first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional face image, and determining a second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional face image;
Step 306, merging the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merging the expression models according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
This method can likewise be performed by the above processing device. The two images input to the processing device are here called the target two-dimensional face image and the reference two-dimensional face image; in the facial expression transfer process, the facial expression on the reference two-dimensional face image needs to be transferred onto the target two-dimensional face image.
First, the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image are extracted respectively. A mature algorithm such as the Active Shape Model (hereinafter ASM) can be adopted to detect the facial expression feature points accurately. The facial expression feature points comprise face contour feature points and first expression feature points: the face contour feature points are points that clearly delineate the facial contour, while the first expression feature points capture how the facial organs change when different expressions are shown, i.e. the shapes of the nose, mouth, eyebrows, eyes and so on together form the first expression feature points of the facial expression.
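As a concrete illustration, the split of detected landmarks into face contour feature points and first expression feature points can be sketched as follows. The 68-point annotation with jawline indices 0-16 is an assumption borrowed from common landmark detectors, not something the embodiment specifies, and the detector itself (ASM or otherwise) is outside this sketch:

```python
import numpy as np

def split_feature_points(landmarks: np.ndarray):
    """Split (68, 2) landmarks into contour and expression feature points.

    Assumes the common 68-point convention: indices 0-16 trace the jawline
    (face contour feature points), indices 17-67 cover eyebrows, nose, eyes
    and mouth (first expression feature points)."""
    assert landmarks.shape == (68, 2)
    contour_points = landmarks[:17]
    expression_points = landmarks[17:]
    return contour_points, expression_points
```

With a detector producing the 68 points, the two groups then feed the near-frontal selection of step 302 and the weight determination of step 305 respectively.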
In this embodiment, because the face contour feature points reflect the orientation of the face in the corresponding image, a near-frontal face image can be selected from the two images according to the face contour feature points in the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, respectively. Specifically, the face contour curvature of the target two-dimensional face image is calculated from its face contour feature points, the face contour curvature of the reference two-dimensional face image is calculated from its face contour feature points, and the image with the smaller face contour curvature is selected as the near-frontal face image; a small face contour curvature means the face is oriented closer to the front.
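A minimal sketch of this selection step, using the mean turning angle of the contour polyline as the curvature measure (the embodiment does not specify how curvature is computed, so the measure and the tie-breaking are assumptions) and following the text's criterion that the smaller curvature indicates the near-frontal image:

```python
import numpy as np

def contour_curvature(contour: np.ndarray) -> float:
    """Mean absolute turning angle (radians) along a contour polyline.

    Used here as a stand-in for the text's 'face contour curvature':
    a flatter polyline gives a smaller value."""
    d = np.diff(contour, axis=0)                      # edge vectors between feature points
    ang = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))     # edge directions
    return float(np.abs(np.diff(ang)).mean())

def pick_near_frontal(target_contour: np.ndarray, reference_contour: np.ndarray) -> str:
    """Return which image is near-frontal, per the text's criterion that
    smaller contour curvature means a more frontal face."""
    return ("target"
            if contour_curvature(target_contour) <= contour_curvature(reference_contour)
            else "reference")
```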
Next, the target neutral face model determined from the neutral face database is deformed according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain the neutral face model of the near-frontal face image. For example, if the reference two-dimensional face image is selected as the near-frontal face image, the target neutral face model determined from the neutral face database is deformed, e.g. by scaling and rotation, according to the face contour feature points and the first expression feature points of the reference two-dimensional face image, to obtain the neutral face model corresponding to the reference two-dimensional face image. The neutral face database contains multiple three-dimensional neutral face models covering individual differences such as gender, age and race; the target neutral face model determined from the database may be a randomly selected three-dimensional neutral face model, or a weighted fusion of all or part of the three-dimensional neutral face models comprised in the database.
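The "scaling, rotation" deformation mentioned above can be illustrated with a classic least-squares similarity alignment (the Umeyama/Procrustes method); the embodiment does not name its deformation algorithm, so this is only a plausible stand-in, shown here on 2D feature-point correspondences:

```python
import numpy as np

def similarity_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares scale s, rotation R and translation t (2D) such that
    s * R @ src_i + t approximates dst_i (classic Umeyama alignment).

    src, dst: (N, 2) arrays of corresponding feature points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                    # centered point sets
    U, S, Vt = np.linalg.svd(B.T @ A)                # cross-covariance (up to 1/N)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Fitting the contour and expression feature points of the neutral face model to those of the near-frontal image this way yields the scale, rotation and translation that bring the model into correspondence with the image.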
In this embodiment, the obtained neutral face model has a face contour basically consistent with that of the near-frontal face image, but carries no detailed expression features. This neutral face model therefore serves as an intermediary for the subsequent expression-model processing.
Then, each preset expression model comprised in the preset expression library is deformed according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image. Specifically, the preset expression library is preferably a generic blendshape model containing multiple different expression models; in this embodiment, the generic blendshape model is adopted to add expression features to the above neutral face model. The multiple expression models are deformed according to the neutral face model of the near-frontal face image, to obtain the blendshape expression models corresponding to the near-frontal face image. Each resulting blendshape expression model then carries both its own expression features and the face contour features of the near-frontal face image.
Then, the first weight coefficient of each blendshape expression model corresponding to the target two-dimensional face image is determined according to the first expression feature points of the target two-dimensional face image, and the second weight coefficient of each blendshape expression model corresponding to the reference two-dimensional face image is determined according to the first expression feature points of the reference two-dimensional face image. That is, because the expression features on the blendshape expression models differ, the proportion of each blendshape expression model must be determined separately for the target two-dimensional face image and for the reference two-dimensional face image. The blendshape expression models are then merged according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merged according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image. Merging means superimposing the blendshape expression models according to their respective weight coefficients, i.e. the organs corresponding to each first expression feature point in each blendshape expression model are superimposed according to the weight coefficient of that blendshape expression model.
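The merging described above can be sketched with the common blendshape convention of a neutral mesh plus weighted per-model offsets; the embodiment only says the models are superimposed according to their weight coefficients, so the exact convention is an assumption:

```python
import numpy as np

def merge_blendshapes(neutral: np.ndarray, expression_models, weights) -> np.ndarray:
    """Fuse expression models into one mesh.

    neutral and each expression model are (V, 3) vertex arrays sharing the
    same topology; the result is the neutral mesh plus the weighted offset
    each expression model contributes (one common blendshape convention)."""
    merged = neutral.astype(float).copy()
    for w, model in zip(weights, expression_models):
        merged += w * (model - neutral)
    return merged
```

Running this once with the first weight coefficients and once with the second weight coefficients yields the two three-dimensional face mesh models of step 306.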
Taking the first weight coefficients as an example of how the weight coefficients are determined: for a certain first expression feature point in the target two-dimensional face image, the feature points of the organ corresponding to that first expression feature point are traversed in each blendshape expression model in turn; these feature points may be labeled manually or delimited in advance, for example on the eyebrow. The weight coefficient of each blendshape expression model is then determined according to how close the feature points of that organ are to the first expression feature point.
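One plausible way to turn this "closeness of feature points" into weight coefficients is a least-squares fit of the models' expression feature points to the image's first expression feature points; the embodiment does not fix the algorithm, and in practice the weights would typically also be constrained non-negative, so this is a sketch under stated assumptions:

```python
import numpy as np

def solve_weights(image_points: np.ndarray, model_points) -> np.ndarray:
    """Least-squares weights making the weighted combination of each model's
    expression feature points match the image's first expression feature points.

    image_points: (N, 2) feature points of the two-dimensional face image;
    model_points: list of K (N, 2) arrays, the (projected) expression feature
    points of each blendshape expression model."""
    A = np.stack([m.ravel() for m in model_points], axis=1)  # (2N, K) system
    b = image_points.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```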
After step 306 obtains the three-dimensional face mesh model corresponding to the target two-dimensional face image and the three-dimensional face mesh model corresponding to the reference two-dimensional face image, the method shown in Fig. 1 or Fig. 2 may optionally be performed to adjust the obtained three-dimensional face mesh models.
Optionally, after step 306 is performed, or after the obtained three-dimensional face mesh models have been adjusted according to the method shown in Fig. 1 or Fig. 2, the following steps may also be performed to achieve facial expression transfer.
Step 307, deforming the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and deforming the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image;
Step 308, merging the deformed target two-dimensional face image and the deformed reference two-dimensional face image, to transfer the expression of the reference two-dimensional face image onto the target two-dimensional face image.
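The merging in step 308 can be sketched as a mask blend in which the expression region of the deformed reference image replaces the corresponding region of the deformed target image; the blending rule and the mask are assumptions, since the embodiment does not specify how the two deformed images are combined:

```python
import numpy as np

def merge_images(deformed_target: np.ndarray,
                 deformed_reference: np.ndarray,
                 expression_mask: np.ndarray) -> np.ndarray:
    """Blend the two deformed images: inside the expression region (mask = 1)
    the deformed reference supplies the pixels, elsewhere the deformed target
    is kept, so the reference expression lands on the target face."""
    m = expression_mask[..., None].astype(float)   # broadcast mask over channels
    return (1.0 - m) * deformed_target + m * deformed_reference
```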
In this embodiment, the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image are extracted respectively; a near-frontal face image is selected from the two according to the face contour feature points among those facial expression feature points; the target neutral face model determined from the neutral face database, which does not depend on the features of any particular individual, is deformed according to the near-frontal face image to obtain a neutral face model; each blendshape expression model comprised in the generic blendshape model is then deformed according to this neutral face model; the first and second weight coefficients of the blendshape expression models are determined according to the first expression feature points of the target two-dimensional face image and of the reference two-dimensional face image respectively; and the blendshape expression models are merged with the respective weight coefficients, finally yielding the three-dimensional face mesh models corresponding to the target and reference two-dimensional face images. Because the neutral face database and the generic blendshape model both avoid individual differences between people, this overcomes the prior-art defect that establishing a three-dimensional face mesh model based on a facial expression database easily fails.
Fig. 4 is a schematic structural diagram of the three-dimensional face mesh model processing device provided by Embodiment Three of the present invention. As shown in Fig. 4, the processing device comprises:
Acquisition module 11, for obtaining the initial three-dimensional face mesh model corresponding to the original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to the first expression feature points of the original two-dimensional face image;
Computing module 12, for calculating the camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
Wherein, P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point, on the original two-dimensional face image, corresponding to the second expression feature point X_i, and N is the number of first expression feature points and second expression feature points;
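The body of formula (1) does not survive in the text (it was an image in the original publication). From the variable definitions above, a plausible reconstruction — assuming a least-squares projection fit, the standard way to estimate a camera matrix from 2D/3D point correspondences — would be:

```latex
P \;=\; \arg\min_{P}\ \sum_{i=1}^{N} \bigl\lVert\, x_i - P X_i \,\bigr\rVert^{2} \tag{1}
```

Here X_i is taken in homogeneous coordinates so that a 3×4 matrix P projects it onto the image plane; this is a reconstruction from context, not the patent's verbatim formula.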
Judging module 13, for mapping the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second expression feature points and the first expression feature points, and for adjusting the initial three-dimensional face mesh model according to the judgment result.
The processing device of this embodiment may be used to perform the technical scheme of the method embodiment shown in Fig. 1; its implementation principle and technical effect are similar and are not repeated here.
Fig. 5 is a schematic structural diagram of the three-dimensional face mesh model processing device provided by Embodiment Four of the present invention. As shown in Fig. 5, on the basis of the embodiment shown in Fig. 4, the judging module 13 of this processing device comprises:
Computing unit 131, for calculating the matching error between the second expression feature points and the first expression feature points according to formula (2):
Wherein, Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
Judging unit 132, for judging whether the matching error is greater than or equal to a preset threshold;
Adjustment unit 133, for adjusting, if the matching error is greater than or equal to the preset threshold, the initial three-dimensional face mesh model, so that the matching error between the second expression feature points on the adjusted three-dimensional face mesh model and the first expression feature points is less than the preset threshold.
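The matching error computed by unit 131 can be illustrated as follows, taking formula (2) to be the weighted sum of squared distances between each projected second expression feature point and its first expression feature point; the formula image itself is not present in the text, so this reconstruction from the variable definitions, and the homogeneous 3×4 camera model, are assumptions:

```python
import numpy as np

def matching_error(P: np.ndarray, X3d: np.ndarray, x2d: np.ndarray,
                   weights: np.ndarray) -> float:
    """Weighted sum of squared reprojection distances.

    P: (3, 4) camera parameter matrix (homogeneous model assumed);
    X3d: (N, 3) second expression feature points; x2d: (N, 2) first
    expression feature points; weights: (N,) per-pair coefficients w_i."""
    homo = np.c_[X3d, np.ones(len(X3d))]           # (N, 4) homogeneous points
    proj = (P @ homo.T).T                          # (N, 3) projected points
    proj = proj[:, :2] / proj[:, 2:3]              # perspective divide
    return float(np.sum(weights * np.sum((proj - x2d) ** 2, axis=1)))
```

The judging unit then compares this Err against the preset threshold to decide whether the adjustment unit must act.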
Further, the adjustment unit 133 comprises:
Computation subunit 1331, for calculating the geodesic distance from the second expression feature point X_i to each mesh vertex X_j of the initial three-dimensional face mesh model, wherein i is not equal to j;
First adjustment subunit 1332, for fixing the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model and adopting a first preset algorithm to change the x, y coordinates of the second expression feature point X_i, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
Determination subunit 1333, for adopting, with the geodesic distance as a constraint, a second preset algorithm to determine each mesh vertex X_j' corresponding to the third expression feature point X_i';
Second adjustment subunit 1334, for adjusting the initial three-dimensional face mesh model according to the third expression feature point X_i' and each mesh vertex X_j' corresponding to the third expression feature point X_i'.
Further, the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image;
The acquisition module 11 comprises:
Extraction unit 111, for extracting the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points;
First determining unit 112, for determining a near-frontal face image according to the face contour feature points of the target two-dimensional face image and the face contour feature points of the reference two-dimensional face image, the near-frontal face image being the target two-dimensional face image or the reference two-dimensional face image;
First deformation unit 113, for deforming the target neutral face model determined from the neutral face database according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain the neutral face model corresponding to the near-frontal face image;
Second deformation unit 114, for deforming each preset expression model comprised in the preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
Second determining unit 115, for determining the first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional face image, and determining the second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional face image;
Merging unit 116, for merging the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merging the expression models according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
Particularly, the preset expression library comprises a generic blendshape model.
Further, the first determining unit 112 is specifically configured to:
calculate the face contour curvature of the target two-dimensional face image according to the face contour feature points of the target two-dimensional face image, and calculate the face contour curvature of the reference two-dimensional face image according to the face contour feature points of the reference two-dimensional face image; and
determine the image with the smaller face contour curvature to be the near-frontal face image.
Further, the processing device also comprises:
Deformation module 21, for deforming the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and deforming the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image;
Merging module 22, for merging the deformed target two-dimensional face image and the deformed reference two-dimensional face image, to transfer the expression of the reference two-dimensional face image onto the target two-dimensional face image.
The processing device of this embodiment may be used to perform the technical scheme of the method embodiment shown in Fig. 2 or Fig. 3; its implementation principle and technical effect are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of the processing device provided by Embodiment Five of the present invention. As shown in Fig. 6, the processing device comprises:
Memory 31 and a processor 32 connected to the memory 31, wherein the memory 31 is configured to store a set of program code, and the processor 32 is configured to call the program code stored in the memory 31 to perform the three-dimensional face mesh model processing method shown in Fig. 1: obtaining the initial three-dimensional face mesh model corresponding to the original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to the first expression feature points of the original two-dimensional face image; and calculating the camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
Wherein, P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point, on the original two-dimensional face image, corresponding to the second expression feature point X_i, and N is the number of first expression feature points and second expression feature points; and mapping the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional face mesh model according to the judgment result.
Further, the processor 32 is also configured to calculate the matching error between the second expression feature points and the first expression feature points according to formula (2):
Wherein, Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
and to judge whether the matching error is greater than or equal to a preset threshold, and if so, to adjust the initial three-dimensional face mesh model so that the matching error between the second expression feature points on the adjusted three-dimensional face mesh model and the first expression feature points is less than the preset threshold.
Further, the processor 32 is also configured to calculate the geodesic distance from the second expression feature point X_i to each mesh vertex X_j of the initial three-dimensional face mesh model, wherein i is not equal to j; to fix the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model and adopt a first preset algorithm to change the x, y coordinates of the second expression feature point X_i, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i; to adopt, with the geodesic distance as a constraint, a second preset algorithm to determine each mesh vertex X_j' corresponding to the third expression feature point X_i'; and to adjust the initial three-dimensional face mesh model according to the third expression feature point X_i' and each mesh vertex X_j' corresponding to the third expression feature point X_i'.
Further, the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image, and the processor 32 is also configured to extract the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points; to determine a near-frontal face image according to the face contour feature points of the target two-dimensional face image and the face contour feature points of the reference two-dimensional face image, the near-frontal face image being the target two-dimensional face image or the reference two-dimensional face image; to deform the target neutral face model determined from the neutral face database according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain the neutral face model corresponding to the near-frontal face image; to deform each preset expression model comprised in the preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image; to determine the first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional face image, and determine the second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional face image; and to merge the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merge the expression models according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
Further, the processor 32 is also configured to calculate the face contour curvature of the target two-dimensional face image according to the face contour feature points of the target two-dimensional face image, and calculate the face contour curvature of the reference two-dimensional face image according to the face contour feature points of the reference two-dimensional face image, and to determine the image with the smaller face contour curvature to be the near-frontal face image.
Further, the processor 32 is also configured to deform the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and deform the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image, and to merge the deformed target two-dimensional face image and the deformed reference two-dimensional face image, so as to transfer the expression of the reference two-dimensional face image onto the target two-dimensional face image.
One of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.