CN111127641A - Three-dimensional human body parametric modeling method with high-fidelity facial features - Google Patents

Three-dimensional human body parametric modeling method with high-fidelity facial features

Info

Publication number
CN111127641A
Authority
CN
China
Prior art keywords
human body
dimensional human
representing
model
texture
Prior art date
Legal status
Granted
Application number
CN201911410126.5A
Other languages
Chinese (zh)
Other versions
CN111127641B (en)
Inventor
陈寅
杨启亮
程志全
姜巍
周旭
吴彤
雷运洪
林帅
拉尔夫·马丁
Current Assignee
Avatar Technology Shenzhen Co ltd
Hunan Huashen Technology Co ltd
Shenzhen Institute of Advanced Technology of CAS
Army Engineering University of PLA
Original Assignee
Avatar Technology Shenzhen Co ltd
Hunan Huashen Technology Co ltd
Shenzhen Institute of Advanced Technology of CAS
Army Engineering University of PLA
Priority date
Filing date
Publication date
Application filed by Avatar Technology Shenzhen Co ltd, Hunan Huashen Technology Co ltd, Shenzhen Institute of Advanced Technology of CAS, Army Engineering University of PLA filed Critical Avatar Technology Shenzhen Co ltd
Priority to CN201911410126.5A priority Critical patent/CN111127641B/en
Publication of CN111127641A publication Critical patent/CN111127641A/en
Application granted granted Critical
Publication of CN111127641B publication Critical patent/CN111127641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional human body parameterized modeling method with high-fidelity facial features. A three-dimensional human body parameterized mathematical model is trained; for a user object to be processed, the variables related to the user object in the approximate solving equation are solved from the raw three-dimensional human body data, and a three-dimensional human body model carrying the geometric shape data of the user object is reconstructed. A standard template with a texture map is created: the standard texture provides a basic texture picture for a user instance and supplies general texture mapping information. The user's texture picture is matched onto the standard texture picture through texture mapping, and a seamless image fusion algorithm then reduces the hue difference between the user's texture picture and the standard texture picture and realizes seamless stitching between the two, yielding the final three-dimensional human body parameterized modeling result with high-fidelity facial features.

Description

Three-dimensional human body parametric modeling method with high-fidelity facial features
Technical Field
The invention relates to the technical field of image data processing, in particular to a three-dimensional human body parametric modeling method with high-fidelity facial features.
Background
In the field of computer graphics, three-dimensional human body modeling is challenging for two main reasons.
First, the human body is a dynamic object composed of various materials and has a high degree of complexity, so it is difficult to digitize perfectly in a three-dimensional reconstruction; the raw three-dimensional human body data are therefore imperfect and invariably contain defects.
Research shows that massive numbers of three-dimensional human body model instances form a three-dimensional human body space, that this space can be described formulaically by a parameterized mathematical model, and that the three-dimensional human body model of a specific instance can be established from it; this is the three-dimensional human body parametric modeling approach. The approach is robust: because the solved result always conforms to the three-dimensional human body parameterized mathematical model, a complete three-dimensional human body model can be established automatically even when the input data are incomplete, avoiding the defective results that conventional three-dimensional scanning produces from data defects.
Second, users are highly sensitive to the result: any distortion is easily perceived. Three-dimensional human body modeling therefore requires not only accurate reproduction of the three-dimensional geometric shape data of the human body but also visual fidelity, which is particularly important for the facial features.
According to the literature search, the following are some representative patents related to the present invention:
1) Patents related to three-dimensional human body parametric modeling: CN201310215996.3 and CN201310555513.4. CN201310215996.3 is a method for recovering the form and posture of a three-dimensional human body in real time without marker points; CN201310555513.4 is a marker-free real-time three-dimensional capture system for performers. These two patents are prior work of the project group to which the present invention belongs; each provides its own three-dimensional human body parametric modeling method and solves for the posture and form of the three-dimensional human body geometry. They adopt a discrete three-dimensional human body parametric modeling method, focus on the parametric representation of the body, and lack a high-fidelity expression of the facial features.
2) Patents related to facial texture reconstruction: a real-time three-dimensional face reconstruction method based on a single-frame face image, CN201811418790.X; a facial texture feature scanner based on three-dimensional laser, CN201620396107.7; and a three-dimensional head modeling method and device, CN201610371399.3. These patents compute an affine matrix from a few facial feature points, such as the two eye centers and the nose tip, and then establish the matching relationship between the model and the texture image through the affine matrix. The textures they reconstruct focus mainly on the fidelity of the facial features, carry no uniform texture mapping information, and are therefore difficult to use in applications such as augmented reality and virtual reality.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a three-dimensional human body parametric modeling method with high-fidelity facial features, solving the problem of three-dimensional human body parametric modeling with high-fidelity facial features.
In order to achieve the above purpose, the invention adopts the following technical scheme. A three-dimensional human body parametric modeling method with high-fidelity facial features: three-dimensional human body original point cloud data of a user object and the positions of facial feature points are acquired;
the morphological space parameters and posture space parameters related to the user object are solved through the trained three-dimensional human body parameterized mathematical model, and a three-dimensional human body model of the user object with high-fidelity facial geometric characteristics is reconstructed;
a standard texture template is pre-established; through the reconstructed three-dimensional human body model with high-fidelity facial geometric characteristics, the acquired facial texture picture of the user is projected onto the three-dimensional human body model so as to replace the facial region of the standard texture template, and seamless stitching between the two is realized through a Poisson fusion algorithm, yielding a three-dimensional human body parameterized modeling result with high-fidelity facial texture features.
Further, the three-dimensional human body parameterized mathematical model is as follows:
$M(\beta,\theta;\Phi):\quad e = B_f(\theta)\,S_f(\beta)\,Q_f(\theta)\,e^{*} \qquad (1)$
where S_f(β) is the morphological deformation matrix, B_f(θ) is the rigid posture deformation matrix, Q_f(θ) is the non-rigid posture deformation matrix, θ is the posture space parameter, β is the form space parameter, the constant Φ denotes the form and posture parameters learned in advance through data training, e* is a triangle edge vector of the triangular mesh in the pre-established three-dimensional human body standard model, and e is the deformed edge vector.
Further, in the above-mentioned case,
$S_f(\beta) = \sum_{b'=0}^{|\beta|-1} \psi_{b',f}\,\beta_{b'} + \psi_{|\beta|,f}$
where |β| is the number of morphological feature parameters, ψ_{b',f} (0 ≤ b' < |β|) is the linear coefficient of the b'-th morphological feature parameter on the f-th triangle, ψ_{|β|,f} is the offset of the f-th triangle in the form deformation matrix S, and β_{b'} is the b'-th morphological feature parameter;
$B_f(\theta) = \sum_{b=0}^{|Bone|-1} w_{b,f}\,R(\theta_b)$
where |Bone| denotes the number of rigid components, w_{b,f} is the skinning weight of the b-th rigid component on the f-th triangle, and R(θ_b) is the rigid transformation matrix of the b-th rigid component;
$Q_f(\theta) = \gamma_{0,f} + \sum_{b=1}^{|Bone|-1} \gamma_{b,f}\,\theta_b$
where γ_{0,f} is the deformation matrix when the rigid components of the human body undergo no relative deformation and is an identity matrix, and γ_{b,f} is the linear coefficient of the b-th Rodrigues rotation vector θ_b on the f-th triangle.
Further, training the three-dimensional human body parameterized mathematical model comprises the following steps: the pre-established standard triangular mesh T* is deformed to register with each sample k of the naked three-dimensional human body database, so that the deformed vertices match the vertices of sample k exactly; feature points are selected in the face area of T* and are required to reach designated positions in the point cloud model after deformation, whereby the three-dimensional human body parameterized mathematical model is trained.
Furthermore, among the constant parameters of the three-dimensional human body parameterized mathematical model, the skinning weight of each rigid component on each triangle is pre-specified, and the form constant parameter Ψ and the posture constant parameter Γ of the three-dimensional human body parameterized mathematical model are calculated;
the process of solving the constant parameters is to find the optimal constants Ψ, Γ and the optimal triangular mesh model of each human body in the training set such that the following objective function is minimized:
$\min_{\Psi,\Gamma,\{T_k,\theta_k,\beta_k\}} \sum_{k}\Big(\sum_{f}\sum_{i=0}^{2}\big\|e_{k,f,i}-B_f(\theta_k)\,S_f(\beta_k)\,Q_f(\theta_k)\,e^{*}_{f,i}\big\|^{2}+\lambda_{1}\sum_{(f1,f2)\in Adj}\big\|S_{f1}(\beta_k)-S_{f2}(\beta_k)\big\|^{2}+\lambda_{2}\sum_{f}\big\|Q_f(\theta_k)-I\big\|^{2}+\lambda_{3}\sum_{v\in V_k}\mathrm{dist}^{2}(v,P_k)+\lambda_{4}\sum_{v\in V_k^{Face}}\big\|v-p_v\big\|^{2}\Big) \qquad (2)$
where T_k denotes the triangular mesh model of the k-th human body in the training set; λ_1, λ_2, λ_3, λ_4 are weight parameters; θ_k and β_k denote the posture and morphological parameters of the k-th human body in the training set; e*_{f,i} (i = 0, 1, 2) denotes the i-th edge of the f-th triangle of the standard triangular mesh T*, and e_{k,f,i} denotes the i-th edge of the f-th triangle of the triangular mesh T_k of the k-th human body in the training set; Adj = {(f1, f2) | f1 and f2 are adjacent} denotes the set of all pairs of adjacent triangles in T*, with f1, f2 two adjacent triangles and S_{f1}(·), S_{f2}(·) their form deformation matrices; I is the 3 × 3 identity matrix; V_k denotes the set of all vertices of the triangular mesh of the k-th human body in the training set, and P_k the set of all points of its point cloud model; V_k^{Face} ⊂ V_k denotes the set of facial feature points of the triangular mesh of the k-th human body in the training set, P_k^{Face} ⊂ P_k the set of facial feature points of its point cloud model, and p_v ∈ P_k^{Face} the target point corresponding to a vertex v ∈ V_k^{Face}.
The objective function is optimized with a trust-region method; alternating iterative computation of each type of variable is completed through gradient descent until the computation of the objective function converges, the constant parameters Ψ and Γ are solved, and the trained three-dimensional human body parameterized mathematical model is obtained.
Further, the morphological space parameters and posture space parameters related to the user object are solved, and a three-dimensional human body model of the user object with high-fidelity facial features is simultaneously reconstructed, as follows:
skin detection is carried out on the user's raw three-dimensional portrait data and two types of regions are identified: the real skin and head point cloud P_Skin and the attached clothing point cloud P_Cloth. For the facial feature points, the corresponding pixel positions are first identified on the image with a face recognition algorithm, and the corresponding three-dimensional spatial positions are then found through a camera perspective model and serve as the target positions P_Face of the facial feature points.
$\min_{\beta,\theta}\ \sum_{f}\sum_{i=0}^{2}\big\|e_{f,i}-B_f(\theta)\,S_f(\beta)\,Q_f(\theta)\,e^{*}_{f,i}\big\|^{2}+\lambda_{1}\sum_{v\in V}\mathrm{dist}^{2}(v,P_{Skin})+\lambda_{2}\sum_{v\in V}\rho\big(\mathrm{dist}(v,P_{Cloth})\big)+\lambda_{3}\sum_{v\in V^{Face}}\big\|v-p_v\big\|^{2} \qquad (3)$
where ρ(·) is the Geman-McClure function; e_{f,i} denotes the i-th edge of the f-th triangle of the triangular mesh of the user object; V denotes the set of all vertices of the triangular mesh of the user object; V^{Face} denotes the set of facial feature points of the triangular mesh of the user object, and P^{Face} the set of their target positions, with p_v ∈ P^{Face} the target position corresponding to a vertex v ∈ V^{Face}. The dogleg method is adopted to complete the solution, and the form variable β and posture variable θ of the user object and the deformed three-dimensional human body model T with high-fidelity facial geometric features are calculated.
The invention achieves the following beneficial effects:
the method is different from the existing three-dimensional human body parametric modeling algorithm, and the texture maps are firstly brought into the three-dimensional human body parametric modeling system, so that the three-dimensional human body modeling result of the high-fidelity facial features is supported.
The invention establishes a standard template with texture mapping, referred to as the standard texture for short. The standard texture provides a basic texture picture for a user instance and supplies uniform texture mapping information; the user's texture picture is then fused with the standard texture through a seamless image fusion technique to achieve the desired texture mapping.
Three-dimensional human body modeling with high-fidelity facial features is thereby realized, effectively addressing the sensitivity of user perception in three-dimensional human body modeling.
Drawings
Fig. 1 is a functional block diagram of an implementation of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
As shown in Fig. 1, a three-dimensional human body parametric modeling method with high-fidelity facial features comprises the following steps.
Step 1, establishing a three-dimensional human body parameterized mathematical model.
Based on the mathematical theory of non-rigid deformation in the three-dimensional human body geometric space, a principal component analysis linearization method is used to solve the three-dimensional human body parameterized mathematical model equation, thereby simplifying the model into a quasi-linearized approximate solving equation.
Assume that each discretized three-dimensional human body is represented as a triangular mesh comprising a vertex set V = {v_0, ..., v_{|V|-1}}, an edge set E = {e_0, ..., e_{|E|-1}} (each edge e is a vector formed by two adjacent vertices) and a face set F = {f_0, ..., f_{|F|-1}} (each face f, i.e. a triangle, is formed by three adjacent vertices), where v_0, e_0 and f_0 denote the first vertex, edge and face and v_{|V|-1}, e_{|E|-1} and f_{|F|-1} the last ones; |V|, |E| and |F| denote the numbers of vertices, edges and faces in the mesh. The pre-established three-dimensional human body standard model is denoted T* = {V*, E*, F*}, and the model T = {V, E, F} of any specific three-dimensional human body is obtained by deforming T*; V*, E*, F* denote the vertex, edge and face sets of the triangular mesh in the pre-established standard model, and V, E, F those of the three-dimensional human body model.
Under the triangular mesh description, the three-dimensional human body parameterized mathematical model M(β, θ; Φ) expresses a mapping from the human-body-related parameters (β, θ) to the three-dimensional human body space, i.e. to a specific mesh T = {V, E, F}.
The variable β represents the form space parameters such as height, weight and the three girths. The three-dimensional human body is abstracted as a joint model and divided into |Bone| (|Bone| = 19) rigid components; the relative rotation of the b-th rigid component (1 ≤ b < |Bone|) with respect to its parent rigid component is represented by a three-dimensional Rodrigues vector θ_b, the Rodrigues vector θ_0 of the root rigid component represents the absolute rotation relative to the three-dimensional human body standard model T*, and θ_b and θ_0 together form the posture space parameter θ (its dimension is 19 × 3 = 57). The constant Φ consists of the form and posture parameters learned in advance through data training: the form constant parameter set Ψ = {ψ_{b',f}, b' = 0, 1, ..., |β|, f = 0, 1, ..., |F|-1}, the posture constant parameter set Γ = {γ_{b,f}, b = 0, 1, ..., |Bone|-1, f = 0, 1, ..., |F|-1} and the skinning weight set W = {w_{b,f}, b = 0, 1, ..., |Bone|-1, f = 0, 1, ..., |F|-1}. Here |β| is the number of form space parameters, b is the index of a rigid component, b' is the index of a form space parameter, ψ_{b',f} is the linear coefficient (a 9 × 1 matrix) of the b'-th form space parameter on the f-th triangle, γ_{b,f} is the linear coefficient (a 9 × 3 matrix) of the b-th Rodrigues rotation vector on the f-th triangle, and w_{b,f} is the skinning weight of the b-th rigid component on the f-th triangle. Because non-rigid deformation and other nonlinear factors are involved, the three-dimensional human body parameterized mathematical model is a typical second-order partial differential equation with constraints. Under the joint action of the variables and the constants, the f-th triangle edge vector e* of T* passes through the non-rigid small-scale posture deformation matrix Q_f(θ), the form deformation matrix S_f(β) and the weighted-skinning large-scale posture deformation matrix B_f(θ) to give the deformed edge vector e, and the geometric model T = {V, E, F} of the three-dimensional mesh is finally reconstructed through Poisson surface reconstruction; Q_f, S_f and B_f are each 3 × 3 matrices.
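For concreteness, a minimal sketch of this parameter layout is given below. The sizes follow the text above (|Bone| = 19 rigid components, posture dimension 19 × 3 = 57, |β| = 42 form parameters); the array names, the placeholder face count and the use of SciPy's rotation utilities are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Illustrative layout of the variables (beta, theta) and learned constants
# Phi = (Psi, Gamma, W); sizes follow the text, names are assumptions.

N_BONES, N_SHAPE, N_FACES = 19, 42, 25000      # |Bone|, |beta|, placeholder |F|

beta = np.zeros(N_SHAPE)                       # form parameters: height, weight, girths, ...
theta = np.zeros((N_BONES, 3))                 # one Rodrigues vector per rigid component

# R(theta_b): Rodrigues vector -> 3x3 rotation matrix, one per rigid component.
R = Rotation.from_rotvec(theta).as_matrix()    # shape (19, 3, 3)

Psi = np.zeros((N_FACES, N_SHAPE + 1, 9))      # psi_{b',f} coefficients plus offset psi_{|beta|,f}
Gamma0 = np.tile(np.eye(3).reshape(9), (N_FACES, 1))   # gamma_{0,f}: vectorized identity
Gamma = np.zeros((N_FACES, N_BONES - 1, 9, 3)) # gamma_{b,f}, b = 1..|Bone|-1
W = np.zeros((N_FACES, N_BONES))               # skinning weights w_{b,f}
```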
The three-dimensional human body parameterized mathematical model can be formally expressed as follows:
$M(\beta,\theta;\Phi):\quad e = B_f(\theta)\,S_f(\beta)\,Q_f(\theta)\,e^{*} \qquad (1)$
form deformation matrix Sf(β) depicts the diversity of the three-dimensional human morphology change to the f-th triangle, Sf(β) can be represented by a linear combination of morphology space parameters β:
$S_f(\beta) = \sum_{b'=0}^{|\beta|-1} \psi_{b',f}\,\beta_{b'} + \psi_{|\beta|,f}$
where |β| is the number of form space parameters, ψ_{b',f} (0 ≤ b' < |β|) is the linear coefficient (a 9 × 1 matrix) of the b'-th form space parameter on the f-th triangle, ψ_{|β|,f} is the offset (a 9 × 1 matrix) of the f-th triangle in the form deformation matrix S, β denotes the form parameters such as height, weight and the three girths, and β_{b'} is the b'-th form parameter. Experiments show that with the number of form space parameters |β| set to 42, 1000 human training models are sufficient to calculate the form constant parameter Ψ.
The posture deformation matrices B_f(θ) and Q_f(θ) both describe the diversity that changes of three-dimensional human posture bring to the triangle f. Using the linear-blend-skinning method of joint deformation, B_f(θ) represents the large-scale deformation driven by the rigid skeletal components, i.e. the sum over the |Bone| (|Bone| = 19) rigid components of the human body of the products of each component's transformation and its skinning weight,
$B_f(\theta) = \sum_{b=0}^{|Bone|-1} w_{b,f}\,R(\theta_b)$
where |Bone| denotes the number of rigid components, w_{b,f} is the element of the set W giving the skinning weight of the b-th rigid component on the triangle f, and R(θ_b) is the rigid transformation matrix of the b-th rigid component, a 3 × 3 unitary orthogonal matrix;
$Q_f(\theta) = \gamma_{0,f} + \sum_{b=1}^{|Bone|-1} \gamma_{b,f}\,\theta_b$
formally represents the local small-scale posture deformation of the body shape, where γ_{0,f} is the deformation matrix (a 9 × 1 matrix) when the rigid components of the body undergo no relative deformation (θ_b, 1 ≤ b < |Bone|, are zero vectors) and is an identity matrix, and γ_{b,f} is the linear coefficient (a 9 × 3 matrix) of the b-th Rodrigues rotation vector θ_b on the f-th triangle.
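The per-triangle action of equation (1) can then be sketched as follows. This is a minimal illustration under the assumed layout above (vectorized 9 × 1 and 9 × 3 coefficients reshaped to 3 × 3 matrices), not the patent's code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def edge_vectors(verts, faces):
    """Per-triangle edge vectors e_{f,i}, i = 0,1,2; shape (|F|, 3, 3)."""
    tri = verts[faces]                                   # (|F|, 3, 3) corner positions
    return np.stack([tri[:, 1] - tri[:, 0],
                     tri[:, 2] - tri[:, 1],
                     tri[:, 0] - tri[:, 2]], axis=1)

def S_f(Psi_f, beta):
    """Form deformation: linear combination of beta plus a per-triangle offset."""
    return (Psi_f[:-1].T @ beta + Psi_f[-1]).reshape(3, 3)

def B_f(W_f, R):
    """Large-scale pose deformation: skinning-weighted sum of joint rotations."""
    return np.einsum('b,bij->ij', W_f, R)

def Q_f(Gamma0_f, Gamma_f, theta):
    """Small-scale non-rigid pose deformation gamma_0 + sum_b gamma_b * theta_b."""
    return (Gamma0_f + np.einsum('bij,bj->i', Gamma_f, theta[1:])).reshape(3, 3)

def deform_triangle_edges(e_star_f, Psi_f, Gamma0_f, Gamma_f, W_f, beta, theta):
    """Apply equation (1) to the three edge vectors e* of triangle f."""
    R = Rotation.from_rotvec(theta).as_matrix()          # R(theta_b) for every joint
    M = B_f(W_f, R) @ S_f(Psi_f, beta) @ Q_f(Gamma0_f, Gamma_f, theta)
    return e_star_f @ M.T                                # deformed edge vectors e (rows)
```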
Step 2, training the three-dimensional human body parameterized mathematical model through a pre-established three-dimensional human body standard model and constraint conditions of the facial feature points to obtain a trained three-dimensional human body parameterized mathematical model; constraint conditions of the facial feature points are added in the training process, so that the parameterized mathematical model is ensured to have high-fidelity facial features;
for pre-established standard triangular mesh T*Deforming to register with each sample k of the naked three-dimensional human body database, namely deforming vertex VkAnd the vertex P of the sample kkPrecisely matched together; to ensure higher fidelity of the face, at T*Selecting representative characteristic points such as eyes, a nose, a mouth and the like in the face area, and deforming the representative characteristic points to reach specified positions in the point cloud model P; thereby training the three-dimensional human parametric mathematical model.
Among the constant parameters, the skinning weight of each rigid component on each triangle in W is specified manually in advance. In the actual data training, the other two constant parameters Ψ and Γ of the three-dimensional human body parametric model are calculated.
The process of solving the constant parameters is to find the optimal constants Ψ, Γ and the optimal triangular mesh model of each human body in the training set such that the following objective function is minimized:
$\min_{\Psi,\Gamma,\{T_k,\theta_k,\beta_k\}} \sum_{k}\Big(\sum_{f}\sum_{i=0}^{2}\big\|e_{k,f,i}-B_f(\theta_k)\,S_f(\beta_k)\,Q_f(\theta_k)\,e^{*}_{f,i}\big\|^{2}+\lambda_{1}\sum_{(f1,f2)\in Adj}\big\|S_{f1}(\beta_k)-S_{f2}(\beta_k)\big\|^{2}+\lambda_{2}\sum_{f}\big\|Q_f(\theta_k)-I\big\|^{2}+\lambda_{3}\sum_{v\in V_k}\mathrm{dist}^{2}(v,P_k)+\lambda_{4}\sum_{v\in V_k^{Face}}\big\|v-p_v\big\|^{2}\Big) \qquad (2)$
where T_k denotes the triangular mesh model of the k-th human body in the training set; λ_1, λ_2, λ_3, λ_4 are weight parameters specified in advance; θ_k and β_k denote the posture and morphological parameters of the k-th human body in the training set; e*_{f,i} (i = 0, 1, 2) denotes the i-th edge of the f-th triangle of the standard triangular mesh T*, and e_{k,f,i} denotes the i-th edge of the f-th triangle of the triangular mesh T_k of the k-th human body in the training set; Adj = {(f1, f2) | f1 and f2 are adjacent} denotes the set of all pairs of adjacent triangles in T*, with f1, f2 two adjacent triangles and S_{f1}(·), S_{f2}(·) their form deformation matrices; I is the 3 × 3 identity matrix; V_k denotes the set of all vertices (with their position parameters) of the triangular mesh of the k-th human body in the training set, and P_k the set of all points of its point cloud model; V_k^{Face} ⊂ V_k denotes the set of facial feature points of the triangular mesh of the k-th human body in the training set, and P_k^{Face} ⊂ P_k the set of facial feature points of its point cloud model, marked manually in advance, with p_v ∈ P_k^{Face} the target point corresponding to a vertex v ∈ V_k^{Face}.
The first term of equation (2) ensures that the optimal triangular mesh T_k of the k-th human body in the training set is the model obtained by deforming T* according to θ_k and β_k; the second term ensures that adjacent triangles have similar form deformation matrices; the third term ensures that the posture deformation matrices remain as rigid as possible; the fourth term ensures that the vertices V_k of the optimal triangular mesh of each training human body match the vertices P_k of sample k exactly; and the fifth term ensures that the feature points of the deformed T* reach the designated positions. Solving this function involves the calculation of a typical constrained second-order partial differential equation in the multidimensional variables Ψ and Γ. The objective function is optimized with a trust-region method (the trust-region dogleg method); alternating iterative computation of each type of variable is completed through gradient descent until the computation of the objective function converges, and the constant parameters Ψ and Γ are solved, giving the trained three-dimensional human body parameterized mathematical model.
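The alternating scheme can be illustrated with a deliberately simplified toy problem: the per-sample variables are refit with a trust-region (dogleg-type) least-squares solver while the constants are held fixed, and the constants are then refit by linear least squares. The residuals below are toy stand-ins for the five terms of objective (2); all names, sizes and the synthetic data are assumptions, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
target_edges = rng.normal(size=30)            # stand-in for the edge data of one sample
constants = rng.normal(size=(30, 99))         # stand-in for the learned constants

def sample_residuals(x, constants):
    """Toy residual vector playing the role of the terms of objective (2)."""
    theta, beta = x[:57], x[57:]
    data_term = constants @ np.concatenate([theta, beta]) - target_edges
    reg_term = 0.1 * beta                     # e.g. smoothness / rigidity regularizers
    return np.concatenate([data_term, reg_term])

def refit_constants(xs):
    """Toy constant refit: linear least squares with (theta_k, beta_k) held fixed."""
    A = np.stack(xs)                                              # (K, 99)
    B = np.tile(target_edges, (len(xs), 1))                       # (K, 30)
    return np.linalg.lstsq(A, B, rcond=None)[0].T                 # (30, 99)

xs = [np.zeros(99) for _ in range(3)]         # (theta_k, beta_k) for each training sample
for _ in range(5):                            # alternate per-sample fits and constant refit
    xs = [least_squares(sample_residuals, x, args=(constants,), method='dogbox').x
          for x in xs]
    constants = refit_constants(xs)
```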
step 3, user instance processing: aiming at a user object to be processed, three-dimensional human body original point cloud data P of the user object and the position P of the automatically identified facial feature point are processedFaceThrough the trained three-dimensional human body parameterized mathematical model, the form space parameters β and the posture space parameters theta related to the user object are solved, and meanwhile, the three-dimensional human body model with the highly realistic facial features of the user object is reconstructed.
In the solving process, skin detection is first performed on the user's raw three-dimensional portrait data and two types of regions are identified: the real skin and head point cloud (i.e. a collection of points) P_Skin, and the attached clothing point cloud P_Cloth. The deformed three-dimensional human body model T is required to match the P_Skin point cloud; for the P_Cloth point cloud, the deformed model T is required to lie inside the clothing and close to it, which is realized with the Geman-McClure function ρ(·). For the facial feature points, the corresponding pixel positions are first identified on the image with a face recognition algorithm, and the corresponding three-dimensional spatial positions are then found through the camera perspective model and serve as the target positions P_Face of the facial feature points.
$\min_{\beta,\theta}\ \sum_{f}\sum_{i=0}^{2}\big\|e_{f,i}-B_f(\theta)\,S_f(\beta)\,Q_f(\theta)\,e^{*}_{f,i}\big\|^{2}+\lambda_{1}\sum_{v\in V}\mathrm{dist}^{2}(v,P_{Skin})+\lambda_{2}\sum_{v\in V}\rho\big(\mathrm{dist}(v,P_{Cloth})\big)+\lambda_{3}\sum_{v\in V^{Face}}\big\|v-p_v\big\|^{2} \qquad (3)$
Like equation (2), equation (3) is solved by the dogleg method, and the form variable β and posture variable θ of the user object and the deformed three-dimensional human body model T with highly realistic facial geometric features are calculated.
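A self-contained sketch of the robust fitting idea behind equation (3) is given below: skin points are matched through plain nearest-point distances, clothing points through the saturating Geman-McClure penalty ρ, and the problem is solved with a trust-region (dogleg-type) least-squares method. The toy "model" is just a translated point set; in the patent the unknowns are the full parameters β and θ.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def geman_mcclure(d, sigma=0.05):
    """rho(d) = d^2 / (d^2 + sigma^2): saturating robust penalty."""
    return d**2 / (d**2 + sigma**2)

rng = np.random.default_rng(1)
model_pts = rng.uniform(size=(200, 3))                   # toy "body model" vertices
P_skin = model_pts[:100] + np.array([0.05, 0.02, 0.0])   # scanned skin/head points
P_cloth = model_pts[100:] + np.array([0.05, 0.02, 0.1])  # clothing points, offset outward
skin_tree, cloth_tree = cKDTree(P_skin), cKDTree(P_cloth)

def residuals(t):
    moved = model_pts + t                                # toy deformation: translation t
    d_skin, _ = skin_tree.query(moved[:100])             # match skin points exactly
    d_cloth, _ = cloth_tree.query(moved[100:])           # stay near (inside) the clothing
    return np.concatenate([d_skin, np.sqrt(geman_mcclure(d_cloth))])

fit = least_squares(residuals, x0=np.zeros(3), method='dogbox')
print(fit.x)          # roughly recovers the skin offset; the cloth term saturates
```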
Step 4, a standard texture template is pre-established. Through the reconstructed three-dimensional human body model with high-fidelity facial geometric features, the acquired facial texture picture of the user is projected onto the three-dimensional human body model with the camera perspective model so as to replace the facial region of the standard texture template, and seamless stitching between the two is realized through a Poisson fusion algorithm, giving the three-dimensional human body parameterized modeling result with high-fidelity facial texture features.
The standard template with texture mapping is established with software such as Maya or 3ds Max: a standard texture is built that provides a basic texture picture for a user instance and supplies uniform texture mapping information.
the facial texture picture of the user is shot by a camera, the upper human body model is a geometric model (only the position of a point), and the color of each point can be known by adding the texture; the three-dimensional human body model with the high-fidelity geometric characteristics is used for projecting the collected facial texture picture of the user (firstly three-dimensionally, and then finding the color information of each three-dimensional point on the two-dimensional picture); the final modeling result is a three-dimensional human body parameterization modeling result with texture (the geometry and the texture of the face are vivid);
in the field of three-dimensional human body parametric modeling, the invention firstly provides a standard template with a high-fidelity texture mapping, hereinafter referred to as standard texture; the standard texture provides a basic texture picture for a user instance and provides uniform texture mapping information; the standard texture has no dead angle, and can help solve the problem of texture data defect of the user instance.
Three-dimensional human body parametric modeling yields the geometric data of the body; on this basis, the user's texture image is fused with the standard texture through a texture fusion technique to achieve the desired texture mapping.
For a user instance, the three-dimensional human body parametric modeling method establishes the desired three-dimensional human body geometric model, but the texture of this model is still the standard texture carried by the standard template and must be replaced with the user's texture. Through texture mapping, the user's texture picture is matched onto the standard texture picture; a seamless image fusion algorithm then reduces the hue difference between the user's texture picture and the standard texture picture and realizes seamless stitching between the two, giving the three-dimensional human body parameterized modeling result with high-fidelity facial features.
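The seamless fusion step can be sketched with OpenCV's seamlessClone, one readily available implementation of Poisson image blending; the synthetic images and the face-region centre below are illustrative stand-ins for the standard texture and the user's projected face patch.

```python
import numpy as np
import cv2

standard_tex = np.full((512, 1024, 3), 180, dtype=np.uint8)           # stand-in standard texture
user_face = np.full((200, 160, 3), (120, 150, 200), dtype=np.uint8)   # stand-in face patch

mask = np.full(user_face.shape[:2], 255, dtype=np.uint8)              # clone the whole patch
center = (512, 256)                                                   # face-region centre (x, y)

# Poisson blending matches gradients across the seam, so the hue difference
# between the user patch and the surrounding standard texture disappears.
fused = cv2.seamlessClone(user_face, standard_tex, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("fused_texture.png", fused)
```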
The invention provides a standard texture map on the basis of an enhanced three-dimensional human body parameterized mathematical model and, combined with a seamless image fusion technique, realizes a three-dimensional human body model result with high-fidelity facial features. The method comprises: 1) designing the three-dimensional human body parameterized mathematical model, which is simplified into a quasi-linearized approximate solving equation by linearization methods such as principal component analysis; 2) training the three-dimensional human body parameterized mathematical model with standard template data, during which the constant parameters of the quasi-linear approximate solving equation are calculated; 3) for the user object to be processed, solving the variables related to the user object in the approximate solving equation from the raw three-dimensional human body data and reconstructing the three-dimensional human body model carrying the geometric shape data of the user object; 4) creating the standard template with a texture map: the standard texture provides a basic texture picture for the user instance and supplies general texture mapping information; the user's texture picture is matched onto the standard texture picture through texture mapping, and the seamless image fusion algorithm then reduces the hue difference between the two and realizes seamless stitching, giving the final three-dimensional human body parameterized modeling result with high-fidelity facial features.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (6)

1. A three-dimensional human body parametric modeling method with high-fidelity facial features is characterized in that,
acquiring three-dimensional human body original point cloud data of a user object and positions of facial feature points;
solving morphological space parameters and posture space parameters related to the user object through the trained three-dimensional human body parameterized mathematical model, and reconstructing a three-dimensional human body model of the user object with high-fidelity facial geometric characteristics;
the method comprises the steps of establishing a standard texture template in advance, projecting an acquired user facial texture picture onto a three-dimensional human body model through the reconstructed three-dimensional human body model with the high-fidelity facial geometric characteristics so as to replace a facial region of the standard texture template, and realizing seamless splicing between the two through a Poisson fusion algorithm to obtain a three-dimensional human body parameterized modeling result with the high-fidelity facial texture characteristics.
2. The method of claim 1, wherein the three-dimensional human parametric mathematical model is:
$M(\beta,\theta;\Phi):\quad e = B_f(\theta)\,S_f(\beta)\,Q_f(\theta)\,e^{*}$
where S_f(β) is the morphological deformation matrix, B_f(θ) is the rigid posture deformation matrix, Q_f(θ) is the non-rigid posture deformation matrix, θ is the posture space parameter, β is the form space parameter, the constant Φ denotes the form and posture parameters learned in advance through data training, e* is a triangle edge vector of the triangular mesh in the pre-established three-dimensional human body standard model, and e is the deformed edge vector.
3. The method of claim 2, wherein
$S_f(\beta) = \sum_{b'=0}^{|\beta|-1} \psi_{b',f}\,\beta_{b'} + \psi_{|\beta|,f}$
where |β| is the number of morphological feature parameters, ψ_{b',f} (0 ≤ b' < |β|) is the linear coefficient of the b'-th morphological feature parameter on the f-th triangle, ψ_{|β|,f} is the offset of the f-th triangle in the form deformation matrix S, and β_{b'} is the b'-th morphological feature parameter;
$B_f(\theta) = \sum_{b=0}^{|Bone|-1} w_{b,f}\,R(\theta_b)$
where |Bone| denotes the number of rigid components, w_{b,f} is the skinning weight of the b-th rigid component on the f-th triangle, and R(θ_b) is the rigid transformation matrix of the b-th rigid component;
$Q_f(\theta) = \gamma_{0,f} + \sum_{b=1}^{|Bone|-1} \gamma_{b,f}\,\theta_b$
where γ_{0,f} is the deformation matrix when the rigid components of the human body undergo no relative deformation and is an identity matrix, and γ_{b,f} is the linear coefficient of the b-th Rodrigues rotation vector θ_b on the f-th triangle.
4. The method of claim 1, wherein training the three-dimensional human body parameterized mathematical model comprises: deforming the pre-established standard triangular mesh T* to register it with each sample k of the naked three-dimensional human body database, so that the deformed vertices match the vertices of sample k exactly; and selecting feature points in the face area of T*, which reach designated positions in the point cloud model after deformation, whereby the three-dimensional human body parameterized mathematical model is trained.
5. The method of claim 4, wherein, among the constant parameters of the three-dimensional human body parameterized mathematical model, the skinning weight of each rigid component on each triangle is pre-specified, and the form constant parameter Ψ and the posture constant parameter Γ of the three-dimensional human body parameterized mathematical model are calculated;
the process of solving the constant parameters is to find the optimal constants Ψ, Γ and the optimal triangular mesh model of each human body in the training set such that the following objective function is minimized:
$\min_{\Psi,\Gamma,\{T_k,\theta_k,\beta_k\}} \sum_{k}\Big(\sum_{f}\sum_{i=0}^{2}\big\|e_{k,f,i}-B_f(\theta_k)\,S_f(\beta_k)\,Q_f(\theta_k)\,e^{*}_{f,i}\big\|^{2}+\lambda_{1}\sum_{(f1,f2)\in Adj}\big\|S_{f1}(\beta_k)-S_{f2}(\beta_k)\big\|^{2}+\lambda_{2}\sum_{f}\big\|Q_f(\theta_k)-I\big\|^{2}+\lambda_{3}\sum_{v\in V_k}\mathrm{dist}^{2}(v,P_k)+\lambda_{4}\sum_{v\in V_k^{Face}}\big\|v-p_v\big\|^{2}\Big)$
where T_k denotes the triangular mesh model of the k-th human body in the training set; λ_1, λ_2, λ_3, λ_4 are weight parameters; θ_k and β_k denote the posture and morphological parameters of the k-th human body in the training set; e*_{f,i} (i = 0, 1, 2) denotes the i-th edge of the f-th triangle of the standard triangular mesh T*, and e_{k,f,i} denotes the i-th edge of the f-th triangle of the triangular mesh T_k of the k-th human body in the training set; Adj = {(f1, f2) | f1 and f2 are adjacent} denotes the set of all pairs of adjacent triangles in T*, with f1, f2 two adjacent triangles and S_{f1}(·), S_{f2}(·) their form deformation matrices; I is the 3 × 3 identity matrix; V_k denotes the set of all vertices of the triangular mesh of the k-th human body in the training set, and P_k the set of all points of its point cloud model; V_k^{Face} ⊂ V_k denotes the set of facial feature points of the triangular mesh of the k-th human body in the training set, P_k^{Face} ⊂ P_k the set of facial feature points of its point cloud model, and p_v ∈ P_k^{Face} the target point corresponding to a vertex v ∈ V_k^{Face};
the objective function is optimized with a trust-region method; alternating iterative computation of each type of variable is completed through gradient descent until the computation of the objective function converges, the constant parameters Ψ and Γ are solved, and the trained three-dimensional human body parameterized mathematical model is obtained.
6. The method of claim 5, wherein the morphological space parameters and posture space parameters related to the user object are solved and a three-dimensional human body model of the user object with highly realistic facial features is simultaneously reconstructed by:
skin detection is carried out on the user's raw three-dimensional portrait data and two types of regions are identified: the real skin and head point cloud P_Skin and the attached clothing point cloud P_Cloth; for the facial feature points, the corresponding pixel positions are first identified on the image with a face recognition algorithm, and the corresponding three-dimensional spatial positions are then found through a camera perspective model and serve as the target positions P_Face of the facial feature points;
$\min_{\beta,\theta}\ \sum_{f}\sum_{i=0}^{2}\big\|e_{f,i}-B_f(\theta)\,S_f(\beta)\,Q_f(\theta)\,e^{*}_{f,i}\big\|^{2}+\lambda_{1}\sum_{v\in V}\mathrm{dist}^{2}(v,P_{Skin})+\lambda_{2}\sum_{v\in V}\rho\big(\mathrm{dist}(v,P_{Cloth})\big)+\lambda_{3}\sum_{v\in V^{Face}}\big\|v-p_v\big\|^{2}$
where ρ(·) is the Geman-McClure function; e_{f,i} denotes the i-th edge of the f-th triangle of the triangular mesh of the user object; V denotes the set of all vertices of the triangular mesh of the user object; V^{Face} denotes the set of facial feature points of the triangular mesh of the user object, and P^{Face} the set of their target positions, with p_v ∈ P^{Face} the target position corresponding to a vertex v ∈ V^{Face}; the dogleg method is adopted to complete the solution, and the form variable β and posture variable θ of the user object and the deformed three-dimensional human body model T with high-fidelity facial geometric features are calculated.
CN201911410126.5A 2019-12-31 2019-12-31 Three-dimensional human body parameterized modeling method with high-fidelity facial features Active CN111127641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911410126.5A CN111127641B (en) 2019-12-31 2019-12-31 Three-dimensional human body parameterized modeling method with high-fidelity facial features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911410126.5A CN111127641B (en) 2019-12-31 2019-12-31 Three-dimensional human body parameterized modeling method with high-fidelity facial features

Publications (2)

Publication Number Publication Date
CN111127641A true CN111127641A (en) 2020-05-08
CN111127641B CN111127641B (en) 2024-02-27

Family

ID=70506281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911410126.5A Active CN111127641B (en) 2019-12-31 2019-12-31 Three-dimensional human body parameterized modeling method with high-fidelity facial features

Country Status (1)

Country Link
CN (1) CN111127641B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140168204A1 (en) * 2012-12-13 2014-06-19 Microsoft Corporation Model based video projection
CN110084884A (en) * 2019-04-28 2019-08-02 叠境数字科技(上海)有限公司 A kind of manikin facial area method for reconstructing

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022120843A1 (en) * 2020-12-11 2022-06-16 中国科学院深圳先进技术研究院 Three-dimensional human body reconstruction method and apparatus, and computer device and storage medium
CN112233253A (en) * 2020-12-14 2021-01-15 成都完美时空网络技术有限公司 Virtual sphere deformation control method and device, electronic equipment and storage medium
CN112233253B (en) * 2020-12-14 2021-03-16 成都完美时空网络技术有限公司 Virtual sphere deformation control method and device, electronic equipment and storage medium
CN113538644A (en) * 2021-07-19 2021-10-22 北京百度网讯科技有限公司 Method and device for generating character dynamic video, electronic equipment and storage medium
CN113538644B (en) * 2021-07-19 2023-08-29 北京百度网讯科技有限公司 Character dynamic video generation method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111127641B (en) 2024-02-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant