CN112200905B - Three-dimensional face completion method - Google Patents

Three-dimensional face completion method

Info

Publication number
CN112200905B
Authority
CN
China
Prior art keywords
face
full
model
head model
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011102085.6A
Other languages
Chinese (zh)
Other versions
CN112200905A (en)
Inventor
周翔
李爽
姜军委
杨涛
彭磊
李欢欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gedian Technology Shenzhen Co ltd
Original Assignee
Gedian Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gedian Technology Shenzhen Co ltd filed Critical Gedian Technology Shenzhen Co ltd
Priority to CN202011102085.6A
Publication of CN112200905A
Application granted
Publication of CN112200905B
Active legal status
Anticipated expiration legal status


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06T3/147
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G06T7/41 - Analysis of texture based on statistical description of texture
    • G06T7/44 - Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a three-dimensional face completion method comprising the following steps. S1: input a face model and a full-head model. S2: extract the 3D feature points of the full-head model. S3: extract the 3D feature points of the face model. S4: coarsely align the face model and the full-head model. S5: extract the face region of the input three-dimensional face. S6: deformation: taking the face region of the three-dimensional face extracted in step S5 as the target, deform the three-dimensional mesh of the full-head model with an iterative deformation algorithm until the full-head model fits the face model and the convergence condition is reached. S7: fuse the face model and the deformed full-head model. S8: output: output the full-head model of the completed three-dimensional face. The method overcomes the defects of the prior art; it is simple, feasible, fully automatic, efficient and practical. It suits face completion across large differences in face shape and head shape, is robust, and produces a final result with a strong sense of realism.

Description

Three-dimensional face completion method
Technical Field
The invention belongs to the fields of computer vision and computer graphics, and particularly relates to a three-dimensional face completion method.
Background
With the rapid development of machine vision and deep learning, three-dimensional face data can be obtained more and more conveniently. How to complete a three-dimensional face, turning it into a complete full-head model, or how to give a three-dimensional face a different 3D head shape and hairstyle, has remained an open problem. We propose a new three-dimensional face completion method: a complete full-head model is deformed and then fused with the three-dimensional face data, yielding a complete full-head model and thereby realizing three-dimensional face completion.
Current face completion algorithms mainly fall into the following categories:
(1) Mesh editing algorithms. Common mesh editing techniques employ free-form surface deformation and differential-based mesh deformation. Free-form surface deformation algorithms typically deform a control mesh, a control curve, or control vertices. This approach, while simple and efficient, tends to lose the local detail of the mesh. Differential-based mesh deformation can preserve the local geometric details of a triangular mesh. Both require complicated manual operations by a three-dimensional modeler and are therefore manual three-dimensional face completion methods.
(2) Face completion algorithms based on the three-dimensional mesh structure. These methods complete the mesh at the mesh level, according to its structure: a hole boundary is detected first, and the hole is then filled by methods such as least-squares meshes, Radial Basis Function (RBF) implicit surfaces, or Newton interpolation. Such methods are generally suitable for filling holes in a mesh but cannot handle a three-dimensional face that lacks an overly large region, for example completing it into a full-head model.
(3) Iterative non-rigid deformation algorithms. Some iterative non-rigid deformation algorithms can be applied to the deformation of a three-dimensional head model to achieve three-dimensional face completion, but these algorithms require manual alignment beforehand. When models with large head and face differences are deformed and fused together to complete a face, the result exhibits quite a few strange deformations.
(4) Three-dimensional face completion algorithms based on a statistical model. These build a three-dimensional full-head model database, such as FaceWarehouse or FaceScape, perform statistical learning on the data, and reconstruct a full-head model with the learned model (a three-dimensional deformable model, 3DMM), thereby completing the missing regions (such as the head) of the three-dimensional face model. Face completion through a statistical model loses face and head details and is difficult to apply to real scenes.
Among the above methods, mesh editing algorithms require complicated manual operations, and the quality of the three-dimensional face completion depends on the skill of the operator. Completion algorithms based on the three-dimensional mesh structure cannot work when the missing region is too large, for example when only a face model is given and it must be completed into a full-head model. Completion algorithms based on a statistical model lack details in the face and head regions and are computationally expensive. For iterative non-rigid deformation algorithms, a head model carrying complex three-dimensional hair data requires many extra iterations and also produces many strange deformations. None of these algorithms can guarantee the realism of the completed result, and they are difficult to apply in real scenes.
Disclosure of Invention
The invention aims to provide a three-dimensional face completion method that solves the problem of automatically completing a three-dimensional face into a full-head model: the full-head model is automatically deformed to fit the three-dimensional face, finally yielding the full-head model of the completed three-dimensional face. The method overcomes the defects of the prior art, performs the head-completion task fully automatically, shows good deformation behavior on complex 3D data, and matches face shape and head shape naturally.
The invention is implemented as follows:
a three-dimensional face completion method comprises the following steps: s1: input a face model and a full-head model; s2: extract the 3D feature points of the full-head model; s3: extract the 3D feature points of the face model; s4: coarsely align the face model and the full-head model; s5: extract the face region of the input three-dimensional face; s6: deformation: taking the face region of the three-dimensional face extracted in step S5 as the target, deform the three-dimensional mesh of the full-head model with an iterative deformation algorithm until the full-head model fits the face model and the convergence condition is reached; s7: fuse the face model and the deformed full-head model; s8: output the full-head model of the completed three-dimensional face.
The three-dimensional face completion method specifically comprises the following steps:
s1: inputting a face model and a full-head model: respectively input the 3D point cloud of the face model to be completed and the three-dimensional mesh of the full-head model, both of which carry texture information;
s2: extracting the 3D feature points of the full-head model: select the face region and the neck region of the full-head model, and extract the 3D face feature points of the full-head model;
s3: extracting the 3D feature points of the face model: extract them with a method that synchronously performs face three-dimensional point cloud feature point localization and face segmentation, namely the method disclosed in patent CN201910915696.3;
s4: coarse alignment of the face model and the full-head model: according to the 3D feature points of the face model extracted in step S3 and the 3D face feature points of the full-head model extracted in step S2, perform coarse alignment with the ICP (Iterative Closest Point) algorithm;
s5: extracting the face region: set a distance threshold with respect to the face region of the aligned full-head model's three-dimensional mesh, and extract the face region of the face model;
s6: deformation: taking the face region of the three-dimensional face extracted in step S5 as the target, deform the three-dimensional mesh of the full-head model with an iterative deformation algorithm until the full-head model fits the face model and the convergence condition is reached;
s7: fusing the face model and the full-head model: fuse the face model and the deformed full-head model together, and fuse their textures, obtaining the full-head model after face replacement;
s8: outputting: output the full-head model of the completed three-dimensional face.
Further, step S2 includes the steps of:
s21: selecting the face region of the full-head model's three-dimensional mesh, and obtaining the indexes of the face region among the vertices of the full-head model's three-dimensional mesh;
s22: selecting the neck region of the full-head model's three-dimensional mesh, and obtaining the indexes of the neck region among the vertices of the full-head model's three-dimensional mesh;
s23: obtaining the 3D face feature points of the full-head model with the method that synchronously performs face three-dimensional point cloud feature point localization and face segmentation.
Further, step S4 includes the steps of:
s41: pairing the 3D feature points of the face model extracted in step S3 with the corresponding 3D face feature points of the full-head model extracted in step S2;
s42: computing, with the ICP algorithm, the rigid transformation matrix of the point clouds from the corresponding feature points, and coarsely aligning the two.
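For illustration only: with the landmark correspondences of step S41 fixed, one ICP step reduces to the closed-form Kabsch solution sketched below in Python (numpy assumed; all function and variable names are ours, not the patent's).

```python
import numpy as np

def rigid_align(src_pts: np.ndarray, dst_pts: np.ndarray):
    """Closed-form rigid transform (R, t) mapping src_pts onto dst_pts.

    src_pts, dst_pts: (k, 3) arrays of corresponding 3D feature points,
    e.g. landmarks of the full-head model and of the face scan.
    """
    src_c = src_pts.mean(axis=0)                 # centroids
    dst_c = dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Rough alignment of the full-head model onto the face scan:
# head_aligned = (R @ head_vertices.T).T + t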
Further, step S5 includes the steps of:
s51: building a KD tree from the vertex coordinates of the face region of the aligned full-head model's three-dimensional mesh;
s52: for each point of the 3D face point cloud of the face model, searching the nearest point in the KD tree, and judging the point of the 3D face point cloud to belong to the face region when the distance is smaller than 1.5 cm.
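A minimal sketch of steps S51-S52, assuming scipy and metre units (so the 1.5 cm threshold becomes 0.015; the patent does not state the model units), with names of our choosing:

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_face_region(scan_pts: np.ndarray,
                        head_face_pts: np.ndarray,
                        threshold: float = 0.015) -> np.ndarray:
    """Keep the scan points within `threshold` of the aligned head's face region.

    scan_pts:      (m, 3) 3D face-scan point cloud (after coarse alignment).
    head_face_pts: (k, 3) vertices of the full-head mesh's face region.
    Returns the indices of scan points judged to belong to the face region.
    """
    tree = cKDTree(head_face_pts)            # S51: KD tree on the face region
    dists, _ = tree.query(scan_pts, k=1)     # S52: nearest neighbor per point
    return np.nonzero(dists < threshold)[0]
```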
Further, step S6 includes the steps of:
s61: setting the full-head model as the source of the mesh deformation and the face model as the target, and representing each iterative deformation as an affine transformation matrix per vertex of the full-head model;
s62: setting the loss terms of the iterative deformation of the full-head model: a distance loss term, a stiffness loss term, and a fixed term;
s63: iteratively solving the head deformation with different stiffness parameters until the full-head model fits the face model and the convergence condition is reached.
Further, step S61 specifically includes the following:
Set the face region of the input scanned 3D face point cloud as the mesh deformation target, with m points:
$U = [u_1 \dots u_m]^T$
where $u_m$ is the m-th point of the face region of the face point cloud, $U$ is the mesh deformation target, and $T$ denotes matrix transposition.
Set the mesh vertices of the full-head model as the source of the mesh deformation:
$S = [s_1 \dots s_n]^T$
where $s_n$ is the coordinate of the n-th mesh vertex of the full-head model and $S$ is the source of the mesh deformation.
Let the affine transformation matrix of the n-th vertex be $X_n$, a 3 × 4 matrix; the affine transformation matrix of all vertices is then:
$X = [X_1 \dots X_n]^T$
where $X_n$ is the affine transformation matrix of the n-th full-head mesh vertex and $X$ is the affine transformation matrix of all vertices.
Further, step S62 specifically includes the following:
Three loss terms are set: a distance loss term, a stiffness loss term, and a fixed term.
(1) Distance loss term: the first loss function is set to the distance between the point cloud of the scanned face region and the face region of the full-head model:
$E_d(X) = \sum_{i=1}^{n} w_i \,\mathrm{dist}^2(U, X_i s_i)$
where $E_d(X)$ is the distance loss term, $w_i$ a weight term, $X_i$ the affine transformation matrix of the i-th vertex of the full-head model, $\mathrm{dist}$ the distance, $U$ the mesh deformation target, and $s_i$ the coordinate of the i-th mesh vertex of the full-head model.
Here $w_i$ is a weight of value 0 or 1: on the face region extracted from the face model, the nearest-neighbor search of a KD tree finds the point closest to each mesh vertex of the full-head model; matched vertices are set to 1 and all other vertices to 0. The closest points of the face region of the face model to the mesh vertices of the full-head model are collected as new points:
$S^U = [s^U_1 \dots s^U_n]^T$
where $s^U_n$ is the closest point of the face region of the face model to the n-th mesh vertex of the full-head model, and $S^U$ collects the closest points for all mesh vertices.
The distance loss term can then be expressed as:
$E_d(X) = \sum_{i=1}^{n} w_i \,\| X_i s_i - s^U_i \|^2$
(2) Stiffness loss term: using the node-edge incidence matrix $M$ from graph theory to represent the topology of the mesh, and defining a weight matrix $G = \mathrm{diag}(1, 1, 1, \gamma)$, where $\gamma$ is a parameter balancing rotation against translation, the stiffness term can be expressed as:
$E_s(X) = \sum_{\{i,j\} \in \mathrm{edges}} \| (X_i - X_j)\, G \|_F^2 = \| (M \otimes G)\, X \|_F^2$
where $X_i$ and $X_j$ are the transformations of two adjacent mesh vertices and $\|\cdot\|_F$ is the Frobenius norm.
(3) Fixed term: let $S_R$ and $R$ both denote the point cloud of the neck region of the full-head model; the fixed term can be expressed as:
$E_f(X) = \sum_{s_i \in S_R} w_i \,\| X_i s_i - R_i \|^2$
where $R_i$ is the i-th point of the neck region, i.e. the transformation target of each neck vertex is the vertex itself.
(4) Total loss term: for the deformation process of the full-head model, a large stiffness weight is set at the beginning, which favors the overall deformation of the head; the stiffness weight is then reduced, which favors the deformation of the details of the full-head model. Setting the weight of the stiffness loss term to $\alpha$ and combining the three loss terms, the total loss term is:
$E(X) = E_d(X) + \alpha\, E_s(X) + E_f(X)$
Further, step S63 specifically includes the following:
The computation is iterated; each iteration solves the affine transformation matrix $X_i$ of every vertex. From the total loss term of step S62, the equation to be solved can be derived:
$E(X) = \left\| \begin{bmatrix} \alpha (M \otimes G) \\ W S \\ S_R \end{bmatrix} X - \begin{bmatrix} 0 \\ W S^U \\ R \end{bmatrix} \right\|_F^2 = \| A X - B \|_F^2$
where $S_R$ and $R$ both denote the point cloud of the neck region of the full-head model, $S^U$ the closest points of the face region to the mesh vertices of the full-head model, $W = \mathrm{diag}(w_1, \dots, w_n)$ the weights of the distance term, $U$ the mesh deformation target, $X$ the affine transformation matrix of all vertices, $G$ the weight matrix, $M$ the node-edge incidence matrix, $\alpha$ the stiffness weight, and $\|\cdot\|_F$ the Frobenius norm.
A higher stiffness parameter is set at the beginning of the iteration and then slowly reduced; each stiffness parameter is iterated to convergence, the convergence condition being that the difference between two successive transformation matrices $X$ is smaller than an empirical value $\varepsilon$.
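To make the per-iteration solve concrete, here is a simplified dense sketch of minimising $\| A X - B \|_F^2$ as derived above (vertices in homogeneous coordinates, 0/1 weights $w_i$, $G = \mathrm{diag}(1,1,1,\gamma)$). A practical implementation would use sparse matrices, and all names are illustrative rather than the patent's:

```python
import numpy as np

def solve_deformation_step(verts, edges, targets, weights, neck_idx,
                           alpha, gamma=1.0):
    """One least-squares step of the stiffness-regularised deformation.

    verts:    (n, 3) current full-head vertices (the source S).
    edges:    (r, 2) vertex index pairs of the mesh topology.
    targets:  (n, 3) closest scan points per vertex (S^U).
    weights:  (n,) 0/1 distance-term weights w_i.
    neck_idx: indices of the neck vertices held fixed.
    Returns X as an (n, 4, 3) stack of per-vertex transposed 3x4 transforms.
    """
    n, r = len(verts), len(edges)
    V = np.hstack([verts, np.ones((n, 1))])      # homogeneous coords, (n, 4)
    G = np.diag([1.0, 1.0, 1.0, gamma])

    A = np.zeros((4 * r + n + len(neck_idx), 4 * n))
    B = np.zeros((A.shape[0], 3))

    for e, (i, j) in enumerate(edges):           # stiffness block: alpha*(M (x) G)
        A[4*e:4*e+4, 4*i:4*i+4] = -alpha * G
        A[4*e:4*e+4, 4*j:4*j+4] = alpha * G

    for i in range(n):                           # distance block: W S -> W S^U
        A[4*r + i, 4*i:4*i+4] = weights[i] * V[i]
        B[4*r + i] = weights[i] * targets[i]

    for k, i in enumerate(neck_idx):             # fixed block: target = vertex itself
        A[4*r + n + k, 4*i:4*i+4] = V[i]
        B[4*r + n + k] = verts[i]

    X, *_ = np.linalg.lstsq(A, B, rcond=None)    # X = (A^T A)^-1 A^T B
    return X.reshape(n, 4, 3)

# New vertex positions: verts_new[i] = V[i] @ X[i]  (a 4-vector times a 4x3 block)
```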
Further, step S7 includes the steps of:
s71: after the full-head model has been deformed, replacing the point cloud of its face with the point cloud of the 3D face scan;
s72: separately computing the histograms of the point cloud colors of the full-head model's face region and of the 3D face scan, specifying the histogram of the point cloud colors of the full-head model's face skin region so that its distribution matches the color histogram of the 3D face scan, and then applying Gaussian filtering to the boundary texture to smooth the color difference at the seam;
s73: regenerating the mesh to obtain the full-head model.
Herein, "3D point cloud" and "three-dimensional point cloud" denote the same thing and are used interchangeably.
The invention has the positive effects that:
(1) The invention mainly uses 3D face-scan point cloud data and a full-head 3D model to complete the 3D face scan into a full-head model. After the full-head model is aligned with the 3D feature points of the target face, a global iterative deformation of the full-head model is performed. In this iterative process, the full-head model first deforms as a whole to fit the face shape of the target face, and then the high-frequency details of the face region are deformed step by step. The iteration converges once the error between the full-head model and the face model is smaller than a threshold. Finally the faces of the two models are merged: the face of the full-head model is replaced with the face of the face model, and texture fusion (histogram specification and Gaussian filtering) is performed to obtain the full-head model of the 3D scan data. Deforming the model before fusing avoids strange deformation artifacts and adapts the whole head model to the face data.
(2) The method is simple, feasible, fully automatic, efficient and practical.
(3) The method suits face completion across large differences in face shape and head shape, is robust, and produces a final result with a strong sense of realism.
Drawings
FIG. 1 is a flow chart of the three-dimensional face completion method of the present invention;
FIG. 2 is a schematic diagram of the preprocessing of the full-head model's three-dimensional mesh in the three-dimensional face completion method of the present invention;
FIG. 3 is a schematic diagram of the preprocessing of the face 3D scan in the three-dimensional face completion method of the present invention;
FIG. 4 is a 2D schematic diagram of the iterative deformation of the three-dimensional face completion method of the present invention;
FIG. 5 is a flowchart of the whole algorithm of the three-dimensional face completion method of the present invention;
FIG. 6 is an effect diagram of the three-dimensional face completion method of the present invention;
FIG. 7 is an upper-body effect diagram of the three-dimensional face completion method of the present invention;
FIG. 8 is a whole-body effect diagram of the three-dimensional face completion method of the present invention.
Detailed Description
The embodiments of the present invention are described in further detail below with reference to the drawings and examples. The following examples illustrate the invention and are not intended to limit its scope.
The letter symbols in the formulas below have the same meaning as the identical symbols in the corresponding formulas of the summary above and are not described again.
Fig. 1 is a flowchart of the three-dimensional face completion method of the present invention. As shown in fig. 1, the method includes the following steps:
s1: mesh initialization: respectively input the face model to be completed (a face-scan 3D point cloud) and the full-head model (the three-dimensional mesh of a full-head template):
The face-scan 3D point cloud data must carry texture, and most face three-dimensional scanners currently on the market can capture both the 3D data and the texture data of a face. The input 3D data needs texture so that the 3D feature points of the face can be extracted from the texture information. The input face model (3D point cloud) to be completed and the full-head model (three-dimensional mesh) are shown in the left diagrams of fig. 3 and fig. 2, respectively.
S21: select the face region of the full-head model's three-dimensional mesh and obtain the indexes of the face region among the mesh vertices:
The purpose of defining the face region is to reject, in a later step, the points of the scanned 3D face that do not belong to the face. The face region of the full-head model is shown in the upper right of fig. 2.
S22: select the neck region of the full-head model's three-dimensional mesh and obtain the indexes of the neck region among the mesh vertices:
The point cloud of the neck region serves as the rigid fixed term in the deformation of the full-head model. The neck is set as a fixed term to limit the deformation of the neck and nearby areas, guaranteeing the robustness and realism of the result. Setting the neck region as the fixed term also reduces the amount of computation and keeps the deformed full-head model connected to the body region. The neck region of the full-head model is shown in the lower right of fig. 2.
S23: obtain the 3D feature points of the full-head mesh (the standard 68 semantic face feature points) with the method of patent CN201910915696.3:
These 3D feature points are used to coarsely align the face-scan 3D point cloud and the three-dimensional mesh of the full-head model with the ICP algorithm.
S3: face-scan feature extraction: extract the 3D feature points of the face:
The 3D feature points of the face are likewise extracted with the method of patent CN201910915696.3 and are used to coarsely align the face-scan 3D point cloud and the three-dimensional mesh of the full-head model with the ICP algorithm.
S4: perform coarse alignment with the ICP algorithm according to the 3D feature points of the face model and the 3D feature points of the full-head model:
As shown in the second diagram of fig. 3, the two three-dimensional models are aligned by ICP over the feature points.
S5: face region extraction:
A distance threshold with respect to the face region of the aligned full-head model's three-dimensional mesh is used to extract the face region of the three-dimensional face scan. After feature-point alignment, the face is selected with the nearest-neighbor search of a KD tree: for each point of the face model's 3D face scan, the nearest point of the head's face region is found, and if the distance is smaller than 1.5 cm the point is judged to belong to the face region. The face region of the 3D face scan is finally obtained, with the result shown in the right diagram of fig. 3.
S6: taking the three-dimensional face region as the target, deform the full-head three-dimensional mesh with the iterative deformation algorithm:
S61: set the full-head model as the source of the mesh deformation and the face model as the target, and express each iterative deformation as an affine transformation matrix per vertex of the full-head model:
Here we set the face region of the input scanned 3D face point cloud as the target of the deformation, as shown in the right diagram of fig. 3. The point cloud of this face region is the deformation target, with m points:
$U = [u_1 \dots u_m]^T$
We set the mesh vertices of the full-head model as the source $S$ of the mesh deformation:
$S = [s_1 \dots s_n]^T$
Deforming the full-head model means iteratively deforming all of its points, which amounts to solving an affine transformation matrix for each vertex of the source. Let the affine transformation matrix of the n-th vertex be $X_n$, a 3 × 4 matrix. The affine transformation matrix of all vertices is:
$X = [X_1 \dots X_n]^T$
s62: setting a loss term of iterative deformation of the full-head model:
(1) Distance loss term: the first loss function is set to the distance between the point cloud of the scanned face region and the face region of the full-head model:
$E_d(X) = \sum_{i=1}^{n} w_i \,\mathrm{dist}^2(U, X_i s_i)$
Here $w_i$ is a weight of value 0 or 1. On the face region extracted from the face model, the nearest-neighbor search of a KD tree finds the point closest to each mesh vertex of the full-head model; matched vertices are set to 1 and all other vertices to 0. The weights are set this way to avoid deforming the mesh of the non-face regions of the full-head mesh. Once the weights $w_i$ of an iteration are determined, the closest points of the face region of the face model to the mesh vertices of the full-head model are collected as new points:
$S^U = [s^U_1 \dots s^U_n]^T$
and the distance loss term can be expressed as:
$E_d(X) = \sum_{i=1}^{n} w_i \,\| X_i s_i - s^U_i \|^2$
(2) Stiffness loss term: the stiffness loss term ensures that adjacent points of the mesh keep their original geometric structure. The node-edge incidence matrix $M$ from graph theory is used to represent the topology of the mesh: if the topology has r edges and n vertices, $M$ is $r \times n$, and when edge $r$ connects vertices $i$ and $j$, $M_{ri} = -1$ and $M_{rj} = 1$. We define a weight matrix $G = \mathrm{diag}(1, 1, 1, \gamma)$, where $\gamma$ is a parameter balancing rotation against translation, and use the Frobenius norm to penalize differing transformations of neighboring vertices. The stiffness term can be expressed as:
$E_s(X) = \| (M \otimes G)\, X \|_F^2$
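For illustration, the incidence matrix described above can be assembled as a sparse matrix as follows (scipy assumed; names ours):

```python
import numpy as np
from scipy.sparse import coo_matrix

def incidence_matrix(edges: np.ndarray, n_vertices: int) -> coo_matrix:
    """Node-edge incidence matrix M (r x n): for edge e = (i, j),
    M[e, i] = -1 and M[e, j] = 1, exactly as in the stiffness term."""
    r = len(edges)
    rows = np.repeat(np.arange(r), 2)      # each edge contributes two entries
    cols = edges.ravel()                   # [i0, j0, i1, j1, ...]
    vals = np.tile([-1.0, 1.0], r)
    return coo_matrix((vals, (rows, cols)), shape=(r, n_vertices))
```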
(3) Fixed term: the fixed term ensures that chosen regions stay unchanged while the full-head model deforms; here the fixed term is set to the neck region. Our approach is to set the affine transformation target of each fixed vertex to itself, so that the loss function of the fixed term takes the unified form below, where $S_R$ and $R$ both denote the point cloud of the neck region of the full-head model:
$E_f(X) = \sum_{s_i \in S_R} w_i \,\| X_i s_i - R_i \|^2$
(4) Total loss term: with the three loss terms described above, the distance serves as the loss function of the affine transformations, the stiffness term built from the topology of the full-head model's three-dimensional mesh controls the overall deformation, and the fixed term, substituted into every deformation iteration, keeps the vertices of the neck region of the full-head mesh unchanged.
Because of the constraint of the stiffness term, the deformed region and the fixed region transition smoothly into each other and no mesh step is produced. For the deformation process of the full-head model, a large stiffness weight is set at the beginning, which favors the overall deformation of the head; the stiffness weight is then reduced, which favors the deformation of the details of the full-head model. Setting the weight of the stiffness loss term to $\alpha$ and combining the three loss terms above, the total loss term is:
$E(X) = E_d(X) + \alpha\, E_s(X) + E_f(X)$
S63: iteratively solve the head deformation with different stiffness parameters until convergence:
The computation is iterated: minimizing $E(X)$ in each iteration yields the affine transformation matrix $X_i$ of every vertex. From the total loss term above, the equation to be solved can be derived:
$E(X) = \left\| \begin{bmatrix} \alpha (M \otimes G) \\ W S \\ S_R \end{bmatrix} X - \begin{bmatrix} 0 \\ W S^U \\ R \end{bmatrix} \right\|_F^2 = \| A X - B \|_F^2$
This is a quadratic function; setting its derivative to zero reduces it to a linear system whose minimum can be found directly and exactly: $X = (A^T A)^{-1} A^T B$. In this way the affine transformations of all vertices of the full-head model are obtained in each iteration.
Based on this iterative deformation algorithm, a higher stiffness parameter is set at the beginning of the iteration and then slowly reduced, each stiffness parameter being iterated to convergence. The convergence condition is that the difference between two successive transformation matrices $X$ is smaller than an empirical value $\varepsilon$.
The overall iterative algorithm is as follows:
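The patent presents the overall iterative algorithm as a figure that is not reproduced in this text. The outer loop below is a reconstruction from the description of S63 only (stiffness schedule from coarse to fine, inner iterations until the change in X falls below ε), reusing solve_deformation_step from the earlier sketch; the schedule values and helper names are ours, not the patent's:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_to_scan(verts, scan_pts, radius=0.015):
    """Per-iteration correspondences: nearest scan point for every head
    vertex; w_i = 1 only when the match is within `radius` (0/1 weights)."""
    dists, idx = cKDTree(scan_pts).query(verts, k=1)
    return (dists < radius).astype(float), scan_pts[idx]

def apply_transforms(verts, X):
    """Move every vertex by its own affine transform: v_i <- X_i s_i."""
    V = np.hstack([verts, np.ones((len(verts), 1))])
    return np.einsum('ij,ijk->ik', V, X)

def deform_full_head(verts, edges, scan_face_pts, neck_idx,
                     alphas=(50.0, 20.0, 5.0, 2.0, 0.8), eps=1e-4):
    """S63 outer loop: for each stiffness alpha (coarse to fine), alternate
    correspondence search and the linear solve until X stops changing."""
    X_prev = None
    for alpha in alphas:
        while True:
            weights, targets = match_to_scan(verts, scan_face_pts)
            X = solve_deformation_step(verts, edges, targets,
                                       weights, neck_idx, alpha)
            verts = apply_transforms(verts, X)
            if X_prev is not None and np.linalg.norm(X - X_prev) < eps:
                break                        # converged for this stiffness
            X_prev = X
    return verts
```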
for a better understanding of the iterative morphing algorithm of the present invention, we represent the entire iterative process as fig. 4. Grid vertices set as fixed items are like the yellow vertices in fig. 4. The points of the fixed term remain unchanged in position during the iteration. Represented by a loss of stiffness term as a line between the blue and black points. By setting parameters of the rigid item, the integral deformation is realized firstly, then the detail deformation is realized, and the point of the fixed item is kept unchanged. The full-head model grid is initially black dots, and the deformation process is blue dots.
S7: fuse the face and the deformed full head together, and fuse their textures:
After the full-head model has been deformed, the point cloud of its face is replaced with the point cloud of the 3D scan and the mesh is regenerated, yielding the full-head and full-body model of the 3D face scan. One more key problem must be solved at this point: the color of the full-head face differs from the color of the scanned 3D face point cloud, and the skin-color difference would make the face of the fused model look jarring. The solution is analogous to 2D histogram specification. We separately compute the histograms of the point cloud colors of the full-head model's face region and of the 3D face scan, then specify the histogram of the point cloud colors of the full-head model's face skin region so that its distribution matches the color histogram of the 3D face scan. Gaussian filtering of the boundary texture then smooths the color difference at the seam. The final result is shown in the last diagram of fig. 5.
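As a sketch of the color step alone, here is per-channel histogram specification of the head model's face-skin vertex colors to the scan's color distribution, assuming 8-bit RGB vertex colors (the boundary Gaussian filtering is omitted; names ours):

```python
import numpy as np

def match_histogram(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Histogram specification: remap `src` values so their distribution
    follows `ref`. src/ref: uint8 arrays of one color channel."""
    src_hist = np.bincount(src.ravel(), minlength=256).astype(float)
    ref_hist = np.bincount(ref.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # For each source level, pick the reference level with the closest CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]

def match_skin_colors(head_skin_rgb: np.ndarray, scan_rgb: np.ndarray) -> np.ndarray:
    """Remap the head model's face-skin vertex colors channel by channel."""
    return np.stack([match_histogram(head_skin_rgb[:, c], scan_rgb[:, c])
                     for c in range(3)], axis=1)
```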
S8: output the full-head model of the completed 3D face:
Figs. 6, 7 and 8 are, respectively, an effect diagram of the three-dimensional face completion method and effect diagrams of the upper-body and whole-body completion it achieves. The left diagram of fig. 6 is the 3D face point cloud, and the middle and right diagrams are two completed full-head models. In fig. 7, the left side is the scanned 3D face point cloud and the right side is the completed upper-body model. In fig. 8, the middle is the scanned 3D face point cloud and the two sides are the completed whole-body models.
The foregoing is a further detailed description of the invention in connection with specific preferred embodiments, and the invention is not limited to this description. Those skilled in the art can make several simple deductions or substitutions without departing from the spirit of the invention, and these should be considered within the scope of the invention.

Claims (6)

1. A three-dimensional face completion method, characterized by comprising the following steps:
s1: inputting a face model and a full-head model: respectively input the 3D point cloud of the face model to be completed and the three-dimensional mesh of the full-head model, both of which carry texture information;
s2: extracting the 3D feature points of the full-head model: select the face region and the neck region of the full-head model, and extract the 3D face feature points of the full-head model;
s3: extracting the 3D feature points of the face model: extract them with a method that synchronously performs face three-dimensional point cloud feature point localization and face segmentation;
s4: coarse alignment of the face model and the full-head model: according to the 3D feature points of the face model extracted in step S3 and the 3D face feature points of the full-head model extracted in step S2, perform coarse alignment with the ICP algorithm;
s5: extracting the face region: set a distance threshold with respect to the face region of the aligned full-head model's three-dimensional mesh, and extract the face region of the face model;
s6: deformation: taking the face region of the three-dimensional face extracted in step S5 as the target, deform the three-dimensional mesh of the full-head model with an iterative deformation algorithm until the full-head model fits the face model and the convergence condition is reached;
s7: fusing the face model and the full-head model: fuse the face model and the deformed full-head model together, and fuse their textures, obtaining the full-head model after face replacement;
s8: outputting: output the full-head model of the completed three-dimensional face.
2. The three-dimensional face completion method of claim 1, wherein step S2 includes the steps of:
s21: selecting the face region of the full-head model's three-dimensional mesh, and obtaining the indexes of the face region among the vertices of the full-head model's three-dimensional mesh;
s22: selecting the neck region of the full-head model's three-dimensional mesh, and obtaining the indexes of the neck region among the vertices of the full-head model's three-dimensional mesh;
s23: obtaining the 3D face feature points of the full-head model with the method that synchronously performs face three-dimensional point cloud feature point localization and face segmentation.
3. The three-dimensional face completion method of claim 1, wherein step S4 includes the steps of:
s41: pairing the 3D feature points of the face model extracted in step S3 with the corresponding 3D face feature points of the full-head model extracted in step S2;
s42: computing, with the ICP algorithm, the rigid transformation matrix of the point clouds from the corresponding feature points, and coarsely aligning the two.
4. The three-dimensional face completion method of claim 1, wherein step S5 includes the steps of:
s51: building a KD tree from the vertex coordinates of the face region of the aligned full-head model's three-dimensional mesh;
s52: for each point of the 3D face point cloud of the face model, searching the nearest point in the KD tree, and judging the point of the 3D face point cloud to belong to the face region when the distance is smaller than 1.5 cm.
5. The three-dimensional face completion method of claim 1, wherein step S6 includes the steps of:
s61: setting the full-head model as the source of the mesh deformation and the face model as the target, and representing each iterative deformation as an affine transformation matrix per vertex of the full-head model;
s62: setting the loss terms of the iterative deformation of the full-head model: a distance loss term, a stiffness loss term, and a fixed term;
s63: iteratively solving the head deformation with different stiffness parameters until the full-head model fits the face model and the convergence condition is reached.
6. The three-dimensional face completion method of claim 1, wherein step S7 includes the steps of:
s71: after the full-head model has been deformed, replacing the point cloud of its face with the point cloud of the 3D face scan;
s72: separately computing the histograms of the point cloud colors of the full-head model's face region and of the 3D face scan, specifying the histogram of the point cloud colors of the full-head model's face skin region so that its distribution matches the color histogram of the 3D face scan, and then applying Gaussian filtering to the boundary texture to smooth the color difference at the seam;
s73: regenerating the mesh to obtain the full-head model.
CN202011102085.6A 2020-10-15 2020-10-15 Three-dimensional face completion method Active CN112200905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011102085.6A CN112200905B (en) Three-dimensional face completion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011102085.6A CN112200905B (en) Three-dimensional face completion method

Publications (2)

Publication Number Publication Date
CN112200905A CN112200905A (en) 2021-01-08
CN112200905B true CN112200905B (en) 2023-08-22

Family

ID=74009045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011102085.6A Active CN112200905B (en) Three-dimensional face completion method

Country Status (1)

Country Link
CN (1) CN112200905B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113426129B (en) * 2021-06-24 2024-03-01 网易(杭州)网络有限公司 Method, device, terminal and storage medium for adjusting appearance of custom roles
CN113837053B (en) * 2021-09-18 2024-03-15 福建库克智能科技有限公司 Biological face alignment model training method, biological face alignment method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
WO2017026839A1 (en) * 2015-08-12 2017-02-16 트라이큐빅스 인크. 3d face model obtaining method and device using portable camera
WO2019050808A1 (en) * 2017-09-08 2019-03-14 Pinscreen, Inc. Avatar digitization from a single image for real-time rendering
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image
CN110688947A (en) * 2019-09-26 2020-01-14 西安知象光电科技有限公司 Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
CN111028341A (en) * 2019-12-12 2020-04-17 天目爱视(北京)科技有限公司 Three-dimensional model generation method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201023092A (en) * 2008-12-02 2010-06-16 Nat Univ Tsing Hua 3D face model construction method
KR101997500B1 (en) * 2014-11-25 2019-07-08 삼성전자주식회사 Method and apparatus for generating personalized 3d face model

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
WO2017026839A1 (en) * 2015-08-12 2017-02-16 트라이큐빅스 인크. 3d face model obtaining method and device using portable camera
WO2019050808A1 (en) * 2017-09-08 2019-03-14 Pinscreen, Inc. Avatar digitization from a single image for real-time rendering
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image
CN110688947A (en) * 2019-09-26 2020-01-14 西安知象光电科技有限公司 Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
CN111028341A (en) * 2019-12-12 2020-04-17 天目爱视(北京)科技有限公司 Three-dimensional model generation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Interactive adjustment of three-dimensional face mesh models; 李梦东 (Li Mengdong), 阮秋琦 (Ruan Qiuqi); Journal of Image and Graphics (No. 08); 818-823 *

Also Published As

Publication number Publication date
CN112200905A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
KR102154470B1 (en) 3D Human Hairstyle Generation Method Based on Multiple Feature Search and Transformation
Lozes et al. Partial difference operators on weighted graphs for image processing on surfaces and point clouds
Yin et al. Morfit: interactive surface reconstruction from incomplete point clouds with curve-driven topology and geometry control.
Pauly et al. Example-based 3d scan completion
CN108038906B (en) Three-dimensional quadrilateral mesh model reconstruction method based on image
CN112200905B (en) Three-dimensional face completion method
CN104299263B (en) A kind of method that cloud scene is modeled based on single image
CN101866497A (en) Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system
EP1960928A2 (en) Example based 3d reconstruction
Zeng et al. Region-based bas-relief generation from a single image
CN107730587B (en) Rapid three-dimensional interactive modeling method based on pictures
Lee et al. Segmenting a deforming mesh into near-rigid components
CN112767531B (en) Mobile-end-oriented human body model face area modeling method for virtual fitting
CN111462030A (en) Multi-image fused stereoscopic set vision new angle construction drawing method
CN107369204A (en) A kind of method for recovering the basic three-dimensional structure of scene from single width photo based on deep learning
CN114868128A (en) Computer-implemented method for personalizing spectacle frame elements by determining a parameterized replacement model of spectacle frame elements, and device and system using such a method
CN113538569A (en) Weak texture object pose estimation method and system
CN117157673A (en) Method and system for forming personalized 3D head and face models
Thiemann et al. 3D-symbolization using adaptive templates
CN116740281A (en) Three-dimensional head model generation method, three-dimensional head model generation device, electronic equipment and storage medium
CN108876922B (en) Grid repairing method based on internal dihedral angle compensation regularization
CN113781372B (en) Drama facial makeup generation method and system based on deep learning
CN114549795A (en) Parameterization reconstruction method, parameterization reconstruction system, parameterization reconstruction medium and parameterization reconstruction equipment for shoe tree curved surface
CN113870404A (en) Skin rendering method and device of 3D model
Brett et al. A Method of 3D Surface Correspondence for Automated Landmark Generation.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant