CN113808272B - Texture mapping method in three-dimensional virtual human head and face modeling - Google Patents

Texture mapping method in three-dimensional virtual human head and face modeling

Info

Publication number
CN113808272B
CN113808272B (application CN202110984213.2A)
Authority
CN
China
Prior art keywords
view
face
image
layer
pyramid
Prior art date
Legal status
Active
Application number
CN202110984213.2A
Other languages
Chinese (zh)
Other versions
CN113808272A (en)
Inventor
樊养余
刘洋
马浩悦
李文星
郭哲
齐敏
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110984213.2A priority Critical patent/CN113808272B/en
Publication of CN113808272A publication Critical patent/CN113808272A/en
Application granted granted Critical
Publication of CN113808272B publication Critical patent/CN113808272B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T3/02
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

According to the texture mapping method in three-dimensional virtual human head and face modeling, a personalized human face model is constructed, a front view and a side view of the real human face of the person to be mapped are cut according to the positions of the characteristic points, and the offset and the offset angle of each flush area are calculated; a miscut (shear) transformation is performed on the cut side view according to the offset and the offset angle to obtain a miscut-transformed side view; a Laplacian pyramid algorithm is introduced to fuse the front view after cutting, the side view after miscut transformation and the mirror image of the side view after miscut transformation, and the fused image is mapped into the personalized face model to obtain the texture mapping face model of the character to be mapped. The invention can effectively reduce texture cracks while keeping the data acquisition cost low, so that the mapped texture mapping face model is more vivid.

Description

Texture mapping method in three-dimensional virtual human head and face modeling
Technical Field
The invention belongs to the technical field of image processing and computer graphics, and particularly relates to a texture mapping method in three-dimensional virtual human head and face modeling.
Background
Texture mapping is an important stage of three-dimensional virtual face modeling, and realistic three-dimensional virtual face reconstruction is based on good texture mapping, but in practice, the following problems exist:
(1) Geometric reconstruction and texture feature recovery of the face from three-dimensional scanning data can achieve an ideal face effect with high model precision, but the scanning equipment is expensive, complicated to operate and difficult to popularize;
(2) When front and side photos of a person are used to generate the head panoramic texture map, the textures come from different photos, so gaps appear because of color differences between the texture source pictures, and color fusion must be performed on the regions with different texture sources so that the texture colors transition smoothly.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a texture mapping method in three-dimensional virtual human head and face modeling. The technical problems to be solved by the invention are realized by the following technical scheme:
the texture mapping method in the three-dimensional virtual human head and face modeling provided by the invention comprises the following steps:
acquiring a front view and a side view of a real face of a person to be mapped;
marking the feature points of the front view and the side view, selecting the facial edge feature points in the front view and the side view after the feature points are marked, correcting the facial edge feature points in the front view and the side view after the feature points are marked, and cutting according to the positions of the corrected facial edge feature points to obtain the front view and the side view containing the facial edge feature points after cutting;
acquiring a head data set describing the proportion and the structure of the head of the real person;
constructing a virtual general face model by utilizing the head data set;
performing geometric deformation and characteristic point adaptation on the general face model based on the characteristic points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain a personalized face model;
calculating the offset and the offset angle of each flush area aiming at the flush areas of the front view after cutting and the side view after cutting;
performing miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut transformed side view;
calculating a Laplacian pyramid of mirror images of the front view after cutting, the side view after miscut transformation and the side view after miscut transformation;
splicing the cut front view, the side view after the miscut transformation and the Laplacian pyramid of the mirror image according to the dividing lines of each layer to obtain a spliced Laplacian pyramid;
reconstructing a Gaussian pyramid layer by layer according to the spliced Laplacian pyramid;
determining a fusion image according to the Gaussian pyramid;
and mapping the fusion image into the personalized face model to obtain the texture mapping face model of the character to be mapped.
Optionally, the constructing a virtual generic face model using the head dataset includes:
according to the data of the basic proportion and the structure of the head of the real person described in the head data set, a virtual general head-face model is built in 3DsMax and is refined;
wherein, the edge characteristic point transformation obviously selects a specific area,
the general head-face model is a three-dimensional grid patch model, and the three-dimensional grid patch model is expressed as:
M = {V_M, F_M, G_M}
where V_M represents the set of vertex coordinates of the three-dimensional mesh, F_M represents the index sets of vertices that make up each patch, and G_M represents other information including smoothing groups, texture links and texture references.
Optionally, labeling feature points on the front view and the side view, selecting facial edge feature points in the front view and the side view after labeling feature points, correcting facial edge feature points in the front view and the side view after labeling feature points, and clipping according to positions of corrected facial edge feature points, where obtaining the front view and the side view containing facial edge feature points after clipping includes:
the front view and the side view are subjected to contour line cutting, scaling in equal proportion and face alignment, and the front view and the side view after alignment are obtained;
wherein the front view and the side view after alignment are divided into a plurality of areas according to contour lines;
marking characteristic points in different areas of the front view and the side view after alignment;
selecting a first face edge characteristic point in the front view of the real person after the characteristic points are marked, and selecting a second face edge characteristic point in the side view;
correcting the first face edge feature point and the second face edge feature point;
taking the corrected first face edge characteristic points and the corrected second face edge characteristic points which are positioned in the flush area as matching characteristic points, and storing coordinate information of the matching characteristic points;
connecting each first face edge feature point in the front view to draw a first curve, and connecting each second face edge feature point in the side view to draw a second curve;
cutting the front view by taking the first curve and the mirror image curve of the first curve as the critical, reserving the middle part of the front view, cutting the left side area of the side view by taking the second curve as the critical, and obtaining the cut front view and the cut side view.
Optionally, the performing geometric deformation and feature point adaptation on the universal face model based on the feature points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain a personalized face model includes:
determining characteristic points in the general face model;
based on the feature points marked by the front view and the side view, the radial basis interpolation algorithm is utilized to locally deform the curved surface near the feature points of the general face model, and the personalized face model is obtained.
Optionally, the front view and the side view are cut according to contour lines, scaled according to equal proportions and face aligned, and obtaining the front view and the side view after alignment includes:
cutting the front view and the side view respectively to obtain a front view and a side view composed of different areas;
scaling the front view and the side view after cutting in equal proportion in the corresponding areas so that the front view and the side view have the same size in the same area;
and taking the top of the head, eyes, lips and chin as contour lines, and aligning the front view and the side view after the scaling in equal proportion to each other to obtain a front view and a side face image after the alignment.
Optionally, the correcting the first face edge feature point and the second face edge feature point comprises:
and correcting the first face edge characteristic point and the second face edge characteristic point by dragging so as to enable the first face edge characteristic point and the second face edge characteristic point to be positioned at the face edge.
Wherein the Laplacian pyramid is represented as:
LP_l = G_l − Expand(G_{l+1}),  0 ≤ l < N;  LP_N = G_N
the Laplacian image after each layer is stitched is expressed as:
LP_l(total) = LP_l(left) + LP_l(front) + LP_l(right)
the reconstructed Gaussian pyramid is expressed as:
G_N = LP_N;  G_l = LP_l + Expand(G_{l+1}),  0 ≤ l < N
wherein the Expand operator represents that the input image is interpolated and enlarged, G_l represents the l-th layer Gaussian pyramid image, LP_l represents the l-th layer Laplacian pyramid image, and N is the number of layers.
Optionally, the splitting the split front view, the split side view, and the laplacian pyramid of the mirror image according to the dividing lines of each layer to obtain the split laplacian pyramid includes:
the front view after cutting, the side view after miscut transformation and the Laplacian pyramid of the mirror image are spliced according to the dividing lines of all layers, fusion is carried out on the dividing lines in a pixel weighted average mode, and the Laplacian image after each layer of splicing is obtained;
and restoring the Laplacian pyramid from the spliced Laplacian image of each layer to obtain the spliced Laplacian pyramid.
Optionally, mapping the fused image into the personalized face model includes:
expanding the grids of the face personalized model in an orthogonal projection mode;
projecting the front grid of the personalized model of the human face to a two-dimensional plane right in front of the human face in the fused image;
and projecting the two side grids of the personalized face model onto the side planes of the fused image that are perpendicular to the front plane.
Optionally, after projecting the two side grids of the personalized face model onto the perpendicular side planes of the fused image, the texture mapping method further comprises:
and fusing the projected two lateral grids and the projected front grid according to the projection boundary to obtain the texture mapping face model of the character to be mapped.
According to the texture mapping method in the three-dimensional virtual human head and face modeling, a personalized human face model is constructed, a front view and a side view of the real human face of the person to be mapped are cut according to the positions of the characteristic points, and the offset and the offset angle of each flush area are calculated; miscut transformation is performed on the cut side view according to the offset and the offset angle to obtain a miscut-transformed side view; a Laplacian pyramid algorithm is introduced to fuse the front view after cutting, the side view after miscut transformation and the mirror image of the side view after miscut transformation, and the fused image is mapped into the personalized face model to obtain the texture mapping face model of the character to be mapped. The invention can effectively reduce texture cracks while keeping the data acquisition cost low, so that the mapped texture mapping face model is more vivid.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a schematic flow chart of a texture mapping method in three-dimensional virtual human head and face modeling according to an embodiment of the present invention;
FIG. 2a is a front face and side face diagram of a real person with feature points labeled;
FIG. 2b shows the preprocessing results of the front and side views of a person;
FIG. 2c is a front side view feature point calibration result;
FIG. 2d is a front side view area division;
FIG. 3a is a generic face geometry model and its mesh structure;
FIG. 3b is a schematic diagram of FDP points in the MPEG-4 standard;
FIG. 3c is a schematic diagram of a feature point used in the present invention;
FIG. 4 is a personalized face model modeling test result;
FIG. 5 is a schematic diagram of the feature point miscut transformation;
FIG. 6 is a comparison of results before and after the miscut transformation;
FIG. 7 is a graph of image cropping and direct stitching results;
FIG. 8 is a pyramid decomposition and fusion process diagram of three images;
FIG. 9 is a flowchart of a Laplacian pyramid restored image;
FIG. 10 is a texture fusion result based on the Laplacian pyramid;
FIG. 11 is a projection decomposition of a personalized face model mesh;
FIG. 12 is a projection of a positive side grid onto a two-dimensional plane;
FIG. 13 is a diagram of the front and side grid stitching results;
fig. 14 is an image of a personalized face model with texture mapping added.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
As shown in fig. 1, the texture mapping method in three-dimensional virtual human head and face modeling provided by the invention comprises the following steps:
s1, acquiring a front view and a side view of a real face of a person to be mapped;
s2, marking feature points of the front view and the side view, selecting facial edge feature points in the front view and the side view after feature point marking, correcting the facial edge feature points in the front view and the side view after feature point marking, and cutting according to the corrected facial edge feature point positions to obtain the front view and the side view containing the facial edge feature points after cutting;
as an alternative embodiment of the present invention, step S2 includes:
s21: the front view and the side view are subjected to contour line cutting, scaling in equal proportion and face alignment, and the front view and the side view after alignment are obtained;
wherein the front view and the side view after alignment are divided into a plurality of areas according to contour lines;
the step S21 includes:
step a, cutting the front view and the side view respectively to obtain the front view and the side view composed of different areas;
step b, scaling the front view and the side view after cutting in equal proportion in the corresponding areas so that the front view and the side view have the same size in the same area;
and c, taking the top of the head, eyes, lips and chin as contour lines, and aligning the front view and the side view after equal scaling to obtain the front view and the side face image after mutual alignment.
This step cuts and scales the front and side views of the face so that the facial region information in the images is retained as much as possible and centered, and takes the top of the head, eyes, lips and chin as the four main contour lines to align the facial information of the front and side views, as shown in fig. 2b.
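For illustration only, a minimal sketch of this alignment step (assuming the four contour-line heights have already been located in both views; the function name, dictionary keys and use of OpenCV are assumptions of this sketch, not part of the original disclosure) might look like:

```python
import cv2

def align_side_to_front(side_img, side_marks, front_marks):
    """Scale the side view so the head-top and chin contour lines lie at the
    same vertical span as in the front view (simplified sketch).

    side_marks / front_marks: dicts of contour-line y-coordinates with keys
    'top', 'eyes', 'lips', 'chin' (hypothetical structure).
    """
    scale = (front_marks['chin'] - front_marks['top']) / float(
        side_marks['chin'] - side_marks['top'])
    h, w = side_img.shape[:2]
    return cv2.resize(side_img, (int(w * scale), int(h * scale)))
```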
Step S22: marking characteristic points in different areas of the front view and the side view after alignment;
Fig. 2a shows the front face and side face of a real person after the feature points are marked. In this step, the face feature points are marked in the front view and the side view respectively. The width and height information of a feature point can be obtained from the front view and is recorded as (x, y_1); the height and depth information of the feature point can be obtained from the side photo and is recorded as (y_2, z). Let y = (y_1 + y_2)/2 to obtain the three-dimensional coordinates (x, y, z) of the personalized face feature point.
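As a small illustrative sketch of this coordinate combination (array names are assumptions; the labeled points are assumed to be stored as NumPy arrays in corresponding order):

```python
import numpy as np

def combine_feature_points(front_xy1, side_y2z):
    """Combine front-view (x, y1) and side-view (y2, z) labels into (x, y, z),
    with y = (y1 + y2) / 2.

    front_xy1: (n, 2) array of (x, y1) coordinates from the front view.
    side_y2z:  (n, 2) array of (y2, z) coordinates from the side view.
    """
    x = front_xy1[:, 0]
    y = (front_xy1[:, 1] + side_y2z[:, 0]) / 2.0
    z = side_y2z[:, 1]
    return np.stack([x, y, z], axis=1)
```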
Step S23: selecting a first facial edge characteristic point in a front view of the real character after marking the characteristic points, and selecting a second facial edge characteristic point in a side view;
in the front view and the side view after marking the feature points, respectively selecting a first face edge feature point of each area of the front view and a second face edge feature point of each area of the side view;
step S24: correcting the first face edge feature point and the second face edge feature point;
the invention can correct the first face edge feature point and the second face edge feature point by dragging so as to enable the first face edge feature point and the second face edge feature point to be positioned at the face edge.
Step S25: taking the corrected first face edge characteristic points and the corrected second face edge characteristic points which are positioned in the flush area as matching characteristic points, and storing coordinate information of the matching characteristic points;
Facial edge feature points of the front and side views are defined. In this step, 10 facial edge feature points can be defined in the face region; by default they are uniformly distributed along the vertical midline of the image, with the image height as the unit, dividing the image into 9 transverse contour regions. Draggable points are created and drawn on the image, each draggable point is selected and moved to a suitable position on the face edge, and the coordinates of the draggable points are recorded; the feature points are then aligned to the draggable points, and the i-th feature point (0 ≤ i ≤ 10) is recorded as p_i(x_i, y_i), with the image center taken as the coordinate origin. The facial edge feature point sets of the front and side views are denoted P_f and P_s respectively; as in fig. 2c, the facial edge feature points are marked with white dots, and finally the positional information of the facial edge feature points is saved.
Step S26: connecting each first face edge feature point in the front view to draw a first curve, and connecting each second face edge feature point in the side view to draw a second curve;
step S27: cutting the front view by taking the first curve and the mirror image curve of the first curve as the critical, reserving the middle part of the front view, cutting the left side area of the side view by taking the second curve as the critical, and obtaining the front view after cutting and the side view after cutting.
First, according to the defined facial edge feature point sets P_f and P_s, the points in the front and side views are connected to draw the characteristic curves, and the front and side views are divided into nine transverse regions according to the position information of the facial edge feature points.
S3, acquiring a head data set describing the proportion and the structure of the head of the real person;
s4, constructing a virtual general face model by utilizing the head data set;
the method comprises the steps that a virtual general head-face model is built in 3DsMax and is refined according to data of basic proportion and structure of the head of a real person described in a head data set;
wherein, the edge characteristic point transformation obviously selects a specific area,
the three-dimensional mesh patch model is expressed as:
M = {V_M, F_M, G_M}    (1)
where V_M represents the set of vertex coordinates of the three-dimensional mesh, F_M represents the index sets of vertices that make up each patch, and G_M represents other information including smoothing groups, texture links and texture references.
The invention can scan a real face with a three-dimensional laser scanner to obtain an original three-dimensional face mesh model, or the required head model can be exported directly from the model library of the professional human modeling software Poser, or a three-dimensional face model can be built manually with professional modeling software such as 3DS MAX or MAYA. In this step, a virtual human general head-face model is built in 3DsMax; the general head-face model is a three-dimensional mesh patch model, as shown in fig. 3a.
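A possible in-memory representation of the mesh patch model M = {V_M, F_M, G_M} is shown below only as an illustrative sketch; the class and field names are chosen here and are not taken from the original disclosure:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MeshPatchModel:
    """Illustrative container for M = {V_M, F_M, G_M}."""
    vertices: np.ndarray                        # V_M: (n, 3) vertex coordinates
    faces: np.ndarray                           # F_M: (m, k) vertex index sets forming each patch
    extras: dict = field(default_factory=dict)  # G_M: smoothing groups, texture links, texture references
```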
S5, performing geometric deformation and characteristic point adaptation on the general face model based on the characteristic points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain a personalized face model;
as an alternative embodiment of the present invention, step S5 includes:
s51: determining characteristic points in the general face model;
referring to fig. 3b, fig. 3b is an FDP point in the MPEG-4 standard, and the face feature point selected in the feature point adaptation of the present invention is based on the MPEG-4 standard, in which the FDP is related to the geometric modeling of the face. In MPEG-4, 84 FDP feature points are defined. These feature points are divided into 11 groups of cheeks, eyes, nose, mouth, ears, etc., and by definition of these feature points a generic face model can be converted into a specific face model. Referring to fig. 2b, fig. 2b redefines 153 face feature points, which are defined in the general face model according to the present invention, with reference to the MPEG-4 standard, as shown in fig. 3c, including 14 parts of eyes, eyelids, face profile, etc.
S52: and carrying out local deformation on a curved surface near the feature points of the general face model based on the feature points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain a personalized face model.
In order to normalize the general face to the same coordinate space as the personalized face in the photos, an overall transformation must be applied to the general face model so that its size is substantially the same as that of the personalized face. Let the coordinates of any mesh vertex of the general face model be V(V_x, V_y, V_z), take the center between the two eyes as the origin O(O_x, O_y, O_z), and let the measured width, height and depth of the model be L_x, L_y and L_z respectively; the new position V′(V′_x, V′_y, V′_z) after the transformation is calculated according to the following formula:
V′_i = (V_i − O_i) · l_i / L_i,  i = x, y, z
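A minimal sketch of this overall scaling step follows. It assumes l_x, l_y, l_z are the corresponding width, height and depth measured from the personalized face photos, which the text implies but does not name explicitly; all function and parameter names are illustrative:

```python
import numpy as np

def normalize_generic_model(vertices, eye_center, generic_size, face_size):
    """Apply V'_i = (V_i - O_i) * l_i / L_i for i = x, y, z.

    vertices:     (n, 3) vertex coordinates V of the generic face model.
    eye_center:   (3,) origin O, the midpoint between the two eyes.
    generic_size: (3,) measured (L_x, L_y, L_z) of the generic model.
    face_size:    (3,) assumed (l_x, l_y, l_z) of the personalized face.
    """
    scale = np.asarray(face_size, dtype=float) / np.asarray(generic_size, dtype=float)
    return (np.asarray(vertices, dtype=float) - np.asarray(eye_center, dtype=float)) * scale
```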
the radial basis function (Radial Basic Function, RBF) is a deformation method based on spatially discrete data interpolation for approximation of a multivariate function in a multidimensional space. The method first fits a continuous multi-variable function by using a linear combination of basis functions. The radial basis function has a good fitting effect on irregular point clouds, and can generate a smooth surface in a three-dimensional space, so that the method is widely applied to reconstruction of a three-dimensional face model.
The principle of applying the radial basis interpolation algorithm to the geometric deformation of the face is as follows. The n feature points defined in the general face model are known, as are the coordinates of all mesh vertices of the general face model. Suppose a feature point moves from its original position p_i to a new position p′_i, with displacement Δp_i = p′_i − p_i; the displacement Δp of each non-feature point p can then be interpolated using a radial basis function. Here the displacements Δp_i of the feature points serve as the function values of the interpolation function and the n feature points p_i serve as the observations, so that, once the form of the interpolation function is known, each of its parameters can be trained. Substituting a non-feature point into the interpolation function, the resulting value F(p) is the displacement Δp of that non-feature point.
Let the coordinates of the feature points (observation points) of the general face model be P = {p_1, p_2, …, p_n}, and let the coordinates of the personalized face feature points obtained through data acquisition be P′ = {p′_1, p′_2, …, p′_n}; the displacements of the feature points are then F = {Δp_1, Δp_2, …, Δp_n}. The radial basis interpolation function has the form:
f(p) = Σ_{i=1}^{n} c_i · φ(‖p − p_i‖) + Mp + t
where Mp + t is a low-order polynomial, here an affine transformation. To maintain the smoothness of the interpolation result, the following constraint conditions are established:
Σ_{i=1}^{n} c_i = 0,  Σ_{i=1}^{n} c_i · p_i^T = 0
Based on the obtained feature point displacements, set Δp_k = f(p_k), 1 ≤ k ≤ n, namely:
Δp_k = Σ_{i=1}^{n} c_i · φ(‖p_k − p_i‖) + Mp_k + t
Together with the constraint conditions this yields n + 4 equations. Writing the equation set in matrix form gives:
| Φ    P    1 |   | C |   | F |
| P^T  0    0 | · | M | = | 0 |
| 1^T  0    0 |   | t |   | 0 |
where Φ_{j,i} = φ(‖p_j − p_i‖), 1 ≤ j, i ≤ n, P is the n×3 matrix of feature point coordinates and 1 is a column vector of ones. For the basis function the invention selects the exponential function φ(r) = exp(−r/R), with the parameter R set to 64.
Solving this linear equation set yields the radial basis function coefficients c_i and the affine transformation components M and t. In three dimensions, c_i and t are three-dimensional row vectors and M is a 3×3 matrix.
Substituting the non-feature point coordinates p of the general face model into the interpolation function expression gives the displacement Δp generated by the deformation of each non-feature point, so the deformed coordinates of all non-feature points, i.e. the personalized face non-feature point coordinates p′ = p + Δp, can be computed. The personalized face mesh model is thus obtained.
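The radial basis deformation described above can be sketched as follows. This is an illustrative implementation under the stated assumptions (exponential basis φ(r) = exp(−r/R) with R = 64 plus an affine term); the matrix layout and all variable names are this sketch's choices, not the patent's:

```python
import numpy as np

def rbf_deform(control_pts, control_disp, query_pts, R=64.0):
    """Displace non-feature vertices by radial basis interpolation (sketch).

    control_pts:  (n, 3) feature points p_i of the generic model.
    control_disp: (n, 3) displacements dp_i = p'_i - p_i toward the personalized face.
    query_pts:    (m, 3) non-feature vertices p to be displaced.
    """
    control_pts = np.asarray(control_pts, dtype=float)
    control_disp = np.asarray(control_disp, dtype=float)
    query_pts = np.asarray(query_pts, dtype=float)
    n = control_pts.shape[0]
    phi = lambda r: np.exp(-r / R)          # exponential basis, R = 64

    # Interpolation conditions: sum_i c_i*phi(||p_k - p_i||) + p_k*M + t = dp_k
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = phi(d)
    A[:n, n:n + 3] = control_pts            # affine part M
    A[:n, n + 3] = 1.0                      # translation part t
    A[n:n + 3, :n] = control_pts.T          # constraint: sum_i c_i * p_i = 0
    A[n + 3, :n] = 1.0                      # constraint: sum_i c_i = 0

    b = np.zeros((n + 4, 3))
    b[:n] = control_disp

    sol = np.linalg.solve(A, b)             # rows: c (n), M (3), t (1)
    c, M, t = sol[:n], sol[n:n + 3], sol[n + 3]

    dq = np.linalg.norm(query_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    return query_pts + phi(dq) @ c + query_pts @ M + t
```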
The invention uses the radial basis interpolation algorithm to geometrically deform the general face model obtained in step S4 and thereby personalize the general face; the result shows good accuracy in the representation of facial features, and the test effect is shown in fig. 4. The front view of the person input in this step is 583 px × 618 px in size, and the side view is 640 px × 650 px.
S6, calculating offset and offset angle of each flush area aiming at the flush areas of the front view after cutting and the side view after cutting;
First, after the front and side views are divided into regions, the horizontal offset Δx required for the i-th region in the side view is:
Δx = (x_{f,i+1} − x_{f,i}) − (x_{s,i+1} − x_{s,i})
where x_{f,i} and x_{f,i+1} are the horizontal coordinates of the upper and lower boundaries of the i-th region in the front view, and x_{s,i} and x_{s,i+1} are the horizontal coordinates of the upper and lower boundaries of the i-th region in the side view.
Further, the offset angle θ is calculated. In the horizontal miscut transformation the offsets of all pixels in a given row of the image are equal, so the boundary pixels of the image can be treated as a special case of the transformation when computing the offset angle θ, as shown in fig. 5. In the feature point transformation legend, it is temporarily assumed that the starting points of the two vectors formed by the four feature points are moved into coincidence, so the miscut offset Δx simplifies to the lateral difference of the lower-boundary feature point positions; on the premise that Δx remains consistent, comparing the side-view boundary pixels before and after the transformation shows that the miscut angle θ satisfies the following relation:
tanθ = (x′_{i+1,0} − x_{i+1,0}) / (y_{i+1,0} − y_{i,0})
From this, the horizontal miscut matrix of the region is constructed as:
| 1  tanθ |
| 0    1  |
which maps a pixel (x, y) to (x + y·tanθ, y).
s7, performing miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut transformed side view;
in this step, the miscut process is sequentially completed for each region of the side view, so as to obtain an output image with a characteristic curve consistent with the front view, and the comparison result is shown in fig. 6.
S8, calculating a Laplacian pyramid of mirror images of the cut front view, the side view after the miscut transformation and the side view after the miscut transformation;
the method comprises the steps that a front view after cutting, a side view after misplaced transformation and a Laplacian pyramid of a mirror image are spliced according to dividing lines of the layers, fusion is carried out on the dividing lines in a pixel weighted average mode, and a Laplacian image after each layer of splicing is obtained; as shown by the effect of fig. 7. And restoring the Laplacian pyramid from the spliced Laplacian image of each layer to obtain the spliced Laplacian pyramid.
S9, splicing the cut front view, the side view after the miscut transformation and the Laplacian pyramid of the mirror image according to the dividing lines of each layer to obtain a spliced Laplacian pyramid;
s10, reconstructing a Gaussian pyramid layer by layer according to the spliced Laplacian pyramid;
s11, determining a fusion image according to a Gaussian pyramid;
the process of obtaining the fusion image by fusion of the laplacian pyramids is shown in fig. 8. For three face texture images on the left side, the front side and the right side, the Gaussian pyramid decomposition of the images is solved first. Let the original image be G 0 In G 0 As the zeroth layer of the gaussian pyramid. The original input image is subjected to low-pass filtering and interlaced downsampling by 2 to obtain a first layer G of a Gaussian pyramid 1 The method comprises the steps of carrying out a first treatment on the surface of the The first layer image is also subjected to low-pass filtering and downsampling by 2 to obtain a second layer image G of the Gaussian pyramid 2 The method comprises the steps of carrying out a first treatment on the surface of the The above process is repeated, and the obtained current layer image is 1/4 of the previous layer image in size. To this end, from G 0 、G 1 、…、G l An image sequence constituting a gaussian pyramid. Image G of the first layer of a Gaussian pyramid l As shown in the following formula.
Wherein the method comprises the steps of
Where w (m, n) is a generator kernel, a 5×5 gaussian template is used in the present invention.
The Laplacian pyramid is a residual prediction pyramid. The prediction residual is the difference between the l-th layer image and the predicted image obtained by interpolating and enlarging the (l+1)-th layer image. After the Gaussian pyramid of the image is established, each layer image G_l is interpolated and enlarged to obtain an enlarged image G_l* whose pixel size is the same as that of G_{l-1}, i.e. G_l* is G_l enlarged four times by interpolation, according to:
G_l*(i, j) = 4 Σ_{m=-2..2} Σ_{n=-2..2} w(m, n) · G_l((i + m)/2, (j + n)/2)
the Laplacian pyramid is constructed from the following formula
By LP 1 、LP 0 、…、LP N The pyramid is established as a Laplacian pyramid, and each layer of image is the difference obtained by interpolating and amplifying the image of the Gaussian pyramid and the image of the previous layer. After the Laplacian pyramid of each image is obtained, each layer in the Laplacian pyramids of the left side, the front side and the right side images is spliced according to a boundary, the Laplacian pyramids of the first layer after the splicing are fused in a pixel weighted average mode near the boundary of the spliced images, and the Laplacian pyramids of the first layer after the splicing are recorded as follows:
and recovering the corresponding Gaussian pyramid layer by layer according to the spliced Laplacian pyramid. The formula for reconstructing the gaussian pyramid is as follows:
in the formula, the expanse operator represents that the input image is interpolated and amplified, namelyG l Image representing layer I Gaussian pyramid, P l Representing a layer l laplacian pyramid image, N being the number of layers.
The invention can adopt a 4-layer Laplacian pyramid and evaluate recursively from the topmost layer of the Laplacian pyramid downward, layer by layer, finally obtaining the Gaussian pyramid of the spliced image; the bottom image G_0 of this Gaussian pyramid is the final fused image of the front and side photos. The recursion flow and the fusion result are shown in fig. 9 and fig. 10.
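The decomposition, per-layer splicing, and reconstruction described above can be sketched with OpenCV as follows. This is an illustrative stand-in that blends the three textures with per-level weight maps derived from the seam positions; it assumes the three images have already been warped onto a common canvas (which the patent achieves through cropping, mirroring and the miscut transformation), and all names are this sketch's own:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Gaussian pyramid via pyrDown, then LP_l = G_l - Expand(G_{l+1}), LP_N = G_N."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[l] - cv2.pyrUp(gauss[l + 1],
                                dstsize=(gauss[l].shape[1], gauss[l].shape[0]))
           for l in range(levels)]
    lap.append(gauss[levels])
    return lap

def fuse_and_reconstruct(imgs, weights, levels=4):
    """Blend the left/front/right textures layer by layer and recover the result
    with G_N = LP_N, G_l = LP_l + Expand(G_{l+1}).

    imgs:    list of same-size color images [left, front, right].
    weights: list of (h, w) float maps summing to 1 per pixel and ramping
             across the dividing lines (the pixel-weighted average at the seams).
    """
    pyrs = [laplacian_pyramid(im, levels) for im in imgs]
    # Gaussian pyramids of the weight maps give smooth per-level seam weights.
    wpyrs = []
    for w in weights:
        wp = [w.astype(np.float32)]
        for _ in range(levels):
            wp.append(cv2.pyrDown(wp[-1]))
        wpyrs.append(wp)
    fused = [sum(wp[l][..., None] * pyr[l] for wp, pyr in zip(wpyrs, pyrs))
             for l in range(levels + 1)]
    out = fused[-1]
    for l in range(levels - 1, -1, -1):
        out = fused[l] + cv2.pyrUp(out, dstsize=(fused[l].shape[1], fused[l].shape[0]))
    return np.clip(out, 0, 255).astype(np.uint8)
```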
S12, mapping the fusion image into a personalized face model to obtain a texture mapping face model of the character to be mapped.
As an alternative embodiment of the present invention, step S12 includes:
s121: expanding the grids of the personalized model of the face in an orthogonal projection mode;
s122: projecting the front grid of the personalized model of the human face to a two-dimensional plane in front of the human face in the fused image;
s123: and projecting the two side grids of the personalized face model onto the side planes of the fused image that are perpendicular to the front plane.
The two projected side grids and the projected front grid are fused according to the projection boundary to obtain the texture mapping face model of the character to be mapped.
The invention can divide the face grid into a front grid and side grids. When the front and side photos are spliced, the invention defines the boundary of the image splicing according to the feature points in the photos. In the three-dimensional face model the same feature points can be found, and an approximate dividing line is determined according to the connection relations of the model grid vertices; at this point the mesh of the three-dimensional face model is divided into three parts, as shown in fig. 11. The whole grid is unfolded by orthogonal projection: the front grid is projected onto a two-dimensional plane directly in front of the face, the two side grids are projected onto the perpendicular side planes, and the projection result is shown in fig. 12. The grids in the three planes are then spliced to form a complete unfolded face grid. The splicing of the planar grids follows the same scheme as that of the texture: the projections of the two side grids are affine-transformed according to the definition of the grid dividing line so that their projected dividing lines coincide with the projected dividing line of the front grid. Finally, the front and side grids are aligned along the dividing line to obtain a complete unfolded diagram of the face model grid, as shown in fig. 13.
Because the face grid is projected in the same way as the front and side photos and its splicing process uses the same affine transformation, the feature points in the unfolded grid coincide exactly with the feature points in the texture. The unfolded grid is aligned with the face texture map, and the two-dimensional coordinates of each grid vertex in texture space are its texture coordinates. After the texture coordinates of the vertices are determined, the DirectX 3D rendering environment automatically maps the texture in fig. 9 onto the surface of the personalized face model, finally yielding the face model shown in fig. 14.
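As a rough sketch of how texture coordinates could be assigned by such an orthogonal projection (only the projection and the normalization to [0, 1] texture space are shown; the affine alignment of the side projections with the front projection along the dividing lines, which the invention performs, is assumed to have been done already, and all names are illustrative):

```python
import numpy as np

def orthographic_uv(vertices, region, flip_v=True):
    """Project one mesh part orthogonally and normalize to [0, 1] texture space.

    vertices: (n, 3) vertex coordinates (x, y, z) of one mesh part.
    region:   'front' projects onto the x-y plane; 'left'/'right' onto the z-y planes.
    """
    if region == 'front':
        uv = vertices[:, [0, 1]].astype(float)          # (x, y)
    else:
        uv = vertices[:, [2, 1]].astype(float)          # (z, y)
        if region == 'left':
            uv[:, 0] = -uv[:, 0]                        # mirror so both sides read left-to-right
    lo = uv.min(axis=0)
    span = uv.max(axis=0) - lo + 1e-8
    u = (uv[:, 0] - lo[0]) / span[0]
    v = (uv[:, 1] - lo[1]) / span[1]
    if flip_v:
        v = 1.0 - v                                      # image origin at the top-left
    return np.stack([u, v], axis=1)
```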
According to the texture mapping method in the three-dimensional virtual human head and face modeling, a personalized human face model is constructed, a front view and a side view of the real human face of the person to be mapped are cut according to the positions of the characteristic points, and the offset and the offset angle of each flush area are calculated; miscut transformation is performed on the cut side view according to the offset and the offset angle to obtain a miscut-transformed side view; a Laplacian pyramid algorithm is introduced to fuse the front view after cutting, the side view after miscut transformation and the mirror image of the side view after miscut transformation, and the fused image is mapped into the personalized face model to obtain the texture mapping face model of the character to be mapped. The invention can effectively reduce texture cracks while keeping the data acquisition cost low, so that the mapped texture mapping face model is more vivid.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (10)

1. A texture mapping method in three-dimensional virtual human head and face modeling, comprising:
acquiring a front view and a side view of a real face of a person to be mapped;
marking the feature points of the front view and the side view, selecting the facial edge feature points in the front view and the side view after the feature points are marked, correcting the facial edge feature points in the front view and the side view after the feature points are marked, and cutting according to the positions of the corrected facial edge feature points to obtain the front view and the side view containing the facial edge feature points after cutting;
acquiring a head data set describing the proportion and the structure of the head of the real person;
constructing a virtual general face model by utilizing the head data set;
performing geometric deformation and characteristic point adaptation on the general face model based on the characteristic points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain a personalized face model;
calculating the offset and the offset angle of each flush area aiming at the flush areas of the front view after cutting and the side view after cutting;
performing miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut transformed side view;
calculating a Laplacian pyramid of mirror images of the front view after cutting, the side view after miscut transformation and the side view after miscut transformation;
splicing the cut front view, the side view after the miscut transformation and the Laplacian pyramid of the mirror image according to the dividing lines of each layer to obtain a spliced Laplacian pyramid;
reconstructing a Gaussian pyramid layer by layer according to the spliced Laplacian pyramid;
determining a fusion image according to the Gaussian pyramid;
mapping the fusion image into the personalized face model to obtain a texture mapping face model of the character to be mapped;
the Laplacian pyramid is constructed from the following formula:
LP_l = G_l − G_{l+1}*,  0 ≤ l < N;  LP_N = G_N
where LP_0, LP_1, …, LP_N constitute the Laplacian pyramid, G_{l+1}* represents the enlarged image obtained by interpolating and enlarging the (l+1)-th layer image, and the pixel size of G_l* is the same as that of G_{l−1};
the step of reconstructing the Gaussian pyramid layer by layer according to the spliced Laplacian pyramid comprises the following steps:
each layer in the Laplacian pyramids is spliced according to a dividing line and fused in a pixel weighted average mode near the dividing line of the spliced image, and the l-th layer Laplacian image after splicing is obtained as:
LP_l(total) = LP_l(left) + LP_l(front) + LP_l(right)
and according to the spliced Laplacian pyramid, starting from the topmost layer of the Laplacian pyramid and recursively evaluating layer by layer from top to bottom, reconstructing and recovering the corresponding Gaussian pyramid, wherein the reconstructed Gaussian pyramid is given by:
G_N = LP_N;  G_l = LP_l + Expand(G_{l+1}),  0 ≤ l < N
in the formula, the Expand operator represents interpolation and enlargement of an input image, namely G_l* = Expand(G_l); G_l represents the l-th layer Gaussian pyramid image, LP_l represents the l-th layer Laplacian pyramid image, N is the number of layers, and G_N represents the N-th layer Gaussian pyramid image;
the determining a fused image according to the gaussian pyramid includes:
taking G_0 as the zeroth layer of the Gaussian pyramid, performing low-pass filtering and interlaced downsampling by 2 on the original input image to obtain the first layer G_1 of the Gaussian pyramid; performing low-pass filtering and downsampling by 2 on the first-layer image to obtain the second-layer image G_2 of the Gaussian pyramid; repeating the above steps, the current-layer image obtained each time being 1/4 the size of the previous-layer image, so that G_0, G_1, …, G_l form the image sequence of the Gaussian pyramid, the l-th layer image G_l of the Gaussian pyramid being given by:
G_l(i, j) = Σ_{m=-2..2} Σ_{n=-2..2} w(m, n) · G_{l-1}(2i + m, 2j + n),  1 ≤ l ≤ N
where w(m, n) is the generating kernel;
and taking the image sequence G_0, G_1, …, G_l constituting the Gaussian pyramid as the fused image.
2. The texture mapping method of claim 1, wherein constructing a virtual generic face model using the head dataset comprises:
according to the data of the basic proportion and the structure of the head of the real person described in the head data set, a virtual general head-face model is built in 3DsMax and is refined;
wherein, the edge characteristic point transformation obviously selects a specific area,
the general head-face model is a three-dimensional grid patch model, and the three-dimensional grid patch model is expressed as:
M = {V_M, F_M, G_M}
where V_M represents the set of vertex coordinates of the three-dimensional mesh, F_M represents the index sets of vertices that make up each patch, and G_M represents other information including smoothing groups, texture links and texture references.
3. The texture mapping method according to claim 1, wherein the feature point labeling is performed on the front view and the side view, the facial edge feature points in the front view and the side view labeled with the feature points are selected, the facial edge feature points in the front view and the side view labeled with the feature points are corrected, clipping is performed according to the corrected facial edge feature point positions, and obtaining the front view and the side view containing the facial edge feature points after clipping includes:
the front view and the side view are subjected to contour line cutting, scaling in equal proportion and face alignment, and the front view and the side view after alignment are obtained;
wherein the front view and the side view after alignment are divided into a plurality of areas according to contour lines;
marking characteristic points in different areas of the front view and the side view after alignment;
selecting a first face edge characteristic point in the front view of the real person after the characteristic points are marked, and selecting a second face edge characteristic point in the side view;
correcting the first face edge feature point and the second face edge feature point;
taking the corrected first face edge characteristic points and the corrected second face edge characteristic points which are positioned in the flush area as matching characteristic points, and storing coordinate information of the matching characteristic points;
connecting each first face edge feature point in the front view to draw a first curve, and connecting each second face edge feature point in the side view to draw a second curve;
cutting the front view by taking the first curve and the mirror image curve of the first curve as the critical, reserving the middle part of the front view, cutting the left side area of the side view by taking the second curve as the critical, and obtaining the cut front view and the cut side view.
4. The texture mapping method according to claim 1, wherein the performing geometric deformation and feature point adaptation on the generic face model based on feature points marked in the front view and the side view by using a radial basis interpolation algorithm to obtain a personalized face model includes:
determining characteristic points in the general face model;
based on the feature points marked by the front view and the side view, the radial basis interpolation algorithm is utilized to locally deform the curved surface near the feature points of the general face model, and the personalized face model is obtained.
5. The texture mapping method of claim 3, wherein obtaining the front view and the side view after alignment according to contour cut, scale-up and face alignment comprises:
cutting the front view and the side view respectively to obtain a front view and a side view composed of different areas;
scaling the front view and the side view after cutting in equal proportion in the corresponding areas so that the front view and the side view have the same size in the same area;
and taking the top of the head, eyes, lips and chin as contour lines, and aligning the front view and the side view after the scaling in equal proportion to each other to obtain a front view and a side face image after the alignment.
6. The texture mapping method of claim 3, wherein the modifying the first and second face edge feature points comprises:
and correcting the first face edge characteristic point and the second face edge characteristic point by dragging so as to enable the first face edge characteristic point and the second face edge characteristic point to be positioned at the face edge.
7. The texture mapping method of claim 1, wherein,
the Laplacian pyramid is represented as:
LP_l = G_l − Expand(G_{l+1}),  0 ≤ l < N;  LP_N = G_N
the Laplacian image after each layer is stitched is expressed as:
LP_l(total) = LP_l(left) + LP_l(front) + LP_l(right)
the reconstructed Gaussian pyramid is expressed as:
G_N = LP_N;  G_l = LP_l + Expand(G_{l+1}),  0 ≤ l < N
wherein the Expand operator represents that the input image is interpolated and enlarged, G_l represents the l-th layer Gaussian pyramid image, LP_l represents the l-th layer Laplacian pyramid image, and N is the number of layers.
8. The texture mapping method according to claim 1, wherein the splicing the cut front view, the miscut transformed side view, and the laplacian pyramid of the mirror image according to the parting line of each layer, to obtain the spliced laplacian pyramid comprises:
the front view after cutting, the side view after miscut transformation and the Laplacian pyramid of the mirror image are spliced according to the dividing lines of all layers, fusion is carried out on the dividing lines in a pixel weighted average mode, and the Laplacian image after each layer of splicing is obtained;
and restoring the Laplacian pyramid from the spliced Laplacian image of each layer to obtain the spliced Laplacian pyramid.
9. The texture mapping method of claim 1, wherein mapping the fused image into the personalized face model comprises:
expanding the grids of the face personalized model in an orthogonal projection mode;
projecting the front grid of the personalized model of the human face to a two-dimensional plane right in front of the human face in the fused image;
and projecting the two side grids of the personalized face model onto the side planes of the fused image that are perpendicular to the front plane.
10. The texture mapping method according to claim 1, wherein after projecting the two side grids of the personalized face model onto the perpendicular side planes of the fused image, the texture mapping method further comprises:
and fusing the projected two lateral grids and the projected front grid according to the projection boundary to obtain the texture mapping face model of the character to be mapped.
CN202110984213.2A 2021-08-25 2021-08-25 Texture mapping method in three-dimensional virtual human head and face modeling Active CN113808272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110984213.2A CN113808272B (en) 2021-08-25 2021-08-25 Texture mapping method in three-dimensional virtual human head and face modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110984213.2A CN113808272B (en) 2021-08-25 2021-08-25 Texture mapping method in three-dimensional virtual human head and face modeling

Publications (2)

Publication Number Publication Date
CN113808272A CN113808272A (en) 2021-12-17
CN113808272B true CN113808272B (en) 2024-04-12

Family

ID=78894189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110984213.2A Active CN113808272B (en) 2021-08-25 2021-08-25 Texture mapping method in three-dimensional virtual human head and face modeling

Country Status (1)

Country Link
CN (1) CN113808272B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067041B (en) * 2022-01-14 2022-06-14 深圳大学 Material generation method and device of three-dimensional model, computer equipment and storage medium
CN115797556B (en) * 2022-11-22 2023-07-11 灵瞳智能科技(北京)有限公司 Virtual digital human face contour 3D reconstruction device


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100682889B1 (en) * 2003-08-29 2007-02-15 삼성전자주식회사 Method and Apparatus for image-based photorealistic 3D face modeling

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102222363A (en) * 2011-07-19 2011-10-19 杭州实时数码科技有限公司 Method for fast constructing high-accuracy personalized face model on basis of facial images
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
KR20140122401A (en) * 2013-04-10 2014-10-20 한국과학기술원 Method and apparatus for gernerating 3 dimension face image
WO2017029488A2 (en) * 2015-08-14 2017-02-23 Metail Limited Methods of generating personalized 3d head models or 3d body models
CN109685740A (en) * 2018-12-25 2019-04-26 努比亚技术有限公司 Method and device, mobile terminal and the computer readable storage medium of face normalization
WO2020244076A1 (en) * 2019-06-05 2020-12-10 平安科技(深圳)有限公司 Face recognition method and apparatus, and electronic device and storage medium
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image
CN110458752A (en) * 2019-07-18 2019-11-15 西北工业大学 A kind of image based under the conditions of partial occlusion is changed face method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Label Distribution-Based Facial Attractiveness Computation by Deep Residual Learning; Yangyu Fan et al.; IEEE Transactions on Multimedia; 2018-08-31; Vol. 20, No. 8 *
Yangyu Fan et al. Full Face-and-Head 3D Model With Photorealistic Texture. IEEE Access. 2020, Vol. 8 *
Automatic face photo transplantation based on multi-scale analysis; Huang Yanhui et al.; Application Research of Computers; 2017-11-30; Vol. 34, No. 11 *
Research on a web-based three-dimensional face reconstruction system; Shi Li; China Master's Theses Full-text Database, Information Science and Technology; 2013-11-15, No. 11 *
Hu Yongli et al. Three-dimensional face reconstruction method based on a morphable model and its improvement. Computer Engineering. 2005, Vol. 31, No. 19 *
Zhan Yongzhao; Hu Lingmin; Shen Rongrong. Generation of approximate full-view face images and mapping onto a specific face model. Journal of System Simulation. 2009, No. 3 *

Also Published As

Publication number Publication date
CN113808272A (en) 2021-12-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant