CN113808272A - Texture mapping method in three-dimensional virtual human head and face modeling - Google Patents
Texture mapping method in three-dimensional virtual human head and face modeling
- Publication number
- CN113808272A (application number CN202110984213.2A)
- Authority
- CN
- China
- Prior art keywords
- view
- face
- feature points
- model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000013507 mapping Methods 0.000 title claims abstract description 46
- 238000000034 method Methods 0.000 title claims abstract description 31
- 230000009466 transformation Effects 0.000 claims abstract description 35
- 210000003128 head Anatomy 0.000 claims description 31
- 230000004927 fusion Effects 0.000 claims description 13
- 230000006978 adaptation Effects 0.000 claims description 6
- 239000000463 material Substances 0.000 claims description 6
- 238000002372 labelling Methods 0.000 claims description 4
- 210000000887 face Anatomy 0.000 claims description 3
- 238000007670 refining Methods 0.000 claims description 2
- 238000010276 construction Methods 0.000 claims 1
- 230000001815 facial effect Effects 0.000 description 10
- 238000006073 displacement reaction Methods 0.000 description 7
- 238000006243 chemical reaction Methods 0.000 description 6
- 230000000694 effects Effects 0.000 description 4
- 238000000354 decomposition reaction Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 238000001914 filtration Methods 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 230000003321 amplification Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 210000005069 ears Anatomy 0.000 description 1
- 210000000744 eyelid Anatomy 0.000 description 1
- 238000007499 fusion processing Methods 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000003199 nucleic acid amplification method Methods 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
The invention provides a texture mapping method in three-dimensional virtual human head and face modeling, which comprises the steps of constructing a personalized human face model, cutting a front view and a side view of the real human face of a person to be mapped according to the positions of feature points, and calculating the offset and offset angle of each flush region; performing a miscut (shear) transformation on the cut side view according to the offset and offset angle to obtain a miscut-transformed side view; introducing a Laplacian pyramid algorithm to fuse the cut front view, the miscut-transformed side view and the mirror image of the miscut-transformed side view; and mapping the fused image onto the personalized face model to obtain a texture-mapped face model of the person to be mapped. The method effectively reduces texture cracks while keeping data acquisition costs low, so that the resulting texture-mapped face model is more realistic.
Description
Technical Field
The invention belongs to the technical field of image processing and computer graphics, and particularly relates to a texture mapping method in three-dimensional virtual human head and face modeling.
Background
Texture mapping is an important stage of three-dimensional virtual face modeling, and realistic three-dimensional virtual face reconstruction is based on good texture mapping, but the following problems exist in practice:
(1) geometric reconstruction and texture feature recovery of the human face from three-dimensional scanning data can yield an ideal face effect with high model precision, but the scanning equipment is expensive, complex to operate and difficult to popularize;
(2) when the front and side photos of a person are used to generate the head panorama texture map, the textures come from different photos, so seams appear due to color differences between the texture source pictures; regions with different texture sources therefore need color fusion so that the texture colors transition smoothly.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a texture mapping method in three-dimensional virtual human head and face modeling. The technical problem to be solved by the invention is realized by the following technical scheme:
the texture mapping method in the three-dimensional virtual human head and face modeling provided by the invention comprises the following steps:
acquiring a front view and a side view of a real face of a person to be mapped;
marking feature points of the front view and the side view, selecting face edge feature points in the front view and the side view after marking the feature points, correcting the face edge feature points in the front view and the side view after marking the feature points, and cutting according to the positions of the corrected face edge feature points to obtain a front view and a side view which contain the face edge feature points after cutting;
acquiring a head data set describing the proportion and the structure of the head of a real person;
constructing a virtual general human face model by using the head data set;
performing geometric deformation and feature point adaptation on the general face model based on the feature points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain an individualized face model;
calculating the offset and offset angle of each flush region according to the flush regions of the cut front view and the cut side view;
performing miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut transformed side view;
calculating a Laplacian pyramid of mirror images of the cut front view, the side view after the miscut transformation and the side view after the miscut transformation;
performing boundary splicing on the cut front view, the side view after the miscut transformation and the Laplacian pyramid of the mirror image according to each layer to obtain a spliced Laplacian pyramid;
reconstructing a Gaussian pyramid layer by layer according to the spliced Laplacian pyramid;
determining a fusion image according to the Gaussian pyramid;
and mapping the fusion image into the personalized human face model to obtain a texture mapping human face model of the person to be mapped.
Optionally, the constructing a virtual universal face model by using the head data set includes:
establishing a virtual general head and face model in 3DsMax and refining according to the data describing the basic proportion and structure of the head of the real person in the head data set;
wherein the edge feature point transformation explicitly captures a specific region;
wherein the general face model is a three-dimensional mesh patch model, and the three-dimensional mesh patch model is expressed as:
M = {V_M, F_M, G_M}
wherein V_M denotes the set of vertex coordinates of the three-dimensional mesh, F_M denotes the set of vertex indices that make up the patches, and G_M denotes other information, including smoothing groups, material links and material references.
Optionally, the feature point labeling is performed on the front view and the side view, the facial edge feature points in the front view and the side view after the feature point labeling are selected, the facial edge feature points in the front view and the side view after the feature point labeling are corrected, the cutting is performed according to the positions of the corrected facial edge feature points, and the front view and the side view which contain the facial edge feature points after the cutting are obtained include:
cutting the front view and the side view according to contour lines, scaling the front view and the side view in an equal proportion and aligning the faces to obtain an aligned front view and an aligned side view;
the front view and the side view after alignment are divided into a plurality of areas according to contour lines;
marking feature points in different areas of the front view and the side view after alignment;
selecting a first face edge feature point from the front view of the real person marked with the feature point, and selecting a second face edge feature point from the side view;
correcting the first face edge feature point and the second face edge feature point;
taking the corrected first face edge characteristic points and the corrected second face edge characteristic points which are positioned in the flush area as matching characteristic points, and storing coordinate information of the matching characteristic points;
connecting each first face edge feature point in the front view to draw a first curve, and connecting each second face edge feature point in the side view to draw a second curve;
and cutting the left and right areas of the front view by taking the first curve and the mirror image curve of the first curve as critical values, reserving the middle part of the front view, and cutting the left area of the side view by taking the second curve as a critical value, to obtain the cut front view and the cut side view.
Optionally, the obtaining of the personalized face model by using the radial basis interpolation algorithm and performing geometric deformation and feature point adaptation on the general face model based on the feature points labeled by the front view and the side view includes:
determining feature points in the general face model;
and based on the feature points marked by the front view and the side view, locally deforming the curved surface near the feature points of the general face model by using a radial basis interpolation algorithm to obtain an individualized face model.
Optionally, the obtaining the aligned front view and side view by cutting the front view and side view according to contour lines, scaling the front view and side view and aligning the face comprises:
respectively cutting the front view and the side view to obtain a front view and a side view which are composed of different areas;
scaling the cut front view and the cut side view in corresponding areas to enable the front view and the side view to be the same in size in the same area;
and taking the vertex, the eyes, the lips and the chin as contour lines, and aligning the front view and the side view after the equal scaling to obtain a front view and a side face image after the mutual alignment.
Optionally, the correcting the first face edge feature point and the second face edge feature point comprises:
and correcting the first face edge feature point and the second face edge feature point by dragging so as to enable the first face edge feature point and the second face edge feature point to be located at face edge positions.
Wherein the Laplacian pyramid is represented as:
LP_l = G_l − Expand(G_{l+1}), 0 ≤ l < N;  LP_N = G_N
the Laplacian image after each layer of splicing is represented as:
LP_l(total) = LP_l(left) + LP_l(front) + LP_l(right)
and the reconstructed Gaussian pyramid is represented as:
G_N = LP_N;  G_l = LP_l + Expand(G_{l+1}), 0 ≤ l < N
wherein the Expand operator denotes interpolating and enlarging the input image, G_l denotes the l-th layer image of the Gaussian pyramid, LP_l denotes the l-th layer Laplacian pyramid image, and N is the number of layers.
Optionally, the splicing the cut front view, the miscut transformed side view, and the laplacian pyramid of the mirror image according to the boundaries of the layers to obtain the spliced laplacian pyramid includes:
performing boundary splicing on the cut front view, the side view after the miscut transformation and the Laplacian pyramid of the mirror image according to each layer, and fusing on the boundaries in a pixel weighted average mode to obtain a Laplacian image after each layer of splicing;
and restoring the Laplacian pyramid from each layer of spliced Laplacian images to obtain the spliced Laplacian pyramid.
Optionally, mapping the fused image into the personalized face model includes:
expanding the grids of the human face personalized model according to an orthogonal projection mode;
projecting the front mesh of the human face personalized model to a two-dimensional plane in the fusion image in front of the human face;
and projecting two side grids of the human face personalized model to the vertical side plane of the fusion image.
Optionally, after projecting the two side meshes of the face personalized model to the perpendicular side planes of the fused image, the texture mapping method further includes:
and fusing the two projected side grids and the front grid according to a projection boundary to obtain a texture mapping face model of the person to be mapped.
The invention provides a texture mapping method in three-dimensional virtual human head and face modeling, which comprises the steps of constructing a personalized human face model, cutting a front view and a side view of the real human face of a person to be mapped according to the positions of feature points, and calculating the offset and offset angle of each flush region; performing a miscut transformation on the cut side view according to the offset and offset angle to obtain a miscut-transformed side view; introducing a Laplacian pyramid algorithm to fuse the cut front view, the miscut-transformed side view and the mirror image of the miscut-transformed side view; and mapping the fused image onto the personalized face model to obtain a texture-mapped face model of the person to be mapped. The method effectively reduces texture cracks while keeping data acquisition costs low, so that the resulting texture-mapped face model is more realistic.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a flow chart illustrating a texture mapping method in three-dimensional virtual human head-face modeling according to an embodiment of the present invention;
FIG. 2a is a front face and a side face of a real person labeled with feature points;
FIG. 2b is the result of the front side view preprocessing of the character;
FIG. 2c is a front side view feature point calibration result;
FIG. 2d is a front side view zone division;
FIG. 3a is a generic face geometry model and its mesh structure;
FIG. 3b is a diagram illustrating FDP points in the MPEG-4 standard;
FIG. 3c is a schematic representation of a feature point used in the present invention;
FIG. 4 is a personalized face model modeling test result;
FIG. 5 is an affine transformation illustration;
FIG. 6 is a comparison of the results of a miscut transform;
FIG. 7 is an image cropping and direct stitching result;
FIG. 8 is a diagram of a pyramid decomposition and fusion process for three images;
FIG. 9 is a flow chart of a Laplacian pyramid restored image;
FIG. 10 is a Laplacian pyramid based texture fusion result;
FIG. 11 is a projection decomposition of a personalized face model mesh;
FIG. 12 is a front side grid projected onto a two-dimensional plane;
FIG. 13 is a front side stitching result graph;
FIG. 14 is an image of a personalized face model with texture mapping added.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
As shown in fig. 1, the texture mapping method in three-dimensional virtual human head-face modeling provided by the present invention includes:
s1, acquiring a front view and a side view of the real face of the person to be mapped;
s2, marking feature points of the front view and the side view, selecting face edge feature points in the front view and the side view after marking the feature points, correcting the face edge feature points in the front view and the side view after marking the feature points, and cutting according to the positions of the corrected face edge feature points to obtain a front view and a side view which contain the face edge feature points after cutting;
as an alternative embodiment of the present invention, step S2 includes:
s21: cutting the front view and the side view according to contour lines, scaling the front view and the side view in an equal proportion and aligning the faces to obtain a front view and a side view after alignment;
the front view and the side view after alignment are divided into a plurality of areas according to contour lines;
step S21 includes:
step a, respectively cutting a front view and a side view to obtain a front view and a side view composed of different areas;
b, scaling the cut front view and the cut side view in the corresponding areas in equal proportion so as to enable the front view and the side view to be the same in size in the same area;
and c, taking the vertex, the eyes, the lips and the chin as contour lines, aligning the front view and the side view after equal scaling, and obtaining the front view and the side face image after mutual alignment.
The front and side views of the face are cropped and scaled in equal proportion so that the face region in each image is retained as far as possible and centered; meanwhile, the face information of the front and side views is aligned according to four main contour lines at the top of the head, the eyes, the lips and the chin, as shown in fig. 2b.
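For illustration only, the following sketch (not part of the claimed method) shows one way this equal-proportion scaling and contour-line alignment could be performed with OpenCV; the front_lines/side_lines dictionaries of contour-line y-coordinates are assumed inputs derived from the labeled feature points.

```python
import cv2

def align_side_to_front(front, side, front_lines, side_lines):
    """Scale the side view uniformly so its contour lines (top of head,
    eyes, lips, chin) fall on the same rows as in the front view.
    front_lines / side_lines are assumed dicts of y-coordinates,
    e.g. {"top": 40, "eyes": 210, "lips": 330, "chin": 400}."""
    f_span = front_lines["chin"] - front_lines["top"]
    s_span = side_lines["chin"] - side_lines["top"]
    scale = f_span / float(s_span)                     # equal-proportion factor
    h, w = side.shape[:2]
    side = cv2.resize(side, (int(round(w * scale)), int(round(h * scale))))
    # Shift vertically so the top-of-head lines coincide, then trim to the
    # front view's height so both images share the same row indices.
    offset = front_lines["top"] - int(round(side_lines["top"] * scale))
    if offset >= 0:
        side = cv2.copyMakeBorder(side, offset, 0, 0, 0, cv2.BORDER_REPLICATE)
    else:
        side = side[-offset:]
    return side[:front.shape[0]]
```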
Step S22: marking feature points in different areas of the front view and the side view after alignment;
FIG. 2a shows the front face and the side face of the real person after the feature points are marked. In this step, feature points of the human face are marked in the front view and the side view respectively. From the front view, the width and height information of a feature point can be obtained, recorded as (x, y_1); from the side photo, the height and depth information of the feature point can be obtained, recorded as (y_2, z). Finally, letting y = (y_1 + y_2)/2 gives the three-dimensional coordinates (x, y, z) of the personalized face feature point.
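The averaging of the two height estimates can be written as a small helper; the tuple layout of the labeled points below is an assumed convention, not prescribed by the method.

```python
def feature_point_3d(front_pt, side_pt):
    """front_pt = (x, y1) from the front view, side_pt = (y2, z) from the
    side view; the two height estimates are averaged as described above."""
    x, y1 = front_pt
    y2, z = side_pt
    return (x, (y1 + y2) / 2.0, z)
```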
Step S23: selecting a first face edge feature point from the front view of the real person marked with the feature point, and selecting a second face edge feature point from the side view;
respectively selecting a first face edge feature point of each area of the front view and a second face edge feature point of each area of the side view from the front view and the side view after the feature points are marked;
step S24: correcting the first face edge characteristic point and the second face edge characteristic point;
the invention can correct the first face edge feature point and the second face edge feature point by dragging so as to enable the first face edge feature point and the second face edge feature point to be located at face edge positions.
Step S25: taking the corrected first face edge feature point and the corrected second face edge feature point which are positioned in the flush area as matching feature points, and storing coordinate information of the matching feature points;
Defining the face edge feature points of the front and side views: in this step, 10 face edge feature points can be defined in the face region. By default they are distributed on the vertical bisector of the image, dividing the image into 9 horizontal equal-height regions. Draggable points are created and drawn on the image; each dragging point is selected and moved to a suitable position on the face edge, its coordinates are recorded, and the transformed feature points are aligned to the dragging points. The i-th (0 ≤ i ≤ 10) feature point is recorded as p_i(x_i, y_i), and the origin of coordinates may be taken at the center of the image. The face edge feature point sets of the front and side views are recorded as P_f and P_s respectively; the face edge feature points are marked with white dots as in fig. 2c, and finally the position information of the face edge feature points is saved.
Step S26: connecting each first face edge feature point in the front view to draw a first curve, and connecting each second face edge feature point in the side view to draw a second curve;
Step S27: cutting the left and right areas of the front view by taking the first curve and the mirror image curve of the first curve as critical values, reserving the middle part of the front view, and cutting the left area of the side view by taking the second curve as a critical value, to obtain the cut front view and the cut side view.
First, according to the defined face edge feature point sets P_f and P_s, the points in the front and side views are connected to draw feature curves, and the front and side views are divided into nine transverse regions according to the position information of the face edge feature points.
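As a minimal sketch of this region division, assuming the edge feature points are available as a list of (x, y) tuples, consecutive feature-point heights bound one transverse region each:

```python
def split_into_regions(image, edge_points):
    """Cut a view into the transverse regions bounded by consecutive face
    edge feature points (edge_points: assumed list of (x, y) tuples)."""
    ys = sorted(int(round(y)) for _, y in edge_points)
    return [image[y0:y1] for y0, y1 in zip(ys[:-1], ys[1:])]
```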
S3, acquiring a head data set describing the proportion and the structure of the head of the real person;
s4, constructing a virtual universal human face model by using the head data set;
in the step, a virtual general head-face model is established and refined in 3DsMax according to data describing the basic proportion and structure of the head of a real person in a head data set;
wherein the edge feature point transformation explicitly captures a specific region;
the three-dimensional mesh patch model is represented as:
M = {V_M, F_M, G_M}    (1)
wherein V_M denotes the set of vertex coordinates of the three-dimensional mesh, F_M denotes the set of vertex indices that make up the patches, and G_M denotes other information, including smoothing groups, material links and material references.
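A plain-data sketch of the M = {V_M, F_M, G_M} representation, assuming NumPy arrays for the vertex and index sets; the field names and types are illustrative only.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MeshPatchModel:
    """M = {V_M, F_M, G_M}: vertex coordinates, per-patch vertex indices,
    and auxiliary data (smoothing groups, material links/references)."""
    V: np.ndarray                          # (n, 3) vertex coordinates
    F: np.ndarray                          # (m, k) vertex indices per patch
    G: dict = field(default_factory=dict)  # smoothing groups, materials, ...
```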
The invention can obtain an original three-dimensional face mesh model by scanning a real face with a three-dimensional laser scanner, directly export the required head model from the model library of professional human-body modeling software, or manually build a three-dimensional face model with professional modeling software such as 3DS MAX and MAYA. In this step, a virtual human universal head model is established in 3DsMax; the universal head model is a three-dimensional mesh patch model, as shown in fig. 3a.
S5, performing geometric deformation and feature point adaptation on the universal face model based on the feature points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain an individualized face model;
as an alternative embodiment of the present invention, step S5 includes:
s51: determining feature points in the general face model;
referring to fig. 3b, fig. 3b is the FDP point in the MPEG-4 standard, the face feature points selected in the feature point adaptation of the present invention are based on the MPEG-4 standard, in which the FDP is related to the face geometry. In MPEG-4, a total of 84 FDP feature points are defined. These feature points are divided into 11 groups of cheeks, eyes, nose, mouth, ears, etc., and a general face model can be converted into a specific face model by the definition of these feature points. Referring to fig. 2b, fig. 2b redefines 153 facial feature points, as shown in fig. 3c, including 14 parts of eyes, eyelids, face contour, etc., with reference to the MPEG-4 standard for the facial feature points defined in the generic face model according to the present invention.
S52: and local deformation is carried out on the curved surface near the characteristic points of the universal face model based on the characteristic points marked by the front view and the side view by utilizing a radial basis interpolation algorithm to obtain the personalized face model.
In order to normalize the generic face and the personalized face in the photographs to the same coordinate space, an overall transformation of the generic face model is needed so that its size is substantially the same as that of the personalized face. Let the coordinates of any mesh vertex of the generic face model be V(Vx, Vy, Vz), take the center of the two eyes as the origin, denoted O(Ox, Oy, Oz), and denote the measured width, height and depth of the model by Lx, Ly and Lz, with lx, ly and lz the corresponding dimensions of the personalized face; the transformed new position V′(V′x, V′y, V′z) is then calculated by the following formula:
V′_i = (V_i − O_i)·l_i / L_i,  i = x, y, z
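A sketch of this per-axis normalization, assuming the mesh vertices are stored as an (n, 3) NumPy array:

```python
import numpy as np

def normalize_generic_mesh(vertices, eye_center, model_dims, target_dims):
    """V'_i = (V_i - O_i) * l_i / L_i for i = x, y, z.
    model_dims = (Lx, Ly, Lz) measured on the generic model,
    target_dims = (lx, ly, lz) measured on the personalized face."""
    V = np.asarray(vertices, dtype=float)            # (n, 3)
    O = np.asarray(eye_center, dtype=float)          # (3,), center of the two eyes
    scale = np.asarray(target_dims, float) / np.asarray(model_dims, float)
    return (V - O) * scale                           # broadcasts per axis
```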
radial Basic Function (RBF) is a deformation method based on spatial discrete data interpolation, and is used for approximation of multivariate functions in multidimensional space. The method first fits a continuous multivariate function by using a linear combination of basis functions. The radial basis function has a good fitting effect on irregular point clouds, and can generate a smooth surface in a three-dimensional space, so that the radial basis function is widely applied to reconstruction of a three-dimensional face model.
The radial basis interpolation algorithm is applied to the geometric deformation of the human face according to the following principle: the n feature points defined in the generic face model are known, as are the coordinates of all mesh vertices of the generic face model. Suppose a feature point moves from its original position p_i to a new position p′_i, with displacement Δp_i = p′_i − p_i; the displacement Δp of each non-feature point p can then be interpolated using the radial basis function. The displacements Δp_i of the feature points serve as the function values of the interpolation function, and the three-dimensional coordinates of the n feature points p_i serve as the observations; with the expression form of the interpolation function known, the parameter values of the interpolation function can be trained. Substituting a non-feature point into the interpolation function then yields the interpolation value F(p), which is the displacement Δp of that non-feature point.
Let the coordinates of the feature points (observation points) of the generic face model be P = {p_1, p_2, ..., p_n}, and the coordinates of the personalized face feature points obtained by data acquisition be P′ = {p′_1, p′_2, ..., p′_n}; the feature point displacements are then F = {Δp_1, Δp_2, ..., Δp_n}. The radial basis interpolation function takes the form:
f(p) = Σ_{i=1}^{n} c_i·φ(‖p − p_i‖) + M·p + t
where M·p + t is a low-order polynomial, here representing an affine transformation. To maintain the smoothness of the interpolation result, the following constraint conditions are imposed:
Σ_{i=1}^{n} c_i = 0,  Σ_{i=1}^{n} c_i·p_i^T = 0
From the determined feature point displacements, let Δp_k = f(p_k), 0 ≤ k ≤ n, namely:
Δp_k = Σ_{i=1}^{n} c_i·φ(‖p_k − p_i‖) + M·p_k + t
Combining these with the constraint conditions gives n + 4 equations, which are written in matrix form as:
| Φ    P   1 |   | C   |   | F |
| P^T  0   0 | · | M^T | = | 0 |
| 1^T  0   0 |   | t   |   | 0 |
where Φ_{j,i} = φ(‖p_j − p_i‖), 0 ≤ j, i ≤ n. The exponential function φ(r) = exp(−r/R) is selected as the basis function, and the parameter R is set to 64.
Solving this linear system gives the radial basis function coefficients c_i and the affine transformation components M and t. In three-dimensional space, c_i and t are three-dimensional row vectors and M is a 3 × 3 matrix.
Substituting the coordinates p of a non-feature point of the generic face model into the interpolation function expression gives the displacement Δp produced by the deformation of that non-feature point, so its deformed coordinates, i.e. the personalized face non-feature point coordinates, are p′ = p + Δp; the grid point coordinates of the personalized face model can thus be calculated. At this point, the personalized face mesh model has been obtained.
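The following is a compact NumPy sketch of this radial basis deformation under the stated choices (exponential kernel φ(r) = exp(−r/R) with R = 64, affine term M·p + t, and the two constraint sets); it is an illustrative implementation, not the patented code.

```python
import numpy as np

def rbf_deform(vertices, src_pts, dst_pts, R=64.0):
    """Move every mesh vertex by the displacement field interpolated from
    the feature points (src_pts -> dst_pts), using phi(r) = exp(-r/R)."""
    P = np.asarray(src_pts, float)                # (n, 3) generic-model feature points
    D = np.asarray(dst_pts, float) - P            # (n, 3) feature-point displacements
    n = len(P)
    phi = lambda r: np.exp(-r / R)
    dist = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)   # (n, n)

    # (n+4) x (n+4) system: RBF coefficients c_i plus the affine part M.p + t,
    # with the constraints sum(c_i) = 0 and sum(c_i * p_i^T) = 0.
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = phi(dist)
    A[:n, n:n + 3] = P
    A[:n, n + 3] = 1.0
    A[n:n + 3, :n] = P.T
    A[n + 3, :n] = 1.0
    b = np.zeros((n + 4, 3))
    b[:n] = D

    sol = np.linalg.solve(A, b)
    C, M, t = sol[:n], sol[n:n + 3], sol[n + 3]

    V = np.asarray(vertices, float)
    d = np.linalg.norm(V[:, None, :] - P[None, :, :], axis=-1)      # (v, n)
    return V + phi(d) @ C + V @ M + t                               # p' = p + delta_p
```

Each c_i and t come out as three-dimensional row vectors and M as a 3 × 3 matrix, matching the description above.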
By geometrically deforming the generic face model obtained in S4 with the radial basis interpolation algorithm, the invention realizes the personalization of the generic face; the obtained result expresses the facial features with good precision, and the test effect is shown in fig. 4. The figures input in this step have a front view size of 583px × 658px and a side view size of 640px × 658px.
S6, calculating the offset and offset angle of each flush area according to the flush areas of the cut front view and the cut side view;
First, after the front view is divided into regions, the horizontal shear offset Δx required for the i-th region of the side view is:
Δx = (x_{f,i+1} − x_{f,i}) − (x_{s,i+1} − x_{s,i})
wherein x_{f,i} and x_{f,i+1} are the horizontal coordinates of the upper and lower boundaries of the i-th region in the front view, and x_{s,i} and x_{s,i+1} are the horizontal coordinates of the upper and lower boundaries of the i-th region in the side view;
Next, the miscut angle θ is calculated. Since, in a horizontal miscut transformation, the offset is the same for every pixel of a given row, the miscut angle θ can be calculated by treating the boundary pixels of the image as a special case of the transformation, as shown in fig. 5. In the feature point transformation illustration, assuming that the starting points of the two vectors formed by the four feature points are temporarily translated so as to coincide, the miscut offset Δx simplifies to the lateral difference of the lower-boundary feature point positions; on the premise that Δx remains consistent, comparing the results before and after transforming the side view boundary pixels gives a miscut angle θ satisfying:
tan θ = (x′_{i+1,0} − x_{i+1,0}) / (y_{i+1,0} − y_{i,0})
The horizontal miscut matrix of the region is then constructed as:
| 1  tan θ |
| 0    1   |
s7, performing the miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut transformed side view;
In this step, the miscut processing is performed sequentially on each region of the side view, giving an output image whose feature curve is consistent with that of the front view; the comparison results are shown in fig. 6.
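A sketch of the per-region horizontal miscut using an OpenCV affine warp; the matched edge-feature-point x-coordinates on the region's upper and lower boundaries are assumed inputs, and the output-width padding and border handling are illustrative choices.

```python
import numpy as np
import cv2

def miscut_region(region, x_f_top, x_f_bot, x_s_top, x_s_bot):
    """Horizontal miscut (shear) of one flush region of the side view so
    its edge curve matches the front view (f: front view, s: side view)."""
    h, w = region.shape[:2]
    dx = (x_f_bot - x_f_top) - (x_s_bot - x_s_top)   # required shear offset
    tan_theta = dx / float(h)                        # shift grows linearly with the row
    # x' = x + y*tan(theta), plus a translation so negative shifts stay in frame
    H = np.float32([[1.0, tan_theta, max(-dx, 0.0)],
                    [0.0, 1.0,       0.0]])
    out_w = w + int(abs(round(dx))) + 1
    return cv2.warpAffine(region, H, (out_w, h), borderMode=cv2.BORDER_REPLICATE)
```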
S8, calculating a Laplacian pyramid of mirror images of the cut front view, the side view after the miscut transformation and the side view after the miscut transformation;
In this step, the Laplacian pyramids of the cut front view, the miscut-transformed side view and its mirror image are spliced layer by layer along their boundaries and fused at the boundaries by pixel-weighted averaging, giving the spliced Laplacian image of each layer; the cropping and direct stitching of the images is shown in fig. 7. The Laplacian pyramid is then restored from the spliced Laplacian images of each layer to obtain the spliced Laplacian pyramid.
S9, splicing the cut front view, the side view after the miscut transformation and the Laplacian pyramid of the mirror image according to boundaries of each layer to obtain a spliced Laplacian pyramid;
s10, reconstructing a Gaussian pyramid layer by layer according to the spliced Laplacian pyramid;
s11, determining a fused image according to the Gaussian pyramid;
Fig. 8 shows the process of obtaining a fused image by Laplacian pyramid fusion. For the three face texture images of the left side, the front and the right side, the Gaussian pyramid decomposition of each image is computed first. Let the original image be G_0 and take G_0 as the zeroth layer of the Gaussian pyramid. Low-pass filtering and alternate-row, alternate-column down-sampling by 2 of the original input image yields the first layer G_1 of the Gaussian pyramid; low-pass filtering and down-sampling by 2 of the first layer image yields the second layer image G_2 of the Gaussian pyramid; repeating this process, the size of each layer image is in turn 1/4 of the size of the previous layer. Thus G_0, G_1, ..., G_N form the image sequence of the Gaussian pyramid. The l-th layer image G_l of the Gaussian pyramid is given by the following formula:
G_l(i, j) = Σ_{m=−2}^{2} Σ_{n=−2}^{2} w(m, n)·G_{l−1}(2i + m, 2j + n)
Where w (m, n) is the generating kernel function, the present invention uses a 5 × 5 gaussian template.
The Laplacian pyramid is a prediction-residual pyramid. The prediction residual is the difference between the l-th layer image and the predicted image obtained by interpolating and enlarging the (l+1)-th layer image. After the Gaussian pyramid of the image is established, each layer image G_l is interpolated and enlarged to obtain an enlarged image G_l^*, whose pixel size is the same as that of G_{l−1}; that is, G_l is enlarged four times by interpolation, according to the following formula:
G_l^*(i, j) = 4·Σ_{m=−2}^{2} Σ_{n=−2}^{2} w(m, n)·G_l((i + m)/2, (j + n)/2)
The Laplacian pyramid is then constructed by the following formula:
LP_l = G_l − G_{l+1}^*, 0 ≤ l < N;  LP_N = G_N
The pyramid formed by LP_0, LP_1, ..., LP_N is the Laplacian pyramid; each layer image is the difference between the Gaussian pyramid image of that layer and the interpolated, enlarged image of the layer above it. After the Laplacian pyramid of each image is obtained, the left, front and right images are spliced along the boundary and fused near the boundary of the spliced image by pixel-weighted averaging; the spliced Laplacian image of the l-th layer is recorded as:
LP_l(total) = LP_l(left) + LP_l(front) + LP_l(right)
The corresponding Gaussian pyramid is then restored layer by layer from the spliced Laplacian pyramid. The formula for reconstructing the Gaussian pyramid is:
G_N = LP_N;  G_l = LP_l + Expand(G_{l+1}), 0 ≤ l < N
where the Expand operator denotes interpolating and enlarging the input image, i.e. G_l^* = Expand(G_l); G_l denotes the l-th layer image of the Gaussian pyramid, LP_l denotes the l-th layer Laplacian pyramid image, and N is the number of layers.
The invention can adopt a 4-layer Laplacian pyramid: starting from the topmost layer of the Laplacian pyramid, the recursion proceeds layer by layer downward to finally obtain the Gaussian pyramid of the spliced image, whose bottommost layer image G_0 is the final image after fusing the front and side photographs. The recursion flow and the fusion result are shown in fig. 9 and fig. 10.
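For illustration, the sketch below reproduces the fusion step with OpenCV's pyrDown/pyrUp standing in for the explicit 5 × 5 kernel formulas above; the three smooth per-source weight masks generalize the pixel-weighted averaging near the seams and are an assumption, as is the use of H × W × 3 color inputs.

```python
import numpy as np
import cv2

def gaussian_pyramid(img, levels=4):
    g = [img.astype(np.float32)]
    for _ in range(levels):
        g.append(cv2.pyrDown(g[-1]))                 # low-pass + downsample by 2
    return g

def laplacian_pyramid(img, levels=4):
    g = gaussian_pyramid(img, levels)
    lap = [g[l] - cv2.pyrUp(g[l + 1], dstsize=g[l].shape[1::-1])   # LP_l = G_l - Expand(G_{l+1})
           for l in range(levels)]
    lap.append(g[-1])                                # top layer LP_N = G_N
    return lap

def blend_three_views(left, front, right, masks, levels=4):
    """left/front/right: H x W x 3 texture strips; masks: three float
    weight images (same H x W, summing to 1 per pixel, smooth at the seams)."""
    lap = [laplacian_pyramid(v, levels) for v in (left, front, right)]
    w = [gaussian_pyramid(m, levels) for m in masks]
    fused = [sum(w[k][l][..., None] * lap[k][l] for k in range(3))
             for l in range(levels + 1)]             # per-layer weighted splice
    out = fused[-1]
    for l in range(levels - 1, -1, -1):              # G_l = LP_l + Expand(G_{l+1})
        out = cv2.pyrUp(out, dstsize=fused[l].shape[1::-1]) + fused[l]
    return np.clip(out, 0, 255).astype(np.uint8)     # bottom layer G_0: fused image
```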
And S12, mapping the fused image into a personalized human face model to obtain a texture mapping human face model of the person to be mapped.
As an alternative embodiment of the present invention, step S12 includes:
s121: expanding the grids of the human face personalized model according to an orthogonal projection mode;
s122: projecting the front mesh of the human face personalized model to a two-dimensional plane in the fused image in front of the human face;
s123: and projecting two side grids of the human face personalized model to the vertical side plane of the fused image.
The two projected side grids and the projected front grid are fused according to a projection boundary, and a texture mapping face model of the person to be mapped is obtained.
The invention can divide the face mesh into a front mesh and side meshes. When splicing the front and side photos, the invention defines the boundary of the image splicing according to the feature points in the photos. In the three-dimensional face model, the same feature points can be found, and an approximate boundary line is determined according to the connection relationship of the model mesh vertices. At this point, the mesh of the three-dimensional face model is divided into three parts, as shown in fig. 11. The whole mesh is unwrapped by orthogonal projection: the front mesh is projected onto a two-dimensional plane directly in front of the face, and the two side meshes are projected onto perpendicular side planes; the projection result is shown in fig. 12. The meshes in the three planes are then spliced into a complete unwrapped face mesh. The splicing of the planar meshes is the same as that of the texture: keeping the projection of the front mesh unchanged, the projections of the two side meshes undergo an affine transformation defined by the mesh boundary, so that their boundaries coincide with the boundary of the front mesh projection. Finally, the front and side meshes are aligned along the boundary, giving the complete unwrapped mesh of the face model, as shown in fig. 13.
Because the face mesh is projected in the same way as the front and side photos and undergoes the same affine transformation during splicing, the feature points in the unwrapped mesh coincide exactly with the feature points in the texture. The unwrapped mesh is therefore aligned with the face texture map, and the two-dimensional coordinates of each mesh vertex in texture space are its texture coordinates. After the texture coordinates of the vertices are determined, the DirectX 3D rendering environment automatically maps the fused texture of fig. 10 onto the surface of the personalized face model, and the final face model is obtained as shown in fig. 14.
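A sketch of the orthogonal unwrapping and texture-coordinate assignment; the per-vertex region labels ("front"/"left"/"right") and the rectangles that place each projection inside the fused texture are assumed inputs, not defined by the patent.

```python
import numpy as np

def orthogonal_uvs(vertices, labels, tex_w, tex_h, rects):
    """Front vertices drop z, side vertices drop x; each group is fitted
    into its own pixel rectangle (u0, v0, u1, v1) of the fused texture
    and converted to normalized (u, v) texture coordinates."""
    V = np.asarray(vertices, float)
    labels = np.asarray(labels)
    uv = np.zeros((len(V), 2))
    kept_axes = {"front": (0, 1), "left": (2, 1), "right": (2, 1)}   # axes kept by the projection
    for name, (a, b) in kept_axes.items():
        idx = labels == name
        if not np.any(idx):
            continue
        p = V[idx][:, [a, b]]
        lo, hi = p.min(axis=0), p.max(axis=0)
        norm = (p - lo) / np.maximum(hi - lo, 1e-9)                  # fit into [0, 1]^2
        u0, v0, u1, v1 = rects[name]
        uv[idx, 0] = (u0 + norm[:, 0] * (u1 - u0)) / tex_w
        uv[idx, 1] = (v0 + (1.0 - norm[:, 1]) * (v1 - v0)) / tex_h   # flip y to image rows
    return uv
```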
The invention provides a texture mapping method in three-dimensional virtual human head and face modeling, which comprises the steps of constructing an individualized human face model, cutting a front view and a side view of a real human face of a person to be mapped according to the position of a characteristic point, and calculating the offset and offset angle of each flush area; performing miscut transformation on the cut side view according to the offset and the offset angle to obtain a side view after the miscut transformation; and introducing a Laplacian pyramid algorithm to fuse the cut front view, the side view after the miscut transformation and the mirror image of the side view after the miscut transformation, and mapping the fused image into a personalized face model to obtain a texture mapping face model of the person to be mapped. The method can effectively reduce texture cracks under the condition of ensuring low data acquisition cost, so that the mapped texture mapping face model is more vivid.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions may be made without departing from the spirit of the invention, which should be construed as belonging to the scope of the invention.
Claims (10)
1. A texture mapping method in three-dimensional virtual human head and face modeling is characterized by comprising the following steps:
acquiring a front view and a side view of a real face of a person to be mapped;
marking feature points of the front view and the side view, selecting face edge feature points in the front view and the side view after marking the feature points, correcting the face edge feature points in the front view and the side view after marking the feature points, and cutting according to the positions of the corrected face edge feature points to obtain a front view and a side view which contain the face edge feature points after cutting;
acquiring a head data set describing the proportion and the structure of the head of a real person;
constructing a virtual general human face model by using the head data set;
performing geometric deformation and feature point adaptation on the general face model based on the feature points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain an individualized face model;
calculating the offset and offset angle of each flush area according to the flush areas of the cut front view and the cut side view;
performing miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut transformed side view;
calculating a Laplacian pyramid of mirror images of the cut front view, the side view after the miscut transformation and the side view after the miscut transformation;
performing boundary splicing on the cut front view, the side view after the miscut transformation and the Laplacian pyramid of the mirror image according to each layer to obtain a spliced Laplacian pyramid;
reconstructing a Gaussian pyramid layer by layer according to the spliced Laplacian pyramid;
determining a fusion image according to the Gaussian pyramid;
and mapping the fusion image into the personalized human face model to obtain a texture mapping human face model of the person to be mapped.
2. A texture mapping method as claimed in claim 1 wherein the construction of a virtual generic face model using the head data set comprises:
establishing a virtual general head and face model in 3DsMax and refining according to the data describing the basic proportion and structure of the head of the real person in the head data set;
wherein the edge feature point transformation explicitly captures a specific region;
wherein the general face model is a three-dimensional mesh patch model, and the three-dimensional mesh patch model is expressed as:
M = {V_M, F_M, G_M}
wherein V_M denotes the set of vertex coordinates of the three-dimensional mesh, F_M denotes the set of vertex indices that make up the patches, and G_M denotes other information, including smoothing groups, material links and material references.
3. The texture mapping method according to claim 1, wherein the step of labeling feature points of the front view and the side view, selecting face edge feature points in the front view and the side view after the feature points are labeled, correcting the face edge feature points in the front view and the side view after the feature points are labeled, and performing cutting according to the corrected face edge feature point positions to obtain the front view and the side view which contain the face edge feature points after the cutting comprises:
cutting the front view and the side view according to contour lines, scaling the front view and the side view in an equal proportion and aligning the faces to obtain an aligned front view and an aligned side view;
the front view and the side view after alignment are divided into a plurality of areas according to contour lines;
marking feature points in different areas of the front view and the side view after alignment;
selecting a first face edge feature point from the front view of the real person marked with the feature point, and selecting a second face edge feature point from the side view;
correcting the first face edge feature point and the second face edge feature point;
taking the corrected first face edge feature point and the corrected second face edge feature point which are positioned in the flush area as matching feature points, and storing coordinate information of the matching feature points;
connecting each first face edge feature point in the front view to draw a first curve, and connecting each second face edge feature point in the side view to draw a second curve;
and cutting the left and right areas of the front view by taking the first curve and the mirror image curve of the first curve as critical values, reserving the middle part of the front view, and cutting the left area of the side view by taking the second curve as a critical value, to obtain the cut front view and the cut side view.
4. The texture mapping method according to claim 1, wherein the obtaining of the personalized face model by using the radial basis interpolation algorithm and performing geometric deformation and feature point adaptation on the generic face model based on the feature points labeled by the front view and the side view comprises:
determining feature points in the general face model;
and based on the feature points marked by the front view and the side view, locally deforming the curved surface near the feature points of the general face model by using a radial basis interpolation algorithm to obtain an individualized face model.
5. A texture mapping method as claimed in claim 3, wherein the obtaining of the front view and the side view after alignment by contour cutting, scaling and face alignment comprises:
respectively cutting the front view and the side view to obtain a front view and a side view which are composed of different areas;
scaling the cut front view and the cut side view in corresponding areas to enable the front view and the side view to be the same in size in the same area;
and taking the vertex, the eyes, the lips and the chin as contour lines, and aligning the front view and the side view after the equal scaling to obtain a front view and a side face image after the mutual alignment.
6. The texture mapping method according to claim 3, wherein the correcting the first and second face edge feature points comprises:
and correcting the first face edge feature point and the second face edge feature point by dragging so as to enable the first face edge feature point and the second face edge feature point to be located at face edge positions.
7. The texture mapping method according to claim 1,
the Laplacian pyramid is represented as:
LP_l = G_l − Expand(G_{l+1}), 0 ≤ l < N;  LP_N = G_N
the Laplacian image after each layer of splicing is represented as:
LP_l(total) = LP_l(left) + LP_l(front) + LP_l(right),
and the reconstructed Gaussian pyramid is represented as:
G_N = LP_N;  G_l = LP_l + Expand(G_{l+1}), 0 ≤ l < N
wherein the Expand operator denotes interpolating and enlarging the input image, G_l denotes the l-th layer image of the Gaussian pyramid, LP_l denotes the l-th layer Laplacian pyramid image, and N is the number of layers.
8. The texture mapping method according to claim 1, wherein the obtaining a stitched laplacian pyramid by stitching boundaries of the layers of the cropped front view, the miscut transformed side view, and the laplacian pyramid of the mirror image comprises:
splicing the cut front view, the side view after the miscut transformation and the Laplacian pyramid of the mirror image according to boundaries of each layer, and fusing the boundary in a pixel weighted average mode to obtain a Laplacian image after splicing of each layer;
and restoring the Laplacian pyramid from each layer of spliced Laplacian images to obtain the spliced Laplacian pyramid.
9. The texture mapping method of claim 1, wherein mapping the fused image into the personalized face model comprises:
expanding the grids of the human face personalized model according to an orthogonal projection mode;
projecting the front mesh of the human face personalized model to a two-dimensional plane in the fusion image in front of the human face;
and projecting two side grids of the human face personalized model to the vertical side plane of the fusion image.
10. The texture mapping method according to claim 1, wherein after projecting two side meshes of the face personalized model to perpendicular side planes of the fused image, the texture mapping method further comprises:
and fusing the two projected side grids and the front grid according to a projection boundary to obtain a texture mapping face model of the person to be mapped.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110984213.2A CN113808272B (en) | 2021-08-25 | 2021-08-25 | Texture mapping method in three-dimensional virtual human head and face modeling |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110984213.2A CN113808272B (en) | 2021-08-25 | 2021-08-25 | Texture mapping method in three-dimensional virtual human head and face modeling |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113808272A true CN113808272A (en) | 2021-12-17 |
CN113808272B CN113808272B (en) | 2024-04-12 |
Family
ID=78894189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110984213.2A Active CN113808272B (en) | 2021-08-25 | 2021-08-25 | Texture mapping method in three-dimensional virtual human head and face modeling |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113808272B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067041A (en) * | 2022-01-14 | 2022-02-18 | 深圳大学 | Material generation method and device of three-dimensional model, computer equipment and storage medium |
CN115797556A (en) * | 2022-11-22 | 2023-03-14 | 灵瞳智能科技(北京)有限公司 | Virtual digital human face contour 3D reconstruction device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050063582A1 (en) * | 2003-08-29 | 2005-03-24 | Samsung Electronics Co., Ltd. | Method and apparatus for image-based photorealistic 3D face modeling |
CN102222363A (en) * | 2011-07-19 | 2011-10-19 | 杭州实时数码科技有限公司 | Method for fast constructing high-accuracy personalized face model on basis of facial images |
CN102663820A (en) * | 2012-04-28 | 2012-09-12 | 清华大学 | Three-dimensional head model reconstruction method |
KR20140122401A (en) * | 2013-04-10 | 2014-10-20 | 한국과학기술원 | Method and apparatus for gernerating 3 dimension face image |
WO2017029488A2 (en) * | 2015-08-14 | 2017-02-23 | Metail Limited | Methods of generating personalized 3d head models or 3d body models |
CN109685740A (en) * | 2018-12-25 | 2019-04-26 | 努比亚技术有限公司 | Method and device, mobile terminal and the computer readable storage medium of face normalization |
CN110443885A (en) * | 2019-07-18 | 2019-11-12 | 西北工业大学 | Three-dimensional number of people face model reconstruction method based on random facial image |
CN110458752A (en) * | 2019-07-18 | 2019-11-15 | 西北工业大学 | A kind of image based under the conditions of partial occlusion is changed face method |
WO2020244076A1 (en) * | 2019-06-05 | 2020-12-10 | 平安科技(深圳)有限公司 | Face recognition method and apparatus, and electronic device and storage medium |
-
2021
- 2021-08-25 CN CN202110984213.2A patent/CN113808272B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050063582A1 (en) * | 2003-08-29 | 2005-03-24 | Samsung Electronics Co., Ltd. | Method and apparatus for image-based photorealistic 3D face modeling |
CN102222363A (en) * | 2011-07-19 | 2011-10-19 | 杭州实时数码科技有限公司 | Method for fast constructing high-accuracy personalized face model on basis of facial images |
CN102663820A (en) * | 2012-04-28 | 2012-09-12 | 清华大学 | Three-dimensional head model reconstruction method |
KR20140122401A (en) * | 2013-04-10 | 2014-10-20 | 한국과학기술원 | Method and apparatus for gernerating 3 dimension face image |
WO2017029488A2 (en) * | 2015-08-14 | 2017-02-23 | Metail Limited | Methods of generating personalized 3d head models or 3d body models |
CN109685740A (en) * | 2018-12-25 | 2019-04-26 | 努比亚技术有限公司 | Method and device, mobile terminal and the computer readable storage medium of face normalization |
WO2020244076A1 (en) * | 2019-06-05 | 2020-12-10 | 平安科技(深圳)有限公司 | Face recognition method and apparatus, and electronic device and storage medium |
CN110443885A (en) * | 2019-07-18 | 2019-11-12 | 西北工业大学 | Three-dimensional number of people face model reconstruction method based on random facial image |
CN110458752A (en) * | 2019-07-18 | 2019-11-15 | 西北工业大学 | A kind of image based under the conditions of partial occlusion is changed face method |
Non-Patent Citations (6)
Title |
---|
YANGYU FAN ET AL.: "Label Distribution-Based Facial Attractiveness Computation by Deep Residual Learning", IEEE TRANSACTIONS ON MULTIMEDIA, vol. 20, no. 8, 31 August 2018 (2018-08-31) *
YANYU FAN ET AL.: "Full Face-and-Head 3D Model With Photorealistic Texture", IEEE ACCESS, vol. 8 *
SHI, LI: "Research on a Web-based Three-dimensional Face Reconstruction System", China Master's Theses Full-text Database, Information Science and Technology, no. 11, 15 November 2013 (2013-11-15) *
HU, YONGLI ET AL.: "Three-dimensional Face Reconstruction Method Based on a Morphable Model and Its Improvement", Computer Engineering, vol. 31, no. 19 *
ZHAN, YONGZHAO; HU, LINGMIN; SHEN, RONGRONG: "Approximate Full-view Face Image Generation and Mapping onto a Specific Face Model", Journal of System Simulation, no. 03 *
HUANG, YANHUI ET AL.: "Automatic Face Photo Transplantation Based on Multi-scale Analysis", Application Research of Computers, vol. 34, no. 11, 30 November 2017 (2017-11-30) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067041A (en) * | 2022-01-14 | 2022-02-18 | 深圳大学 | Material generation method and device of three-dimensional model, computer equipment and storage medium |
CN115797556A (en) * | 2022-11-22 | 2023-03-14 | 灵瞳智能科技(北京)有限公司 | Virtual digital human face contour 3D reconstruction device |
Also Published As
Publication number | Publication date |
---|---|
CN113808272B (en) | 2024-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5818773B2 (en) | Image processing apparatus, image processing method, and program | |
US20220046218A1 (en) | Disparity image stitching and visualization method based on multiple pairs of binocular cameras | |
JP3030485B2 (en) | Three-dimensional shape extraction method and apparatus | |
CN101916454B (en) | Method for reconstructing high-resolution human face based on grid deformation and continuous optimization | |
JP3954211B2 (en) | Method and apparatus for restoring shape and pattern in 3D scene | |
CN109859098A (en) | Facial image fusion method, device, computer equipment and readable storage medium storing program for executing | |
US20150178988A1 (en) | Method and a system for generating a realistic 3d reconstruction model for an object or being | |
US20050140670A1 (en) | Photogrammetric reconstruction of free-form objects with curvilinear structures | |
KR100327541B1 (en) | 3D facial modeling system and modeling method | |
US7076117B2 (en) | Methods and apparatus for cut-and-paste editing of multiresolution surfaces | |
CN113808272B (en) | Texture mapping method in three-dimensional virtual human head and face modeling | |
CN102663818A (en) | Method and device for establishing three-dimensional craniomaxillofacial morphology model | |
CN111462030A (en) | Multi-image fused stereoscopic set vision new angle construction drawing method | |
JP3467725B2 (en) | Image shadow removal method, image processing apparatus, and recording medium | |
CN109461197B (en) | Cloud real-time drawing optimization method based on spherical UV and re-projection | |
CN113989441B (en) | Automatic three-dimensional cartoon model generation method and system based on single face image | |
CN118247429A (en) | Air-ground cooperative rapid three-dimensional modeling method and system | |
CN113379899A (en) | Automatic extraction method for regional images of construction engineering working face | |
CN115082640B (en) | 3D face model texture reconstruction method and device based on single image | |
CN114049281B (en) | Wide-angle portrait photo distortion correction method based on self-adaptive grid | |
CN112561784B (en) | Image synthesis method, device, electronic equipment and storage medium | |
JP2000339465A (en) | Corresponding method for three-dimensional form | |
JP7251003B2 (en) | Face mesh deformation with fine wrinkles | |
JPH03138784A (en) | Reconstructing method and display method for three-dimensional model | |
KR20120118462A (en) | Concave surface modeling in image-based visual hull |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |