CN113808272A - Texture mapping method in three-dimensional virtual human head and face modeling - Google Patents

Texture mapping method in three-dimensional virtual human head and face modeling

Info

Publication number
CN113808272A
Authority
CN
China
Prior art keywords
view
face
feature point
feature points
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110984213.2A
Other languages
Chinese (zh)
Other versions
CN113808272B (en)
Inventor
樊养余
刘洋
马浩悦
李文星
郭哲
齐敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110984213.2A priority Critical patent/CN113808272B/en
Publication of CN113808272A publication Critical patent/CN113808272A/en
Application granted granted Critical
Publication of CN113808272B publication Critical patent/CN113808272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/02 - Affine transformations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/32 - Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a texture mapping method in three-dimensional virtual human head and face modeling, which comprises the steps of constructing a personalized face model; cutting the front view and the side view of the real face of the person to be mapped according to the positions of the feature points, and calculating the offset and offset angle of each flush (aligned) region; performing a miscut (shear) transformation on the cut side view according to the offset and offset angle to obtain a miscut-transformed side view; introducing a Laplacian pyramid algorithm to fuse the cut front view, the miscut-transformed side view and the mirror image of the miscut-transformed side view; and mapping the fused image into the personalized face model to obtain a texture-mapped face model of the person to be mapped. The method effectively reduces texture cracks while keeping data acquisition cost low, so that the resulting texture-mapped face model is more lifelike.

Description

Texture mapping method in three-dimensional virtual human head and face modeling
Technical Field
The invention belongs to the technical field of image processing and computer graphics, and particularly relates to a texture mapping method in three-dimensional virtual human head and face modeling.
Background
Texture mapping is an important stage of three-dimensional virtual face modeling, and realistic three-dimensional virtual face reconstruction is based on good texture mapping, but the following problems exist in practice:
(1) geometric reconstruction and texture feature recovery of the human face from three-dimensional scan data can achieve an ideal face effect with high model precision, but the scanning equipment is expensive, complex to operate and difficult to popularize;
(2) when the front and side photos of a person are used to generate a panoramic head texture map, the texture comes from different photos, so seams appear due to color differences between the texture source pictures; therefore, regions with different texture sources need to be color-fused so that the texture color transitions smoothly.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a texture mapping method in three-dimensional virtual human head and face modeling. The technical problem to be solved by the invention is realized by the following technical scheme:
the texture mapping method in the three-dimensional virtual human head and face modeling provided by the invention comprises the following steps:
acquiring a front view and a side view of a real face of a person to be mapped;
marking feature points of the front view and the side view, selecting face edge feature points in the front view and the side view after marking the feature points, correcting the face edge feature points in the front view and the side view after marking the feature points, and cutting according to the positions of the corrected face edge feature points to obtain a front view and a side view which contain the face edge feature points after cutting;
acquiring a head data set describing the proportion and the structure of the head of a real person;
constructing a virtual general human face model by using the head data set;
performing geometric deformation and feature point adaptation on the general face model based on the feature points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain an individualized face model;
calculating the offset and offset angle of each flush region according to the flush regions of the cut front view and the cut side view;
performing miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut transformed side view;
calculating the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image of the miscut-transformed side view;
performing boundary splicing, layer by layer, on the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image, to obtain a spliced Laplacian pyramid;
reconstructing a Gaussian pyramid layer by layer according to the spliced Laplacian pyramid;
determining a fusion image according to the Gaussian pyramid;
and mapping the fusion image into the personalized human face model to obtain a texture mapping human face model of the person to be mapped.
Optionally, the constructing a virtual universal face model by using the head data set includes:
establishing a virtual general head and face model in 3DsMax and refining according to the data describing the basic proportion and structure of the head of the real person in the head data set;
wherein specific regions in which the edge feature point variation is relatively obvious are selected;
wherein the general face model is a three-dimensional mesh patch model, and the three-dimensional mesh patch model is expressed as:
M = {V_M, F_M, G_M}
where V_M denotes the set of vertex coordinates of the three-dimensional mesh, F_M denotes the sets of vertex indices that make up the patches, and G_M denotes other information, including smoothing groups, material links and material references.
Optionally, the performing feature point labeling on the front view and the side view, selecting the facial edge feature points in the labeled front view and side view, correcting the facial edge feature points in the labeled front and side views, and cutting according to the corrected facial edge feature point positions to obtain a cut front view and a cut side view containing the facial edge feature points includes:
cutting the front view and the side view according to contour lines, scaling the front view and the side view in an equal proportion and aligning the faces to obtain an aligned front view and an aligned side view;
the front view and the side view after alignment are divided into a plurality of areas according to contour lines;
marking feature points in different areas of the front view and the side view after alignment;
selecting a first face edge feature point from the front view of the real person marked with the feature point, and selecting a second face edge feature point from the side view;
correcting the first face edge feature point and the second face edge feature point;
taking the corrected first face edge characteristic points and the corrected second face edge characteristic points which are positioned in the flush area as matching characteristic points, and storing coordinate information of the matching characteristic points;
connecting each first face edge feature point in the front view to draw a first curve, and connecting each second face edge feature point in the side view to draw a second curve;
and cutting the front view by taking the first curve and the mirror image of the first curve as boundaries so as to retain the middle part of the front view, and cutting away the left area of the side view by taking the second curve as the boundary, to obtain the cut front view and the cut side view.
Optionally, the obtaining of the personalized face model by using the radial basis interpolation algorithm and performing geometric deformation and feature point adaptation on the general face model based on the feature points labeled by the front view and the side view includes:
determining feature points in the general face model;
and based on the feature points marked by the front view and the side view, locally deforming the curved surface near the feature points of the general face model by using a radial basis interpolation algorithm to obtain an individualized face model.
Optionally, the obtaining the aligned front view and side view by cutting the front view and side view according to contour lines, scaling the front view and side view and aligning the face comprises:
respectively cutting the front view and the side view to obtain a front view and a side view which are composed of different areas;
scaling the cut front view and the cut side view in corresponding areas to enable the front view and the side view to be the same in size in the same area;
and taking the vertex, the eyes, the lips and the chin as contour lines, and aligning the front view and the side view after the equal scaling to obtain a front view and a side face image after the mutual alignment.
Optionally, the correcting the first and second facial edge feature points comprises:
and correcting the first face edge feature point and the second face edge feature point by dragging so as to enable the first face edge feature point and the second face edge feature point to be located at face edge positions.
Wherein the Laplacian pyramid is represented as:
LP_l = G_l - Expand(G_{l+1}),  0 ≤ l < N;   LP_N = G_N
the Laplacian image after each layer's stitching is represented as:
LP_l^(total) = LP_l^(left) + LP_l^(front) + LP_l^(right)
and the reconstructed Gaussian pyramid is represented as:
G_N = LP_N;   G_l = LP_l + Expand(G_{l+1}),  0 ≤ l < N
where the Expand operator denotes interpolating and enlarging the input image, G_l denotes the l-th layer image of the Gaussian pyramid, LP_l denotes the l-th layer Laplacian pyramid image, and N is the number of layers.
Optionally, the splicing of the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image along the dividing boundaries of each layer to obtain the spliced Laplacian pyramid includes:
performing boundary splicing, layer by layer, on the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image, and fusing on the boundaries by pixel-weighted averaging to obtain the spliced Laplacian image of each layer;
and restoring the Laplacian pyramid from each layer of spliced Laplacian images to obtain the spliced Laplacian pyramid.
Optionally, mapping the fused image into the personalized face model includes:
expanding the grids of the human face personalized model according to an orthogonal projection mode;
projecting the front mesh of the human face personalized model to a two-dimensional plane in the fusion image in front of the human face;
and projecting two side grids of the human face personalized model to the vertical side plane of the fusion image.
Optionally, after projecting the two side meshes of the face personalized model to the perpendicular side planes of the fused image, the texture mapping method further includes:
and fusing the two projected side grids and the front grid according to a projection boundary to obtain a texture mapping face model of the person to be mapped.
The invention provides a texture mapping method in three-dimensional virtual human head and face modeling, which comprises the steps of constructing a personalized face model; cutting the front view and the side view of the real face of the person to be mapped according to the positions of the feature points, and calculating the offset and offset angle of each flush region; performing a miscut transformation on the cut side view according to the offset and offset angle to obtain a miscut-transformed side view; introducing a Laplacian pyramid algorithm to fuse the cut front view, the miscut-transformed side view and the mirror image of the miscut-transformed side view; and mapping the fused image into the personalized face model to obtain a texture-mapped face model of the person to be mapped. The method effectively reduces texture cracks while keeping data acquisition cost low, so that the resulting texture-mapped face model is more lifelike.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a flow chart illustrating a texture mapping method in three-dimensional virtual human head-face modeling according to an embodiment of the present invention;
FIG. 2a is a front face and a side face of a real person labeled with feature points;
FIG. 2b is the result of the front side view preprocessing of the character;
FIG. 2c is a front side view feature point calibration result;
FIG. 2d is a front side view zone division;
FIG. 3a is a generic face geometry model and its mesh structure;
FIG. 3b is a diagram illustrating FDP points in the MPEG-4 standard;
FIG. 3c is a schematic representation of a feature point used in the present invention;
FIG. 4 is a personalized face model modeling test result;
FIG. 5 is an illustration of the miscut (shear) transformation;
FIG. 6 is a comparison of the results of a miscut transform;
FIG. 7 is an image cropping and direct stitching result;
FIG. 8 is a diagram of a pyramid decomposition and fusion process for three images;
FIG. 9 is a flow chart of a Laplacian pyramid restored image;
FIG. 10 is a Laplacian pyramid based texture fusion result;
FIG. 11 is a projection decomposition of a personalized face model mesh;
FIG. 12 is a front side grid projected onto a two-dimensional plane;
FIG. 13 is a front side stitching result graph;
FIG. 14 is an image of a personalized face model with texture mapping added.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
As shown in fig. 1, the texture mapping method in three-dimensional virtual human head-face modeling provided by the present invention includes:
s1, acquiring a front view and a side view of the real face of the person to be mapped;
s2, marking feature points of the front view and the side view, selecting face edge feature points in the front view and the side view after marking the feature points, correcting the face edge feature points in the front view and the side view after marking the feature points, and cutting according to the positions of the corrected face edge feature points to obtain a front view and a side view which contain the face edge feature points after cutting;
as an alternative embodiment of the present invention, step S2 includes:
s21: cutting the front view and the side view according to contour lines, scaling the front view and the side view in an equal proportion and aligning the faces to obtain a front view and a side view after alignment;
the front view and the side view after alignment are divided into a plurality of areas according to contour lines;
step S21 includes:
step a, respectively cutting a front view and a side view to obtain a front view and a side view composed of different areas;
b, scaling the cut front view and the cut side view in the corresponding areas in equal proportion so as to enable the front view and the side view to be the same in size in the same area;
and c, taking the vertex, the eyes, the lips and the chin as contour lines, aligning the front view and the side view after equal scaling, and obtaining the front view and the side face image after mutual alignment.
The front and side views of the face are cut and scaled proportionally so that the face-area information in the images is preserved as much as possible and centered; meanwhile, the face information of the front and side views is aligned according to the four main contour lines of the top of the head, the eyes, the lips and the chin, as shown in fig. 2b.
Step S22: marking feature points in different areas of the front view and the side view after alignment;
Fig. 2a shows the front face and the side face of the real person after the feature points are marked. In this step, facial feature points are marked in the front view and the side view respectively. From the front view, the width and height information of a feature point can be obtained, recorded as (x, y_1); from the side photo, the height and depth information of the feature point can be obtained, recorded as (y_2, z). Finally, letting y = (y_1 + y_2)/2 gives the three-dimensional coordinates (x, y, z) of the personalized face feature point.
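As an illustrative sketch of this step (Python with NumPy; the function name and array layout are assumptions, not something prescribed by the method itself), the front-view and side-view annotations can be combined into three-dimensional feature points exactly as described above:

```python
import numpy as np

def combine_feature_points(front_pts, side_pts):
    """Merge 2D annotations into 3D feature points.

    front_pts: (N, 2) array of (x, y1) read from the front view.
    side_pts:  (N, 2) array of (y2, z) read from the side view.
    Returns an (N, 3) array of (x, y, z) with y = (y1 + y2) / 2.
    """
    x = front_pts[:, 0]
    y = (front_pts[:, 1] + side_pts[:, 0]) / 2.0   # average the two height estimates
    z = side_pts[:, 1]
    return np.stack([x, y, z], axis=1)
```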
Step S23: selecting a first face edge feature point from the front view of the real person marked with the feature point, and selecting a second face edge feature point from the side view;
respectively selecting a first face edge feature point of each area of the front view and a second face edge feature point of each area of the side view from the front view and the side view after the feature points are marked;
step S24: correcting the first face edge characteristic point and the second face edge characteristic point;
the invention can correct the first face edge feature point and the second face edge feature point by dragging so as to enable the first face edge feature point and the second face edge feature point to be located at face edge positions.
Step S25: taking the corrected first face edge feature points and the corrected second face edge feature points located in the flush areas as matching feature points, and storing the coordinate information of the matching feature points;
Face edge feature points of the front and side views are defined. In this step, 10 face edge feature points can be defined in the face area; taking these points as units, they are by default distributed on the vertical bisector of the image and divide it into 9 horizontal equal-height areas. Draggable points are created and drawn on the image; each dragging point is selected, moved to a suitable position on the face edge, and its coordinates are recorded, so that the feature points are aligned to the dragging points. The i-th (0 ≤ i ≤ 10) feature point is recorded as p_i(x_i, y_i), and the coordinate origin may be at the center of the image. The face edge feature point sets of the front and side views are denoted P_f and P_s respectively; the face edge feature points are marked with white dots as in fig. 2c, and finally their position information is saved.
Step S26: connecting each first face edge feature point in the front view to draw a first curve, and connecting each second face edge feature point in the side view to draw a second curve;
step S27: and cutting the left area of the side view by taking the first curve and the mirror image curve of the first curve as a critical value, reserving the middle part of the front view, and cutting the left area of the side view by taking the second curve as a critical value to obtain the cut front view and the cut side view.
Firstly, according to the defined face edge feature point sets P_f and P_s, the points in the front and side views are connected to draw the feature curves, and the front and side views are divided into nine transverse areas according to the position information of the facial edge feature points.
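A hedged sketch of the curve-based cutting is given below (Python with OpenCV; the polygon closure along the right image border and the top-to-bottom point ordering are simplifying assumptions). The same routine can be applied with the second curve to cut away the left area of the side view, or twice (with the first curve and its mirror) to keep the middle part of the front view:

```python
import cv2
import numpy as np

def crop_right_of_curve(image, curve_pts):
    """Keep only the part of `image` lying to the right of the curve.

    curve_pts: list of (x, y) facial-edge feature points ordered from top to bottom.
    The polygon is closed along the right image border, so everything left of the
    drawn feature curve is masked out.
    """
    h, w = image.shape[:2]
    poly = np.array(list(curve_pts) + [(w - 1, h - 1), (w - 1, 0)], dtype=np.int32)
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [poly], 255)                # region to keep
    return cv2.bitwise_and(image, image, mask=mask)
```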
S3, acquiring a head data set describing the proportion and the structure of the head of the real person;
s4, constructing a virtual universal human face model by using the head data set;
in the step, a virtual general head-face model is established and refined in 3DsMax according to data describing the basic proportion and structure of the head of a real person in a head data set;
wherein specific regions in which the edge feature point variation is relatively obvious are selected.
The three-dimensional mesh patch model is represented as:
M = {V_M, F_M, G_M}    (1)
where V_M denotes the set of vertex coordinates of the three-dimensional mesh, F_M denotes the sets of vertex indices that make up the patches, and G_M denotes other information, including smoothing groups, material links and material references.
In the present invention, an original three-dimensional face mesh model can be obtained by scanning a real face with a three-dimensional laser scanner, or a required head model can be derived directly from the model library of professional human-body modeling software, or a three-dimensional face model can be built manually with professional modeling software such as 3DS MAX or MAYA. In this step, a virtual general human head model is established in 3DsMax; the general head model is a three-dimensional mesh patch model, as shown in fig. 3a.
S5, performing geometric deformation and feature point adaptation on the universal face model based on the feature points marked by the front view and the side view by using a radial basis interpolation algorithm to obtain an individualized face model;
as an alternative embodiment of the present invention, step S5 includes:
s51: determining feature points in the general face model;
referring to fig. 3b, fig. 3b is the FDP point in the MPEG-4 standard, the face feature points selected in the feature point adaptation of the present invention are based on the MPEG-4 standard, in which the FDP is related to the face geometry. In MPEG-4, a total of 84 FDP feature points are defined. These feature points are divided into 11 groups of cheeks, eyes, nose, mouth, ears, etc., and a general face model can be converted into a specific face model by the definition of these feature points. Referring to fig. 2b, fig. 2b redefines 153 facial feature points, as shown in fig. 3c, including 14 parts of eyes, eyelids, face contour, etc., with reference to the MPEG-4 standard for the facial feature points defined in the generic face model according to the present invention.
S52: and local deformation is carried out on the curved surface near the characteristic points of the universal face model based on the characteristic points marked by the front view and the side view by utilizing a radial basis interpolation algorithm to obtain the personalized face model.
In order to normalize the generic face and the personalized face in the photos to the same coordinate space, an overall transformation of the generic face model is needed so that its size is substantially the same as that of the personalized face. Let the coordinates of any mesh vertex of the generic face model be V(Vx, Vy, Vz), take the midpoint between the two eyes as the origin O(Ox, Oy, Oz), and denote the measured width, height and depth of the model by Lx, Ly and Lz; the transformed new position V'(V'x, V'y, V'z) is then computed by:
V'_i = (V_i - O_i) · l_i / L_i,  i = x, y, z.
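The following short sketch applies this overall scaling to every mesh vertex (Python with NumPy; since l_i is not defined explicitly in the text, it is assumed here to be the corresponding width/height/depth of the personalized face, and the function name is illustrative):

```python
import numpy as np

def normalize_generic_mesh(vertices, origin, generic_dims, personalized_dims):
    """Scale generic-model vertices toward the personalized face size.

    vertices:          (V, 3) mesh vertex coordinates of the generic model.
    origin:            midpoint between the two eyes, O = (Ox, Oy, Oz).
    generic_dims:      (Lx, Ly, Lz), width/height/depth measured on the generic model.
    personalized_dims: (lx, ly, lz), assumed corresponding dimensions of the personalized face.
    Implements V'_i = (V_i - O_i) * l_i / L_i for i = x, y, z.
    """
    verts = np.asarray(vertices, dtype=np.float64)
    scale = np.asarray(personalized_dims, dtype=np.float64) / np.asarray(generic_dims, dtype=np.float64)
    return (verts - np.asarray(origin, dtype=np.float64)) * scale
```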
radial Basic Function (RBF) is a deformation method based on spatial discrete data interpolation, and is used for approximation of multivariate functions in multidimensional space. The method first fits a continuous multivariate function by using a linear combination of basis functions. The radial basis function has a good fitting effect on irregular point clouds, and can generate a smooth surface in a three-dimensional space, so that the radial basis function is widely applied to reconstruction of a three-dimensional face model.
The radial basis interpolation algorithm is applied to the geometric deformation of the face as follows: the n feature points defined in the generic face model are known, as are the coordinates of all mesh vertices of the generic face model. Suppose a feature point moves from its original position p_i to a new position p'_i, with displacement Δp_i = p'_i - p_i. The displacement Δp of each non-feature point p can then be interpolated using the radial basis function. Here the displacements Δp_i of the feature points serve as the function values of the interpolation function, and the three-dimensional coordinates of the n feature points p_i serve as the observations; with the form of the interpolation function known, its parameters can be trained. Substituting a non-feature point into the interpolation function yields the function value F(p), which is the displacement Δp of that non-feature point.
Let the coordinates of the feature points (observation points) of the generic face model be P = {p_1, p_2, ..., p_n}, and the coordinates of the personalized face feature points obtained by data acquisition be P' = {p'_1, p'_2, ..., p'_n}; the feature point displacements are computed as F = {Δp_1, Δp_2, ..., Δp_n}. The radial basis function has the known form:
f(p) = Σ_{i=1}^{n} c_i φ(||p - p_i||) + Mp + t
where Mp + t is a low-order polynomial, here representing an affine transformation. To maintain smoothness of the interpolation result, the following constraint conditions are established:
Σ_{i=1}^{n} c_i = 0,   Σ_{i=1}^{n} c_i p_i^T = 0
Based on the determined feature point displacements, let Δp_k = f(p_k), 1 ≤ k ≤ n, namely:
Δp_k = Σ_{i=1}^{n} c_i φ(||p_k - p_i||) + M p_k + t,  1 ≤ k ≤ n
Together with the constraint conditions, this yields n + 4 equations. The system can be written in matrix form as
[ Φ    P    1 ] [ C   ]   [ F ]
[ P^T  0    0 ] [ M^T ] = [ 0 ]
[ 1^T  0    0 ] [ t   ]   [ 0 ]
where Φ is the n × n matrix of basis-function values, P is the n × 3 matrix of feature point coordinates, 1 is an n-dimensional column of ones, C stacks the coefficients c_i and F stacks the displacements Δp_i.
Here Φ_{j,i} = φ(||p_j - p_i||), 1 ≤ j, i ≤ n. For the basis function, the exponential function φ(r) = exp(-r/R) is selected, and the parameter R is set to 64.
Solving this linear system yields the radial basis function coefficients c_i and the affine transformation components M and t. In three-dimensional space, c_i and t are three-dimensional row vectors and M is a 3 × 3 matrix.
Substituting the non-feature point coordinates p of the generic face model into the interpolation function gives the displacement Δp produced by the deformation, so that the deformed coordinates, i.e. the personalized face non-feature point coordinates p' = p + Δp, and hence the mesh point coordinates of the personalized face model, can be computed. At this point, the personalized face mesh model has been obtained.
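A compact sketch of this radial basis interpolation step is given below (Python with NumPy; function and variable names are illustrative). It assembles the (n + 4) × (n + 4) system described above with φ(r) = exp(-r/R) and R = 64, solves for c_i, M and t, and applies the resulting displacement field to all mesh vertices:

```python
import numpy as np

def rbf_deform(vertices, src_pts, dst_pts, R=64.0):
    """Deform mesh vertices so that src_pts (generic feature points) move to dst_pts.

    vertices: (V, 3) generic-model vertex coordinates.
    src_pts:  (n, 3) feature points on the generic model.
    dst_pts:  (n, 3) corresponding personalized feature points.
    Uses f(p) = sum_i c_i * phi(||p - p_i||) + M p + t with phi(r) = exp(-r / R).
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    n = src.shape[0]
    F = dst - src                                   # feature-point displacements

    phi = lambda r: np.exp(-r / R)
    dists = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    Phi = phi(dists)                                # (n, n) basis-function matrix

    # (n + 4) x (n + 4) system: n interpolation conditions plus the side constraints
    # sum_i c_i = 0 and sum_i c_i * p_i = 0 that keep the affine part well defined.
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = Phi
    A[:n, n:n + 3] = src
    A[:n, n + 3] = 1.0
    A[n:n + 3, :n] = src.T
    A[n + 3, :n] = 1.0

    B = np.zeros((n + 4, 3))
    B[:n] = F

    sol = np.linalg.solve(A, B)
    C, M, t = sol[:n], sol[n:n + 3], sol[n + 3]     # coefficients, affine matrix, translation

    verts = np.asarray(vertices, float)
    d = np.linalg.norm(verts[:, None, :] - src[None, :, :], axis=2)
    disp = phi(d) @ C + verts @ M + t               # displacement of every vertex
    return verts + disp
```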
The invention personalizes the generic face by geometrically deforming the generic face model obtained in S4 with the radial basis interpolation algorithm; the result expresses the facial features with good precision, and the test effect is shown in fig. 4. The figure input in this step has a front view of size 583 px × 658 px and a side view of size 640 px × 658 px.
S6, calculating the offset and offset angle of each flush area according to the flush areas of the cut front view and the cut side view;
first, after the forward view is divided into regions, the required horizontal shear offset Δ x for the ith region in the side view is:
Δx = (x_{f,i+1} - x_{f,i}) - (x_{s,i+1} - x_{s,i})
where x_{f,i} and x_{f,i+1} are the horizontal coordinates of the upper and lower boundaries of the i-th region in the front view, and x_{s,i} and x_{s,i+1} are the horizontal coordinates of the upper and lower boundaries of the i-th region in the side view.
Further, the miscut angle θ is calculated. Since, in a horizontal miscut transformation, every pixel of a given row is shifted by the same amount, the boundary pixels of the image can be used as a special case to compute θ, as shown in fig. 5. In the feature point conversion illustration, assuming the start points of the two vectors formed by the four feature points are temporarily translated to coincide, the miscut offset Δx simplifies to the lateral difference of the lower-boundary feature point positions; on the premise that Δx is kept consistent, comparing the side-view boundary pixels before and after the transformation yields a miscut angle θ satisfying:
tan θ = (x'_{i+1,0} - x_{i+1,0}) / (y_{i+1,0} - y_{i,0})
The horizontal miscut matrix of the region is then constructed as:
[ 1   tan θ   0 ]
[ 0     1     0 ]
[ 0     0     1 ]
s7, performing the miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut transformed side view;
in this step, the miscut processing is sequentially performed for each region of the side view, and an output image with a characteristic curve consistent with that of the front view is obtained, and the comparison result is shown in fig. 6.
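The per-region horizontal miscut can be sketched as follows (Python with OpenCV; the strip bookkeeping is simplified and, unlike the full method, consecutive strips are not additionally offset to keep the edge curve continuous, so this only illustrates the Δx and tan θ formulas given above):

```python
import cv2
import numpy as np

def shear_side_view(side_img, front_edge_pts, side_edge_pts):
    """Horizontally shear each horizontal strip of the side view so that its
    facial-edge curve better matches the front view's.

    front_edge_pts, side_edge_pts: lists of (x, y) edge feature points ordered
    top to bottom; consecutive points bound one strip (flush region).
    """
    out = np.zeros_like(side_img)
    for i in range(len(side_edge_pts) - 1):
        (xf0, _), (xf1, _) = front_edge_pts[i], front_edge_pts[i + 1]
        (xs0, y0), (xs1, y1) = side_edge_pts[i], side_edge_pts[i + 1]
        dx = (xf1 - xf0) - (xs1 - xs0)              # required horizontal shear offset
        tan_theta = dx / float(y1 - y0)             # shear angle from the strip height

        y0i, y1i = int(round(y0)), int(round(y1))
        strip = side_img[y0i:y1i]
        # forward map: x' = x + y_local * tan(theta), y unchanged
        M = np.float32([[1.0, tan_theta, 0.0],
                        [0.0, 1.0, 0.0]])
        out[y0i:y1i] = cv2.warpAffine(strip, M, (strip.shape[1], strip.shape[0]))
    return out
```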
S8, calculating the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image of the miscut-transformed side view;
in this step, the Laplacian pyramids of the cut front view, the miscut-transformed side view and its mirror image are spliced along the dividing boundaries of each layer and fused on the boundaries by pixel-weighted averaging to obtain the spliced Laplacian image of each layer; the result of direct cropping and stitching is shown in fig. 7. The Laplacian pyramid is then restored from the spliced Laplacian images of the layers to obtain the spliced Laplacian pyramid.
S9, splicing the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image along the dividing boundaries of each layer to obtain a spliced Laplacian pyramid;
s10, reconstructing a Gaussian pyramid layer by layer according to the spliced Laplacian pyramid;
s11, determining a fused image according to the Gaussian pyramid;
Fig. 8 shows the process of obtaining the fused image by Laplacian pyramid fusion. For the three face texture images (left, front and right), the Gaussian pyramid decomposition of each image is computed first. Let the original image be G_0, taken as the zeroth layer of the Gaussian pyramid. Low-pass filtering the original input image and down-sampling it by 2 in alternate rows and columns gives the first layer G_1 of the Gaussian pyramid; low-pass filtering and down-sampling the first-layer image by 2 gives the second layer G_2; repeating this process, each layer is in turn 1/4 the size of the previous layer. The sequence G_0, G_1, ..., G_N forms the Gaussian pyramid, and the l-th layer image G_l is given by the following formula.
G_l(i, j) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} w(m, n) · G_{l-1}(2i + m, 2j + n),   1 ≤ l ≤ N
where w(m, n) is the generating kernel function; the present invention uses a 5 × 5 Gaussian template.
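A minimal sketch of this Gaussian pyramid construction follows (Python with OpenCV; cv2.pyrDown performs the 5 × 5 Gaussian low-pass filtering and the factor-of-2 subsampling in a single call):

```python
import cv2

def gaussian_pyramid(image, levels):
    """Build a Gaussian pyramid G_0 ... G_levels.

    Each cv2.pyrDown call low-pass filters with a 5x5 Gaussian template and
    drops every other row and column, so each layer is 1/4 the area of the
    previous one, matching the construction described above.
    """
    pyramid = [image]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid
```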
The Laplacian pyramid is a residual prediction pyramid. The prediction residual is the difference between the l-th layer image and the image predicted by interpolating and enlarging the (l+1)-th layer image. After the Gaussian pyramid of an image is established, each layer image G_l is interpolated and enlarged to obtain an enlarged image G_l* whose pixel size equals that of G_{l-1}, i.e. G_l is enlarged four times by interpolation:
G_l*(i, j) = 4 Σ_{m=-2}^{2} Σ_{n=-2}^{2} w(m, n) · G_l((i + m)/2, (j + n)/2),   1 ≤ l ≤ N
where only the terms whose coordinates (i + m)/2 and (j + n)/2 are integers contribute to the sum.
the Laplacian pyramid is constructed by the following formula
Figure BDA0003230019230000155
The pyramid built from LP_0, LP_1, ..., LP_N is the Laplacian pyramid; each layer image is the difference between the Gaussian pyramid image of that layer and the interpolated and enlarged image of the next layer. After the Laplacian pyramid of each image is obtained, the left, front and right images are spliced along the boundaries and fused near the boundaries of the spliced image by pixel-weighted averaging; the spliced Laplacian image of the l-th layer is recorded as:
LP_l^(total) = LP_l^(left) + LP_l^(front) + LP_l^(right)
The corresponding Gaussian pyramid is then restored layer by layer from the spliced Laplacian pyramid. The reconstruction formula is:
G_N = LP_N;   G_l = LP_l + Expand(G_{l+1}),  0 ≤ l < N
where the Expand operator denotes interpolating and enlarging the input image, i.e. G_l* = Expand(G_l); G_l denotes the l-th layer image of the Gaussian pyramid, LP_l denotes the l-th layer Laplacian pyramid image, and N is the number of layers.
The invention can adopt a 4-layer Laplacian pyramid and recurse layer by layer from the topmost layer downward, finally obtaining the Gaussian pyramid of the spliced image; the bottommost image G_0 of this Gaussian pyramid is the final image after fusing the front and side photos. The recursion flow and the fusion result are shown in fig. 9 and fig. 10.
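An end-to-end sketch of this fusion is given below (Python with OpenCV/NumPy; instead of splicing along the facial-edge curves, this simplified variant uses per-image weight masks whose soft ramps near the stitching boundaries play the role of the pixel-weighted averaging described above; the images are assumed to be 8-bit and already placed on a common canvas):

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Laplacian pyramid LP_0..LP_levels; the top entry is the smallest Gaussian layer."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for l in range(levels):
        up = cv2.pyrUp(gp[l + 1], dstsize=(gp[l].shape[1], gp[l].shape[0]))
        lp.append(gp[l] - up)                      # LP_l = G_l - Expand(G_{l+1})
    lp.append(gp[-1])                              # LP_N = G_N
    return lp

def blend_textures(images, masks, levels=4):
    """Fuse same-size texture images (e.g. left / front / right on one canvas).

    masks: per-image float masks in [0, 1] that sum to 1 everywhere; soft ramps
    around each stitching boundary give the weighted averaging near the seams.
    """
    lps = [laplacian_pyramid(im, levels) for im in images]
    # Gaussian pyramids of the masks supply per-level blending weights.
    mps = []
    for m in masks:
        gp = [m.astype(np.float32)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        mps.append(gp)

    fused = []
    for l in range(levels + 1):
        layer = np.zeros_like(lps[0][l])
        for lp, mp in zip(lps, mps):
            w = mp[l] if lp[l].ndim == 2 else mp[l][..., None]
            layer += lp[l] * w                     # spliced Laplacian layer LP_l(total)
        fused.append(layer)

    # Reconstruct: G_N = LP_N, then G_l = LP_l + Expand(G_{l+1}) down to G_0.
    out = fused[-1]
    for l in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(fused[l].shape[1], fused[l].shape[0])) + fused[l]
    return np.clip(out, 0, 255).astype(np.uint8)
```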
And S12, mapping the fused image into a personalized human face model to obtain a texture mapping human face model of the person to be mapped.
As an alternative embodiment of the present invention, step S12 includes:
s121: expanding the grids of the human face personalized model according to an orthogonal projection mode;
s122: projecting the front mesh of the human face personalized model to a two-dimensional plane in the fused image in front of the human face;
s123: and projecting two side grids of the human face personalized model to the vertical side plane of the fused image.
The two projected side grids and the projected front grid are fused according to a projection boundary, and a texture mapping face model of the person to be mapped is obtained.
The invention divides the face mesh into a front mesh and side meshes. When splicing the front and side photos, the boundary of image splicing is defined according to the feature points in the photos. In the three-dimensional face model, the same feature points can be found, and an approximate boundary line is determined according to the connectivity of the model mesh vertices. The mesh of the three-dimensional face model is thus divided into three parts, as shown in fig. 11. The whole mesh is unfolded by orthogonal projection: the front mesh is projected onto a two-dimensional plane directly in front of the face, and the two side meshes are projected onto the perpendicular side planes; the projection result is shown in fig. 12. The meshes in the three planes are then spliced into a complete unfolded face mesh. The planar meshes are spliced in the same way as the textures: the projection of the front mesh is kept unchanged, and affine transformations are applied to the projections of the two side meshes according to the defined mesh boundary so that their boundaries coincide with the boundary of the front mesh projection. Finally, the front and side meshes are aligned along the boundary, and a complete unfolded face model mesh is obtained, as shown in fig. 13.
Because the face mesh is projected in the same way as the front and side photos, and the same affine transformations are applied during splicing, the feature points in the unfolded mesh coincide exactly with the feature points in the texture. The unfolded mesh is therefore aligned with the face texture map, and the two-dimensional coordinates of each vertex of the mesh in texture space are its texture coordinates. After the texture coordinates of the vertices are determined, the DirectX 3D rendering environment automatically maps the fused texture to the surface of the personalized face model, and the final face model is obtained as shown in fig. 14.
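A hedged sketch of assigning texture coordinates by orthogonal projection is shown below (Python with NumPy; the region labels, the bounding boxes of each texture region and the per-region normalization are assumptions standing in for the boundary-based affine alignment of the full method):

```python
import numpy as np

def assign_uv(vertices, region_labels, tex_w, tex_h, front_box, left_box, right_box):
    """Assign texture coordinates by orthogonal projection.

    vertices:      (V, 3) personalized-model vertices (x right, y up, z toward viewer).
    region_labels: per-vertex labels 'front', 'left' or 'right', from the mesh split
                   along the same boundary as the texture stitching.
    *_box:         (u0, v0, u1, v1) pixel rectangles of each region in the fused texture.
    """
    verts = np.asarray(vertices, dtype=np.float64)
    labels = np.asarray(region_labels)
    uv = np.zeros((len(verts), 2), dtype=np.float32)
    boxes = {'front': front_box, 'left': left_box, 'right': right_box}
    for name, box in boxes.items():
        idx = np.where(labels == name)[0]
        if idx.size == 0:
            continue
        v = verts[idx]
        # Front mesh projects onto the x-y plane; side meshes onto the z-y plane.
        plane = v[:, [0, 1]] if name == 'front' else v[:, [2, 1]]
        lo, hi = plane.min(axis=0), plane.max(axis=0)
        norm = (plane - lo) / np.maximum(hi - lo, 1e-8)
        u0, v0, u1, v1 = box
        uv[idx, 0] = (u0 + norm[:, 0] * (u1 - u0)) / tex_w
        # image v grows downward, so flip the normalized y
        uv[idx, 1] = (v0 + (1.0 - norm[:, 1]) * (v1 - v0)) / tex_h
    return uv
```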
The invention provides a texture mapping method in three-dimensional virtual human head and face modeling, which comprises the steps of constructing an individualized human face model, cutting a front view and a side view of a real human face of a person to be mapped according to the position of a characteristic point, and calculating the offset and offset angle of each flush area; performing miscut transformation on the cut side view according to the offset and the offset angle to obtain a side view after the miscut transformation; and introducing a Laplacian pyramid algorithm to fuse the cut front view, the side view after the miscut transformation and the mirror image of the side view after the miscut transformation, and mapping the fused image into a personalized face model to obtain a texture mapping face model of the person to be mapped. The method can effectively reduce texture cracks under the condition of ensuring low data acquisition cost, so that the mapped texture mapping face model is more vivid.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions may be made without departing from the spirit of the invention, which should be construed as belonging to the scope of the invention.

Claims (10)

1. A texture mapping method in three-dimensional virtual human head and face modeling, characterized by comprising:
acquiring a front view and a side view of the real face of a person to be mapped;
performing feature point labeling on the front view and the side view, selecting facial edge feature points in the labeled front view and side view, correcting the facial edge feature points in the labeled front and side views, and cutting according to the corrected facial edge feature point positions to obtain a cut front view and a cut side view containing the facial edge feature points;
acquiring a head data set describing the proportion and structure of a real person's head;
constructing a virtual general face model by using the head data set;
performing geometric deformation and feature point adaptation on the general face model based on the feature points labeled in the front view and the side view, using a radial basis interpolation algorithm, to obtain a personalized face model;
for the flush regions of the cut front view and the cut side view, calculating the offset and offset angle of each flush region;
performing a miscut transformation on the cut side view according to the offset and the offset angle to obtain a miscut-transformed side view;
calculating the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image of the miscut-transformed side view;
splicing the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image along the dividing boundaries of each layer to obtain a spliced Laplacian pyramid;
reconstructing a Gaussian pyramid layer by layer from the spliced Laplacian pyramid;
determining a fused image according to the Gaussian pyramid;
and mapping the fused image into the personalized face model to obtain a texture-mapped face model of the person to be mapped.
2. The texture mapping method according to claim 1, wherein the constructing a virtual general face model by using the head data set comprises:
establishing and refining a virtual general head and face model in 3DsMax according to the data in the head data set describing the basic proportion and structure of the real person's head;
wherein specific regions in which the edge feature point variation is relatively obvious are selected;
wherein the general head and face model is a three-dimensional mesh patch model expressed as:
M = {V_M, F_M, G_M}
where V_M denotes the set of vertex coordinates of the three-dimensional mesh, F_M denotes the sets of vertex indices that make up the patches, and G_M denotes other information, including smoothing groups, material links and material references.
3. The texture mapping method according to claim 1, wherein the performing feature point labeling on the front view and the side view, selecting facial edge feature points in the labeled front and side views, correcting the facial edge feature points, and cutting according to the corrected facial edge feature point positions to obtain a cut front view and a cut side view containing the facial edge feature points comprises:
cutting the front view and the side view according to contour lines, scaling them proportionally and aligning the faces to obtain an aligned front view and an aligned side view;
wherein the aligned front view and side view are divided into a plurality of regions according to the contour lines;
labeling feature points in the different regions of the aligned front view and side view;
selecting first facial edge feature points in the labeled front view of the real person, and selecting second facial edge feature points in the side view;
correcting the first facial edge feature points and the second facial edge feature points;
taking the corrected first and second facial edge feature points located in the flush regions as matching feature points, and storing the coordinate information of the matching feature points;
connecting the first facial edge feature points in the front view to draw a first curve, and connecting the second facial edge feature points in the side view to draw a second curve;
cutting the front view by taking the first curve and the mirror curve of the first curve as boundaries so as to retain the middle part of the front view, and cutting away the left area of the side view by taking the second curve as the boundary, to obtain the cut front view and the cut side view.
4. The texture mapping method according to claim 1, wherein the performing geometric deformation and feature point adaptation on the general face model based on the feature points labeled in the front view and the side view, using a radial basis interpolation algorithm, to obtain a personalized face model comprises:
determining feature points in the general face model;
locally deforming the surface near the feature points of the general face model based on the feature points labeled in the front view and the side view, using a radial basis interpolation algorithm, to obtain the personalized face model.
5. The texture mapping method according to claim 3, wherein the cutting the front view and the side view according to contour lines, scaling them proportionally and aligning the faces to obtain the aligned front view and side view comprises:
cutting the front view and the side view respectively to obtain a front view and a side view composed of different regions;
scaling the cut front view and side view proportionally in the corresponding regions so that the front view and the side view have the same size in the same region;
taking the top of the head, the eyes, the lips and the chin as contour lines, and aligning the proportionally scaled front view and side view with each other to obtain the mutually aligned front view and side face image.
6. The texture mapping method according to claim 3, wherein the correcting the first facial edge feature points and the second facial edge feature points comprises:
correcting the first and second facial edge feature points by dragging, so that they are located at the facial edge positions.
7. The texture mapping method according to claim 1, wherein the Laplacian pyramid is expressed as:
LP_l = G_l - Expand(G_{l+1}), 0 ≤ l < N;   LP_N = G_N
the spliced Laplacian image of each layer is expressed as:
LP_l^(total) = LP_l^(left) + LP_l^(front) + LP_l^(right)
and the reconstructed Gaussian pyramid is expressed as:
G_N = LP_N;   G_l = LP_l + Expand(G_{l+1}), 0 ≤ l < N
where the Expand operator denotes interpolating and enlarging the input image, G_l denotes the l-th layer image of the Gaussian pyramid, LP_l denotes the l-th layer Laplacian pyramid image, and N is the number of layers.
8. The texture mapping method according to claim 1, wherein the splicing the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image along the dividing boundaries of each layer to obtain the spliced Laplacian pyramid comprises:
splicing the Laplacian pyramids of the cut front view, the miscut-transformed side view and the mirror image along the dividing boundaries of each layer, and fusing on the boundaries by pixel-weighted averaging to obtain the spliced Laplacian image of each layer;
restoring a Laplacian pyramid from the spliced Laplacian images of the layers to obtain the spliced Laplacian pyramid.
9. The texture mapping method according to claim 1, wherein the mapping the fused image into the personalized face model comprises:
unfolding the mesh of the personalized face model by orthogonal projection;
projecting the front mesh of the personalized face model onto the two-dimensional plane directly in front of the face in the fused image;
projecting the two side meshes of the personalized face model onto the perpendicular side planes of the fused image.
10. The texture mapping method according to claim 1, wherein, after projecting the two side meshes of the personalized face model onto the perpendicular side planes of the fused image, the texture mapping method further comprises:
fusing the two projected side meshes and the front mesh according to the projection boundary to obtain the texture-mapped face model of the person to be mapped.
CN202110984213.2A 2021-08-25 2021-08-25 Texture mapping method in three-dimensional virtual human head and face modeling Active CN113808272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110984213.2A CN113808272B (en) 2021-08-25 2021-08-25 Texture mapping method in three-dimensional virtual human head and face modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110984213.2A CN113808272B (en) 2021-08-25 2021-08-25 Texture mapping method in three-dimensional virtual human head and face modeling

Publications (2)

Publication Number Publication Date
CN113808272A true CN113808272A (en) 2021-12-17
CN113808272B CN113808272B (en) 2024-04-12

Family

ID=78894189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110984213.2A Active CN113808272B (en) 2021-08-25 2021-08-25 Texture mapping method in three-dimensional virtual human head and face modeling

Country Status (1)

Country Link
CN (1) CN113808272B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067041A (en) * 2022-01-14 2022-02-18 深圳大学 Material generation method, device, computer equipment and storage medium for three-dimensional model
CN115797556A (en) * 2022-11-22 2023-03-14 灵瞳智能科技(北京)有限公司 Virtual digital human face contour 3D reconstruction device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063582A1 (en) * 2003-08-29 2005-03-24 Samsung Electronics Co., Ltd. Method and apparatus for image-based photorealistic 3D face modeling
CN102222363A (en) * 2011-07-19 2011-10-19 杭州实时数码科技有限公司 Method for fast constructing high-accuracy personalized face model on basis of facial images
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
KR20140122401A (en) * 2013-04-10 2014-10-20 한국과학기술원 Method and apparatus for gernerating 3 dimension face image
WO2017029488A2 (en) * 2015-08-14 2017-02-23 Metail Limited Methods of generating personalized 3d head models or 3d body models
CN109685740A (en) * 2018-12-25 2019-04-26 努比亚技术有限公司 Method and device, mobile terminal and the computer readable storage medium of face normalization
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image
CN110458752A (en) * 2019-07-18 2019-11-15 西北工业大学 An Image Face Swapping Method Based on Local Occlusion
WO2020244076A1 (en) * 2019-06-05 2020-12-10 平安科技(深圳)有限公司 Face recognition method and apparatus, and electronic device and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063582A1 (en) * 2003-08-29 2005-03-24 Samsung Electronics Co., Ltd. Method and apparatus for image-based photorealistic 3D face modeling
CN102222363A (en) * 2011-07-19 2011-10-19 杭州实时数码科技有限公司 Method for fast constructing high-accuracy personalized face model on basis of facial images
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
KR20140122401A (en) * 2013-04-10 2014-10-20 한국과학기술원 Method and apparatus for gernerating 3 dimension face image
WO2017029488A2 (en) * 2015-08-14 2017-02-23 Metail Limited Methods of generating personalized 3d head models or 3d body models
CN109685740A (en) * 2018-12-25 2019-04-26 努比亚技术有限公司 Method and device, mobile terminal and the computer readable storage medium of face normalization
WO2020244076A1 (en) * 2019-06-05 2020-12-10 平安科技(深圳)有限公司 Face recognition method and apparatus, and electronic device and storage medium
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image
CN110458752A (en) * 2019-07-18 2019-11-15 西北工业大学 An Image Face Swapping Method Based on Local Occlusion

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
YANGYU FAN ET AL.: "Label Distribution-Based Facial Attractiveness Computation by Deep Residual Learning", IEEE TRANSACTIONS ON MULTIMEDIA, vol. 20, no. 8, 31 August 2018 (2018-08-31) *
YANGYU FAN ET AL.: "Full Face-and-Head 3D Model With Photorealistic Texture", IEEE ACCESS, vol. 8 *
SHI LI: "Research on a Network-Based Three-Dimensional Face Reconstruction System", China Master's Theses Full-Text Database, Information Science and Technology, no. 11, 15 November 2013 (2013-11-15) *
HU YONGLI ET AL.: "Three-Dimensional Face Reconstruction Based on a Morphable Model and Its Improvement", Computer Engineering, vol. 31, no. 19 *
ZHAN YONGZHAO; HU LINGMIN; SHEN RONGRONG: "Generation of Approximate Full-View Face Images and Mapping onto a Specific Face Model", Journal of System Simulation, no. 03 *
HUANG YANHUI ET AL.: "Automatic Face Photo Transplantation Based on Multi-Scale Analysis", Application Research of Computers, vol. 34, no. 11, 30 November 2017 (2017-11-30) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067041A (en) * 2022-01-14 2022-02-18 深圳大学 Material generation method, device, computer equipment and storage medium for three-dimensional model
CN115797556A (en) * 2022-11-22 2023-03-14 灵瞳智能科技(北京)有限公司 Virtual digital human face contour 3D reconstruction device

Also Published As

Publication number Publication date
CN113808272B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
JP5818773B2 (en) Image processing apparatus, image processing method, and program
CN101916454B (en) Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
JP3954211B2 (en) Method and apparatus for restoring shape and pattern in 3D scene
JP3030485B2 (en) Three-dimensional shape extraction method and apparatus
US20050140670A1 (en) Photogrammetric reconstruction of free-form objects with curvilinear structures
US20150178988A1 (en) Method and a system for generating a realistic 3d reconstruction model for an object or being
CN106023288A (en) Image-based dynamic substitute construction method
CN101404091A (en) Three-dimensional human face reconstruction method and system based on two-step shape modeling
CN112767531B (en) Mobile-end-oriented human body model face area modeling method for virtual fitting
US7076117B2 (en) Methods and apparatus for cut-and-paste editing of multiresolution surfaces
CN102663818A (en) Method and device for establishing three-dimensional craniomaxillofacial morphology model
WO2002013144A1 (en) 3d facial modeling system and modeling method
CN111462030A (en) Multi-image fused stereoscopic set vision new angle construction drawing method
CN113808272A (en) Texture mapping method in three-dimensional virtual human head and face modeling
JP7251003B2 (en) Face mesh deformation with fine wrinkles
CN115861525A (en) Multi-view Face Reconstruction Method Based on Parametric Model
CN110751665A (en) Method and system for reconstructing 3D portrait model by portrait embossment
CN114972612B (en) A kind of image texture generation method and related equipment based on three-dimensional simplified model
CN113989441B (en) Automatic three-dimensional cartoon model generation method and system based on single face image
CN107590858A (en) Medical sample methods of exhibiting and computer equipment, storage medium based on AR technologies
CN107862732B (en) Real-time three-dimensional eyelid reconstruction method and device
CN115082640B (en) 3D face model texture reconstruction method and device based on single image
CN104361630B (en) A kind of acquisition methods of face surface optical field
CN114049281B (en) Wide-angle portrait photo distortion correction method based on self-adaptive grid
JPH03138784A (en) Reconstructing method and display method for three-dimensional model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant