WO2016029768A1 - Method and device for 3D face reconstruction - Google Patents


Info

Publication number
WO2016029768A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
model
average
image
image feature
Prior art date
Application number
PCT/CN2015/085133
Other languages
English (en)
French (fr)
Inventor
吴松城
吴智华
陈军宏
Original Assignee
厦门幻世网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 厦门幻世网络科技有限公司 filed Critical 厦门幻世网络科技有限公司
Publication of WO2016029768A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • Embodiments of the present application relate to the field of information data processing technologies, and in particular, to a method and apparatus for 3D face reconstruction.
  • 3D face reconstruction based on image data has been widely used.
  • the current method of 3D face reconstruction is to collect multiple face images from multiple angles and then align and synthesize these images to obtain a 3D face.
  • this method has special requirements for the images used as reconstruction objects: it can generally only be performed on pictures containing a face, and 3D face reconstruction cannot be achieved for pictures without a face, which reduces the user experience.
  • although this kind of 3D face reconstruction can recover the depth information of the face from multiple pictures, the alignment and synthesis operations involve a large amount of computation and the reconstruction procedure is extremely cumbersome, which is not conducive to improving the efficiency of 3D face reconstruction.
  • the embodiments of the present application provide a 3D face reconstruction method and device, so as to implement 3D face reconstruction for a picture regardless of whether it contains a face.
  • the method for 3D face reconstruction includes:
  • the corresponding points of the image feature points in the 3D average face model are determined, and a fitting operation is performed based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction;
  • the 3D average face model is processed according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;
  • the texture coordinates of each point on the preliminary 3D face are obtained, realizing the 3D face reconstruction.
  • the method further comprises acquiring the principal component components when obtaining the 3D average face model;
  • the fitting operation and the processing of the 3D average face model to obtain a reconstructed preliminary 3D face then specifically include: constructing a cost function E according to the following formula:
  • E = ||Y_image − L(X̄ + S·diag(σ_i)·c)||²
  • where: Y_image is the image feature point vector, X̄ is the 3D average face model vector, S is the component matrix composed of the principal components, diag(σ_i) is the diagonal matrix formed by the σ_i, σ_i is the standard deviation of the i-th principal component, c is the deformation model coefficient corresponding to each principal component, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and their corresponding points in the 3D average face model;
  • solving the cost function E yields the deformation model coefficients c, and the preliminary 3D face is computed as X_model = X̄ + S·diag(σ_i)·c.
  • the distribution of each image feature point has a certain probability; when a fuzzy matrix A is used to represent the probability distribution of the image feature points, the cost function is:
  • E = ||A·(Y_image − L(X̄ + S·diag(σ_i)·c))||²
  • the acquiring the principal component component specifically includes:
  • a 3D face model library is selected, and the principal component components are obtained by performing principal component analysis on the 3D face model library.
  • performing principal component analysis on the 3D face model library to obtain the principal component components specifically includes:
  • obtaining the vertex data of M face models from the 3D face model library, where the vertex data of a face model can be expressed as:
  • X_i = (x_1, y_1, z_1, x_2, y_2, z_2, …, x_N, y_N, z_N)^T ∈ R^{3N}
  • where X_i is the geometric shape vector composed of the vertex coordinates of the i-th face, N is the number of vertices, and R^{3N} is the 3N-dimensional real space;
  • calculating the 3D average face model from the vertex data of the M face models according to the following formula:
  • X̄ = (1/M)·Σ_{i=1..M} X_i
  • obtaining the covariance matrix based on the 3D average face model according to the following formula:
  • C = (1/M)·Σ_{i=1..M} (X_i − X̄)(X_i − X̄)^T
  • solving the covariance matrix C yields the component matrix S composed of the principal components.
  • obtaining the texture coordinates of each point on the preliminary 3D face specifically includes: first computing the texture coordinate coefficients, i.e. the RBF weights w_i^d and the coefficients of the function P_d(x), according to the following formula:
  • G_d = Σ_{i=1..K} w_i^d·φ_d(x, x_i) + P_d(x)
  • where: the subscript d indicates the direction of a coordinate axis, G_d is the coordinate value of the corresponding point of an image feature point on the 3D face, K is the number of feature points, φ_d(x, x_i) is an RBF function of the distance between the point x and the corresponding point x_i of an image feature point on the 3D face, and P_d(x) is a first-order linear function of the point x;
  • the texture coordinates of the points on the 3D face are then obtained using these texture coordinate coefficients.
  • the embodiment of the present application also provides a device for 3D face reconstruction.
  • the device comprises: a first acquiring unit, a second acquiring unit, a fitting operation unit, a face reconstruction unit, and a texture obtaining unit, wherein:
  • the first acquiring unit is configured to acquire an image for 3D face reconstruction and to obtain from the image a preset number of image feature points as 3D face reconstruction feature points, where the set of image feature points reflects the face contour;
  • the second acquiring unit is configured to acquire a 3D average face model;
  • the fitting operation unit is configured to determine the corresponding points of the image feature points in the 3D average face model, and to perform a fitting operation based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction;
  • the face reconstruction unit is configured to process the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;
  • the texture obtaining unit is configured to obtain the texture coordinates of each point on the preliminary 3D face, realizing the 3D face reconstruction.
  • the second acquiring unit is further configured to acquire the principal component components when acquiring the 3D average face model;
  • in this case the fitting operation unit includes a corresponding point determining subunit, a cost function constructing subunit, and a cost function solving subunit, wherein:
  • the corresponding point determining subunit is configured to determine the corresponding points of the image feature points in the 3D average face model;
  • the cost function constructing subunit is configured to construct a cost function E according to the following formula:
  • E = ||Y_image − L(X̄ + S·diag(σ_i)·c)||²
  • where: Y_image is the image feature point vector, X̄ is the 3D average face model vector, S is the component matrix composed of the principal components, diag(σ_i) is the diagonal matrix formed by the σ_i, σ_i is the standard deviation of the i-th principal component, c is the deformation model coefficient corresponding to each principal component, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and their corresponding points in the 3D average face model;
  • the cost function solving subunit is used to solve the cost function E to obtain the deformation model coefficients c;
  • the face reconstruction unit is specifically configured to calculate the preliminary 3D face X_model from the deformation model coefficients c according to the following formula:
  • X_model = X̄ + S·diag(σ_i)·c
  • the second acquiring unit includes a model library selecting subunit and a principal component analyzing subunit, wherein: the model library selecting subunit is used to select a 3D face model library, and the principal component analyzing subunit is used to perform principal component analysis on the 3D face model library to obtain the principal component components.
  • the principal component analysis subunit comprises: a vertex data acquisition subunit, an average model acquisition subunit, and a covariance matrix solving subunit, wherein:
  • the vertex data acquiring subunit is configured to obtain the vertex data of M 3D face models from a 3D face model library, where the vertex data of a face model can be expressed as:
  • X_i = (x_1, y_1, z_1, x_2, y_2, z_2, …, x_N, y_N, z_N)^T ∈ R^{3N}
  • where X_i is the geometric shape vector composed of the vertex coordinates of the i-th 3D face model, and N is the number of vertices;
  • the average model acquiring subunit is configured to calculate the 3D average face model from the vertex data of the M 3D face models according to the following formula:
  • X̄ = (1/M)·Σ_{i=1..M} X_i
  • the covariance matrix solving subunit is configured to obtain the covariance matrix based on the 3D average face model according to the following formula:
  • C = (1/M)·Σ_{i=1..M} (X_i − X̄)(X_i − X̄)^T
  • and to solve the covariance matrix C to obtain the component matrix S composed of the principal components.
  • in the 3D face reconstruction process, the embodiments of the present application acquire a 3D average face model, so that 3D face reconstruction can be realized based on the correspondence between the 3D average face model and the image used for 3D face reconstruction.
  • reconstructing the 3D face in this way removes the restrictions on the image to be reconstructed, which may be an image that contains a face or one that does not, thereby improving the user experience.
  • in addition, the embodiments of the present application can be completed with a single image, avoiding image alignment and synthesis calculations, reducing the amount of computation, and improving the efficiency of 3D face reconstruction.
  • FIG. 1 is a flow chart of an embodiment of a 3D face reconstruction method of the present application
  • FIG. 2a-2d are schematic diagrams of the effects of 3D face reconstruction based on images containing a human face, wherein: FIG. 2a is an image containing a real face used for 3D face reconstruction, FIG. 2b is the 3D face obtained by reconstructing the image containing a real face, FIG. 2c is an image used for 3D face reconstruction that does not contain a real face but does contain a face contour, and FIG. 2d is the 3D face obtained by reconstructing the image containing a face contour;
  • FIG. 3a, 3b are schematic diagrams of effects of performing 3D face reconstruction based on an image that does not include a face, wherein: FIG. 3a is an image for 3D face reconstruction, and FIG. 3b is a reconstructed 3D face;
  • FIG. 4 is a flow chart of one way of fitting the 3D average face model to the image feature points and processing the 3D average face model in the present application;
  • FIG. 5 is a structural block diagram of an embodiment of a 3D face reconstruction device of the present application.
  • Referring to FIG. 1, the figure shows the flow of an embodiment (hereinafter referred to as the basic embodiment) of the 3D face reconstruction method provided by the present application, and the flow includes:
  • Step S11 Acquire an image for 3D face reconstruction, and obtain a preset number of image feature point information as a 3D face reconstruction feature point from the image, where the set of image feature points is used to reflect the face contour;
  • in this embodiment, the reconstruction of the 3D face needs to be based on an image.
  • to this end, one or more images for 3D face reconstruction need to be obtained first. The images may come from multiple sources; for example, an image may be stored on a local storage device, or it may be obtained by browsing and downloading on the network.
  • the format of the image is not limited in this embodiment; for example, it may be JPEG, BMP, TIFF, RAW, or the like.
  • although the purpose of this embodiment is to perform 3D face reconstruction, this does not mean that only an image containing a human face can be used; in fact, 3D face reconstruction can be achieved for any image, even one that does not contain a face.
  • for images that do not contain a face, this extends the applicable scope of 3D face reconstruction, and the 3D face thus built adds to the fun of face reconstruction and enhances the user experience.
  • after acquiring the image, the present embodiment obtains image feature points from the image; taken as a whole, these feature points can reflect the face contour.
  • for example, the nose, eyes, mouth, and so on can describe a face well, so these positions, or certain points at these positions, can be used as feature points, and the information of these feature points is extracted.
  • the image feature point information can be obtained automatically or manually.
  • for the former, a program can be written according to a certain algorithm (such as the active shape model) to automatically read the coordinate values of the main contour parts from the image used for 3D face reconstruction, and these coordinate values are used as the image feature point information.
  • for the latter, positions can be specified manually on the picture, and the coordinate information of the specified positions is recognized and used as the image feature point information.
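As a minimal illustration of what this step produces, the sketch below (numpy-based; the landmark coordinates and helper name are hypothetical, not from the patent) packs K feature point positions into a single vector of the kind used as the image feature point vector Y_image in the later fitting step.

```python
import numpy as np

def build_feature_vector(landmarks):
    """Pack K (x, y) landmark positions into a single 2K feature vector.

    `landmarks` is a list of (x, y) pixel coordinates, obtained either
    automatically (e.g. via an active shape model) or specified manually.
    """
    pts = np.asarray(landmarks, dtype=float)   # shape (K, 2)
    return pts.reshape(-1)                     # shape (2K,)

# Hypothetical positions for nose tip, eye corners, and mouth corners.
landmarks = [(120.0, 95.0), (88.0, 60.0), (152.0, 61.0),
             (98.0, 140.0), (143.0, 141.0)]
Y_image = build_feature_vector(landmarks)
print(Y_image.shape)  # (10,)
```

Flattening the points this way matches the vector form assumed by the cost function later in the document.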
  • Step S12 acquiring a 3D average face model
  • a person skilled in the art can obtain a 3D average face model in various ways. It should be noted that although in this embodiment the image for 3D face reconstruction is acquired first and the 3D average face model afterwards, the present application is not limited to this order: the 3D average face model may be acquired first and then the image for 3D face reconstruction, or the two steps may be performed simultaneously.
  • Step S13 determining the corresponding points of the image feature points in the 3D average face model, and performing a fitting operation based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction;
  • after the 3D average face model and the information of each image feature point are obtained, a fitting calculation is performed using the correspondence between each image feature point and the corresponding point on the 3D average face model, yielding the deformation model coefficients of each image feature point relative to its corresponding point in the 3D average face model; this makes it convenient to use the deformation model coefficients to realize the 3D face reconstruction.
  • the correspondence between each image feature point and the corresponding point on the 3D average face model can be expressed as Y_P ↔ X_P, P = 1, 2, …, K, where K is the number of feature points and P is the feature point index.
  • Step S14 processing the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;
  • the deformation model coefficients represent the changes of the feature points on the image relative to the 3D average face model; therefore, using the deformation model coefficients obtained in the previous steps, the 3D average face model can be processed to obtain the reconstructed preliminary 3D face.
  • Step S15 obtaining the texture coordinates of each point on the preliminary 3D face to realize the 3D face reconstruction;
  • linear interpolation can be used to calculate the texture coordinates of each point, thereby refining the reconstructed contour.
  • preferably, the present application obtains the texture coordinates of each point by first computing the texture coordinate coefficients according to the following formula:
  • G_d = Σ_{i=1..K} w_i^d·φ_d(x, x_i) + P_d(x)
  • where: the subscript d indicates the direction of a coordinate axis, G_d is the coordinate value of the corresponding point of an image feature point on the 3D face, K is the number of feature points, φ_d(x, x_i) is an RBF function of the distance between the point x and the corresponding point x_i of an image feature point on the 3D face, and P_d(x) is a first-order linear function of the point x.
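The RBF system above can be sketched as follows. This is a hedged toy implementation, not the patent's code: it assumes φ(r) = r, a 2D affine P_d(x), and the standard augmented linear system for the weights w_i^d and polynomial coefficients, so that the interpolant reproduces the known texture coordinates at the feature points.

```python
import numpy as np

def fit_rbf(centers, values):
    """Fit g(x) = sum_i w_i * phi(||x - x_i||) + P(x), with phi(r) = r and
    P affine.  `centers`: (K, 2) feature point positions; `values`: (K,)
    known coordinate values G_d at those points."""
    K = len(centers)
    # Pairwise distance matrix Phi and polynomial block P = [1, x, y].
    Phi = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    P = np.hstack([np.ones((K, 1)), centers])
    # Augmented system: interpolation rows plus orthogonality constraints.
    A = np.block([[Phi, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([values, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:K], sol[K:]            # RBF weights w, polynomial coeffs a

def eval_rbf(x, centers, w, a):
    r = np.linalg.norm(centers - x, axis=-1)
    return r @ w + a[0] + a[1] * x[0] + a[2] * x[1]

# Toy check: interpolate texture u-coordinates at 4 feature points.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u_vals = np.array([0.0, 1.0, 0.0, 1.0])
w, a = fit_rbf(centers, u_vals)
print(round(eval_rbf(np.array([0.0, 0.0]), centers, w, a), 6))  # 0.0
```

Once the coefficients are solved per axis d, evaluating the interpolant at any surface point yields its texture coordinate, which matches the two-stage procedure described above.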
  • in this embodiment, the 3D average face model is acquired during the 3D face reconstruction process, so that the 3D face reconstruction can be realized based on the correspondence between the 3D average face model and the image used for 3D face reconstruction.
  • the image used for 3D face reconstruction adjusts the 3D average face model through the deformation coefficients; unlike the prior art, the 3D face reconstruction does not rely solely on analyzing and computing the image itself, so the image to be reconstructed is no longer restricted.
  • the image used to reconstruct the 3D face can be either an image containing a face or one that does not contain a face; as mentioned above, 3D face reconstruction based on an image without a face can produce an amusing 3D face, greatly improving the user experience.
  • FIGS. 2a-2d and FIGS. 3a, 3b respectively show schematic diagrams of 3D face reconstruction based on images that contain a face and an image that does not.
  • for images containing a face, there may be two situations in actual applications: one is an image containing a real face, as shown in FIG. 2a, and the other is an image containing a non-real face but having a face contour, for example a cartoon or line drawing of a character, as shown in FIG. 2c.
  • FIGS. 2a, 2c, and 3a are the images used for 3D face reconstruction, and FIGS. 2b, 2d, and 3b are the reconstructed 3D faces (note: only the needed parts are shown in the figures).
  • as mentioned above, the prior art often uses multiple images from different angles to reconstruct the face; since the poses and lighting conditions of the images differ, operations such as pose alignment and lighting-condition judgment must be performed on the face information of the photos, which adds processing steps and computation.
  • the embodiment of the present application can be completed with a single image, thereby avoiding operations such as image alignment and condition judgment, reducing the amount of computation, and improving the efficiency of 3D face reconstruction.
  • there are also methods that perform 3D face reconstruction from a single image by simultaneously analyzing and synthesizing shape and texture; their analysis and synthesis steps need to be repeated many times, and each iteration involves a large amount of calculation.
  • in the basic embodiment, the deformation model coefficients of the 3D face reconstruction are obtained by fitting the 3D average face model with the information of the feature points, and there are various specific implementations of this step.
  • a method for obtaining the deformation model coefficients is given here as an example; it requires acquiring the principal component components first, and then proceeds as follows (see FIG. 4, which shows the specific process):
  • Step S41 determining corresponding points of the image feature points in the 3D average face model
  • Step S42 constructing the cost function E according to the following formula:
  • E = ||Y_image − L(X̄ + S·diag(σ_i)·c)||²
  • where: Y_image is the image feature point vector, X̄ is the 3D average face model vector, S is the component matrix composed of the principal components, diag(σ_i) is the diagonal matrix formed by the σ_i, σ_i is the standard deviation of the i-th principal component, c is the deformation model coefficient corresponding to each principal component, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and their corresponding points in the 3D average face model;
  • Step S43 solving the cost function E to obtain the deformation model coefficients c;
  • Step S44 calculating the preliminary 3D face X_model from the deformation model coefficients c according to the following formula:
  • X_model = X̄ + S·diag(σ_i)·c
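Steps S42-S44 amount to a linear least-squares problem in c. The following numpy sketch is illustrative only (the dimensions, random data, and seed are synthetic assumptions, not from the patent): it builds a toy X̄, S, diag(σ_i), and L, generates Y_image from known coefficients, and recovers them by minimizing E.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 3N = 12 model coordinates, m = 8 observed image
# coordinates, p = 3 principal components (all synthetic).
n3, m, p = 12, 8, 3
X_bar = rng.normal(size=n3)                       # 3D average face vector
S = np.linalg.qr(rng.normal(size=(n3, p)))[0]     # orthonormal components
sigma = np.array([2.0, 1.0, 0.5])                 # per-component std devs
L = rng.normal(size=(m, n3))                      # model-to-image projection

# Ground-truth coefficients and the feature vector they generate.
c_true = np.array([0.8, -0.4, 0.2])
Y_image = L @ (X_bar + S @ np.diag(sigma) @ c_true)

# Steps S42/S43: minimizing E = ||Y_image - L(X_bar + S diag(sigma) c)||^2
# is linear least squares in c.
B = L @ S @ np.diag(sigma)
c, *_ = np.linalg.lstsq(B, Y_image - L @ X_bar, rcond=None)

# Step S44: the reconstructed preliminary 3D face.
X_model = X_bar + S @ np.diag(sigma) @ c
print(np.allclose(c, c_true))  # True
```

With noise-free synthetic data the minimizer recovers c exactly; on real images the residual of E absorbs feature point noise, which is where the fuzzy weighting matrix A mentioned elsewhere in the document would enter.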
  • solving for the deformation model coefficients in this way can reduce or eliminate possible errors, thus avoiding "breakdown" artifacts in the reconstructed 3D face.
  • the 3D average face model can be obtained in various ways; one of them is to compute it from a 3D face model library, as follows.
  • the 3D face model library is a collection of 3D face models.
  • each 3D face model in the library can be built in a variety of ways; for example, one way is to scan a human face with a laser scanner to obtain the original three-dimensional data corresponding to the face, and these three-dimensional data form a 3D face model.
  • each 3D face model in the library can also be provided at different levels of precision. In general, an ordinary 3D face model meets the needs, but in cases where the 3D face must be refined, a 3D face model of the corresponding level of refinement is required.
  • in practice, a series of optimization operations is usually performed on the basis of the ordinary 3D face model to obtain a more fine-grained 3D face model: the original 3D face model undergoes preprocessing operations such as smoothing, hole filling, and coordinate correction.
  • the numbers of vertices and faces and the structures of the preprocessed 3D faces are inconsistent, so these 3D face models are aligned such that, for example, the nose tip of each face has a consistent vertex index in every 3D face model.
  • the vertex data of M face models is obtained from the 3D face model library, where the vertex data of a face model can be expressed as:
  • X_i = (x_1, y_1, z_1, x_2, y_2, z_2, …, x_N, y_N, z_N)^T ∈ R^{3N}
  • where X_i is the geometric shape vector composed of the vertex coordinates of the i-th face, N is the number of vertices, and R^{3N} is the 3N-dimensional real space.
  • the principal component components can be obtained as follows:
  • the covariance matrix is obtained based on the 3D average face model according to the following formula:
  • C = (1/M)·Σ_{i=1..M} (X_i − X̄)(X_i − X̄)^T
  • the covariance matrix can also be expressed as:
  • C = S·diag(σ_i)²·S^T
  • where S denotes the principal component matrix, diag(σ_i) is the diagonal matrix formed by the σ_i, and σ_i is the standard deviation of the i-th principal component.
  • corresponding to the above method, the present application also provides an embodiment of a device for 3D face reconstruction.
  • referring to FIG. 5, there is shown a structural block diagram of an embodiment of the 3D face reconstruction device of the present application.
  • the device embodiment may include a first obtaining unit U51, a second acquiring unit U52, a fitting operation unit U53, a face reconstruction unit U54, and a texture obtaining unit U55, wherein:
  • the first acquiring unit U51 is configured to acquire an image for 3D face reconstruction and to obtain from the image a preset number of image feature points as 3D face reconstruction feature points, where the set of image feature points reflects the face contour;
  • a second obtaining unit U52 configured to acquire a 3D average face model
  • the fitting operation unit U53 is configured to determine the corresponding points of the image feature points in the 3D average face model, and to perform a fitting operation based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction;
  • the face reconstruction unit U54 is configured to correct the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;
  • the texture obtaining unit U55 is configured to obtain the texture coordinates of each point on the preliminary 3D face, realizing the 3D face reconstruction.
  • the working process of this device embodiment is as follows: the first acquiring unit U51 acquires an image for 3D face reconstruction and obtains from the image a preset number of image feature points as 3D face reconstruction feature points; the second acquiring unit U52 acquires a 3D average face model; after the image, the image feature point information, and the 3D average face model are acquired, the fitting operation unit U53 determines the corresponding points of the image feature points in the 3D average face model and performs a fitting operation based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction; the face reconstruction unit U54 then corrects the 3D average face model according to the deformation model coefficients to obtain the reconstructed preliminary 3D face; finally, the texture obtaining unit U55 obtains the texture coordinates of each point on the preliminary 3D face, thereby realizing the 3D face reconstruction.
  • this device embodiment can achieve the same technical effects as the foregoing method embodiment; to avoid repetition, they are not repeated here.
  • the second acquiring unit in the foregoing device embodiment may have multiple functions as needed, acquiring both the 3D average face model and other information related to the 3D face reconstruction.
  • one functional embodiment is that the second acquiring unit U52 not only obtains the 3D average face model but also acquires the principal component components when acquiring the 3D average face model.
  • the fitting operation unit U53 may include a corresponding point determining subunit U531, a cost function constructing subunit U532, and a cost function solving subunit U533, wherein:
  • the corresponding point determining subunit U531 is configured to determine the corresponding points of the image feature points in the 3D average face model;
  • the cost function constructing subunit U532 is used to construct the cost function E according to the following formula:
  • E = ||Y_image − L(X̄ + S·diag(σ_i)·c)||²
  • where: Y_image is the image feature point vector, X̄ is the 3D average face model vector, S is the component matrix composed of the principal components, diag(σ_i) is the diagonal matrix formed by the σ_i, σ_i is the standard deviation of the i-th principal component, c is the deformation model coefficient corresponding to each principal component, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and their corresponding points in the 3D average face model;
  • the cost function solving subunit U533 is used to solve the cost function E to obtain the deformation model coefficients c;
  • the face reconstruction unit U54 can be specifically used to calculate X_model from the deformation model coefficients c according to the following formula:
  • X_model = X̄ + S·diag(σ_i)·c
  • the second acquiring unit U52 includes a model library selecting subunit U521 and a principal component analyzing subunit U522, wherein: the model library selecting subunit U521 is used to select a 3D face model library, and the principal component analyzing subunit U522 is used to perform principal component analysis on the 3D face model library to obtain the principal component components.
  • the principal component analysis subunit U522 may further include: a vertex data acquisition subunit, an average model acquisition subunit, and a covariance matrix solving subunit, wherein:
  • the vertex data acquiring subunit is configured to obtain the vertex data of M face models from the 3D face model library, where the vertex data of a face model can be expressed as:
  • X_i = (x_1, y_1, z_1, x_2, y_2, z_2, …, x_N, y_N, z_N)^T ∈ R^{3N}
  • where X_i is the geometric shape vector composed of the vertex coordinates of the i-th face, and N is the number of vertices;
  • the average model acquiring subunit is configured to calculate the 3D average face model from the vertex data of the M face models according to the following formula:
  • X̄ = (1/M)·Σ_{i=1..M} X_i
  • the covariance matrix solving subunit is configured to obtain the covariance matrix based on the 3D average face model according to the following formula:
  • C = (1/M)·Σ_{i=1..M} (X_i − X̄)(X_i − X̄)^T
  • and to solve the covariance matrix C to obtain the component matrix S composed of the principal components.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide a method for 3D face reconstruction. The method includes: acquiring an image for 3D face reconstruction, and obtaining from the image a preset number of image feature points as 3D face reconstruction feature points, where the set of image feature points reflects the face contour; acquiring a 3D average face model; determining the corresponding points of the image feature points in the 3D average face model, and performing a fitting operation based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction; correcting the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face; and obtaining the texture coordinates of each point on the preliminary 3D face, realizing the 3D face reconstruction. Embodiments of the present application also provide a device for 3D face reconstruction. Embodiments of the present application can perform 3D face reconstruction on an arbitrary image.

Description

Method and device for 3D face reconstruction
Technical Field
Embodiments of the present application relate to the technical field of information data processing, and in particular to a method and device for 3D face reconstruction.
Background Art
In information processing environments, 3D face reconstruction based on picture data has been widely applied. The current method of 3D face reconstruction is to collect multiple face pictures from multiple angles and then perform alignment and synthesis operations on these pictures to obtain a 3D face. However, this method has special requirements for the pictures used as reconstruction objects: it can generally only be performed on pictures containing a face, and 3D face reconstruction cannot be achieved for pictures without a face, which reduces the user experience. In addition, although this kind of 3D face reconstruction can recover the depth information of the face from multiple pictures, the alignment and synthesis operations involve a large amount of computation and the reconstruction procedure is extremely cumbersome, which is not conducive to improving the efficiency of 3D face reconstruction.
Summary of the Invention
In order to solve the above problems, embodiments of the present application provide a method and device for 3D face reconstruction, so that 3D face reconstruction can be realized for a picture regardless of whether it contains a face.
The method for 3D face reconstruction provided by the embodiments of the present application includes:
acquiring an image for 3D face reconstruction, and obtaining from the image a preset number of image feature points as 3D face reconstruction feature points, where the set of image feature points reflects the face contour;
acquiring a 3D average face model;
determining the corresponding points of the image feature points in the 3D average face model, performing a fitting operation based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction, and processing the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;
obtaining the texture coordinates of each point on the preliminary 3D face, realizing the 3D face reconstruction.
Preferably, the method further includes acquiring the principal component components when obtaining the 3D average face model;
the fitting operation based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction, and the processing of the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face, specifically include:
constructing a cost function E according to the following formula:
E = ||Y_image − L(X̄ + S·diag(σ_i)·c)||²
where: Y_image is the image feature point vector, X̄ is the 3D average face model vector, S is the component matrix composed of the principal components, diag(σ_i) is the diagonal matrix formed by the σ_i, σ_i is the standard deviation of the i-th principal component, c is the deformation model coefficient corresponding to each principal component, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and their corresponding points in the 3D average face model;
solving the cost function E to obtain the deformation model coefficients c;
calculating the preliminary 3D face X_model from the deformation model coefficients c according to the following formula:
X_model = X̄ + S·diag(σ_i)·c
Preferably, the distribution of each image feature point has a certain probability; with a fuzzy matrix A representing the probability distribution of the image feature points, the cost function is:
E = ||A·(Y_image − L(X̄ + S·diag(σ_i)·c))||²
Preferably, acquiring the principal component components specifically includes:
selecting a 3D face model library and performing principal component analysis on it to obtain the principal component components.
Further preferably, performing principal component analysis on the 3D face model library to obtain the principal component components specifically includes:
obtaining the vertex data of M face models from the 3D face model library, where the vertex data of a face model can be expressed as:
X_i = (x_1, y_1, z_1, x_2, y_2, z_2, …, x_N, y_N, z_N)^T ∈ R^{3N}
where X_i is the geometric shape vector composed of the vertex coordinates of the i-th face, N is the number of vertices, and R^{3N} is the 3N-dimensional real space;
calculating the 3D average face model from the vertex data of the M 3D face models according to the following formula:
X̄ = (1/M)·Σ_{i=1..M} X_i
obtaining the covariance matrix based on the 3D average face model according to the following formula:
C = (1/M)·Σ_{i=1..M} (X_i − X̄)(X_i − X̄)^T
and solving the covariance matrix C to obtain the component matrix S composed of the principal components.
Preferably, obtaining the texture coordinates of each point on the preliminary 3D face specifically includes:
first calculating the texture coordinate coefficients, i.e. the RBF weights w_i^d and the coefficients of the function P_d(x), according to the following formula:
G_d = Σ_{i=1..K} w_i^d·φ_d(x, x_i) + P_d(x)
where: the subscript d indicates the direction of a coordinate axis, G_d is the coordinate value of the corresponding point of an image feature point on the 3D face, K is the number of feature points, φ_d(x, x_i) is an RBF function of the distance between the point x and the corresponding point x_i of an image feature point on the 3D face, and P_d(x) is a first-order linear function of the point x;
obtaining the texture coordinates of each point on the 3D face using the texture coordinate coefficients.
Embodiments of the present application also provide a device for 3D face reconstruction. The device includes a first acquiring unit, a second acquiring unit, a fitting operation unit, a face reconstruction unit, and a texture obtaining unit, wherein:
the first acquiring unit is configured to acquire an image for 3D face reconstruction and to obtain from the image a preset number of image feature points as 3D face reconstruction feature points, where the set of image feature points reflects the face contour;
the second acquiring unit is configured to acquire a 3D average face model;
the fitting operation unit is configured to determine the corresponding points of the image feature points in the 3D average face model and to perform a fitting operation based on the information of the image feature points and the correspondence between the image feature points and their corresponding points in the 3D average face model to obtain the deformation model coefficients of the 3D face reconstruction;
the face reconstruction unit is configured to process the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;
the texture obtaining unit is configured to obtain the texture coordinates of each point on the preliminary 3D face, realizing the 3D face reconstruction.
Preferably, the second acquiring unit is further configured to acquire the principal component components when acquiring the 3D average face model; in this case the fitting operation unit includes a corresponding point determining subunit, a cost function constructing subunit, and a cost function solving subunit, wherein:
the corresponding point determining subunit is configured to determine the corresponding points of the image feature points in the 3D average face model;
the cost function constructing subunit is configured to construct the cost function E according to the following formula:
E = ||Y_image − L(X̄ + S·diag(σ_i)·c)||²
where: Y_image is the image feature point vector, X̄ is the 3D average face model vector, S is the component matrix composed of the principal components, diag(σ_i) is the diagonal matrix formed by the σ_i, σ_i is the standard deviation of the i-th principal component, c is the deformation model coefficient corresponding to each principal component, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and their corresponding points in the 3D average face model;
the cost function solving subunit is configured to solve the cost function E to obtain the deformation model coefficients c;
the face reconstruction unit is specifically configured to calculate the preliminary 3D face X_model from the deformation model coefficients c according to the following formula:
X_model = X̄ + S·diag(σ_i)·c
Preferably, the second acquiring unit includes a model library selecting subunit and a principal component analyzing subunit, wherein: the model library selecting subunit is configured to select a 3D face model library, and the principal component analyzing subunit is configured to perform principal component analysis on the 3D face model library to obtain the principal component components.
Preferably, the principal component analyzing subunit includes a vertex data acquiring subunit, an average model acquiring subunit, and a covariance matrix solving subunit, wherein:
the vertex data acquiring subunit is configured to obtain the vertex data of M 3D face models from the 3D face model library, where the vertex data of a face model can be expressed as:
X_i = (x_1, y_1, z_1, x_2, y_2, z_2, …, x_N, y_N, z_N)^T ∈ R^{3N}
where X_i is the geometric shape vector composed of the vertex coordinates of the i-th 3D face model, and N is the number of vertices;
the average model acquiring subunit is configured to calculate the 3D average face model from the vertex data of the M 3D face models according to the following formula:
X̄ = (1/M)·Σ_{i=1..M} X_i
the covariance matrix solving subunit is configured to obtain the covariance matrix based on the 3D average face model according to the following formula:
C = (1/M)·Σ_{i=1..M} (X_i − X̄)(X_i − X̄)^T
and to solve the covariance matrix C to obtain the component matrix S composed of the principal components.
In the embodiments of the present application, a 3D average face model is obtained during 3D face reconstruction, so that the reconstruction can be carried out on the basis of the correspondence between the 3D average face model and the image used for reconstruction. Reconstructing a 3D face in this way frees the image serving as the reconstruction object from any restriction: it may be an image containing a face or an image containing no face, which improves the user experience. Moreover, the embodiments of the present application can complete the 3D face reconstruction with a single image, thereby avoiding image alignment and synthesis operations, reducing the amount of computation and improving the efficiency of 3D face reconstruction.
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects, features and advantages of the exemplary embodiments of the present invention will become readily understood by reading the following detailed description with reference to the accompanying drawings, in which several embodiments of the present invention are shown by way of example and not limitation:

Fig. 1 is a flowchart of an embodiment of the 3D face reconstruction method of the present application;

Figs. 2a-2d illustrate the effect of 3D face reconstruction of the present application based on images containing a face, in which: Fig. 2a is an image containing a real face used for 3D face reconstruction, Fig. 2b is the 3D face obtained by reconstruction from the image containing a real face, Fig. 2c is an image containing no real face but containing a face-like contour used for 3D face reconstruction, and Fig. 2d is the 3D face obtained by reconstruction from the image containing a face-like contour;

Figs. 3a and 3b illustrate the effect of 3D face reconstruction of the present application based on an image containing no face, in which: Fig. 3a is the image used for 3D face reconstruction and Fig. 3b is the reconstructed 3D face;

Fig. 4 is a flowchart of one way, in the present application, of performing the fitting operation between the 3D average face model and the image feature points and of processing the 3D average face model;

Fig. 5 is a structural block diagram of an embodiment of the 3D face reconstruction apparatus of the present application.
DETAILED DESCRIPTION

The principles and spirit of the present invention will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are given solely to enable those skilled in the art to better understand and thus implement the present invention, and not to limit its scope in any way. Rather, these embodiments are provided so that the disclosure of the present application will be thorough and complete, and will fully convey its scope to those skilled in the art.
Referring to Fig. 1, which shows the flow of an embodiment (hereinafter the basic embodiment) of the 3D face reconstruction method provided by the present application, the flow comprises:

Step S11: obtaining an image for 3D face reconstruction, and obtaining from the image a preset number of pieces of image feature point information serving as 3D face reconstruction feature points, the set of image feature points being used to reflect the contour of a face.

In this embodiment the 3D face is reconstructed on the basis of an image, so one or more images for 3D face reconstruction must first be obtained. The image may come from various sources: it may, for example, be an image stored on a local storage device, or an image browsed and downloaded from the network. This embodiment likewise places no restriction on the image format, which may be JPEG, BMP, TIFF, RAW and so on. It should be particularly noted that although the purpose of this embodiment is 3D face reconstruction, this does not mean that only images containing a face can be used; in fact, 3D face reconstruction can equally be carried out on any image containing no face. For images containing no face, this extends the applicable scope of 3D face reconstruction, and the resulting 3D faces add entertainment value and enhance the user experience.

After the image for 3D face reconstruction is obtained, this embodiment obtains image feature points from it; taken as a whole, these feature points can reflect the contour of a face. For example, the nose, eyes, mouth and the like depict a face well, so these locations, or particular points at these locations, can be taken as feature points and their information extracted. The image feature point information can be obtained automatically or manually. In the former case, a program can be written according to a certain algorithm (for example an active shape model) to automatically read the coordinate values of the main contour parts from the image and use them as the image feature point information; in the latter case, positions can be specified manually on the picture, and the coordinate information of the specified positions recognized as the image feature point information.
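As a minimal illustration of the manual-specification path, the K picked coordinates can be stacked into a single feature-point vector. The function name and the sample landmarks below are hypothetical, not taken from the patent; this is a sketch of the data layout only:

```python
import numpy as np

def build_feature_vector(landmarks):
    """Stack K (u, v) image feature points into one feature-point vector.

    `landmarks` may come from an automatic detector (e.g. an active shape
    model) or from manual picking; here they are supplied directly.
    """
    pts = np.asarray(landmarks, dtype=float)
    assert pts.ndim == 2 and pts.shape[1] == 2, "expected K points of (u, v)"
    return pts.reshape(-1)  # (u1, v1, u2, v2, ..., uK, vK)

# Hypothetical landmarks roughly marking eyes, nose tip and mouth corners.
Y_image = build_feature_vector([(120, 95), (180, 95), (150, 140),
                                (130, 175), (170, 175)])
print(Y_image.shape)  # (10,)
```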
Step S12: obtaining a 3D average face model.

Those skilled in the art can obtain the 3D average face model in a variety of ways. It should be noted that although in this embodiment the image for 3D face reconstruction is obtained before the 3D average face model, the present application is not limited to this order: the 3D average face model may be obtained first and the image afterwards, or the two steps may be performed simultaneously.
Step S13: determining the points in the 3D average face model corresponding to the image feature points, and performing a fitting operation based on the image feature point information and the correspondence between the image feature points and the corresponding points in the 3D average face model, so as to obtain the deformation model coefficients for 3D face reconstruction.

After the 3D average face model and the information of each image feature point are obtained, a fitting computation is performed using the correspondence between each image feature point and the corresponding point on the 3D average face model, yielding the deformation model coefficients of the image feature points relative to the corresponding points in the 3D average face model, so that these coefficients can then be used to accomplish the 3D face reconstruction. The correspondence between each image feature point and the corresponding point on the 3D average face model can be expressed as:

$$y_P^{image} \leftrightarrow k_P, \quad P = 1, 2, \ldots, K$$

where $\leftrightarrow$ denotes the correspondence, K is the number of feature points, P is the feature point index, $y_P^{image}$ denotes the P-th image feature point on the image, and $k_P$ denotes the index of the point on the 3D average face model corresponding to the P-th image feature point.
Step S14: processing the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face.

The deformation model coefficients characterize the change of the feature points on the image relative to the 3D average face model. Therefore, processing the 3D average face model with the deformation model coefficients obtained in the preceding step yields a preliminary 3D face that matches the face, or the non-face scene, in the image used for 3D face reconstruction.

Step S15: obtaining the texture coordinates of each point on the preliminary 3D face to accomplish the 3D face reconstruction.

A preliminary 3D face is obtained in the manner described above. In order to map the information in the image onto the reconstructed 3D face more faithfully, the preliminary 3D face still needs to be refined. There are many ways to achieve this; for example, the texture coordinates of each point can be computed by a linearly interpolated projection, thereby refining the contour. As an example, the present application obtains the texture coordinates of each point as follows:
The texture coordinate coefficients $\lambda_i^d$ and the coefficients of the function $P_d(x)$ are computed according to the following formula:

$$G_d = \sum_{i=1}^{K} \lambda_i^d\,\varphi_d(x, x_i) + P_d(x)$$

where the subscript d denotes the direction of a coordinate axis, $G_d$ is the coordinate value of the point on the 3D face corresponding to an image feature point, K is the number of feature points, $\varphi_d(x, x_i)$ is an RBF function of the distance between a point x and the point $x_i$ on the 3D face corresponding to an image feature point, and $P_d(x)$ is a first-order linear function of the point x. Through the known correspondence $y_P^{image} \leftrightarrow k_P$ between the image feature points and the corresponding points on the 3D average face model, the texture coordinate coefficients can be solved.

These coefficients are then used to obtain the texture coordinates of each point on the preliminary 3D face.
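The RBF interpolation step above can be sketched in NumPy under assumptions the patent does not fix: a Gaussian kernel is assumed for $\varphi_d$, the linear part has the four coefficients of $p_0 + p \cdot x$, and both are solved jointly from the interpolation constraints plus the usual orthogonality side conditions. All names are illustrative:

```python
import numpy as np

def rbf_texture_coefficients(X3d, G, phi=lambda r: np.exp(-r**2)):
    """Solve for RBF weights lam_i and linear-part coefficients so that
    f(x) = sum_i lam_i * phi(|x - x_i|) + p0 + p . x  interpolates
    f(x_i) = G_i at the K corresponding points X3d (shape K x 3)."""
    X3d = np.asarray(X3d, float)
    K = len(X3d)
    # Pairwise kernel matrix Phi_ij = phi(|x_i - x_j|)
    Phi = phi(np.linalg.norm(X3d[:, None, :] - X3d[None, :, :], axis=-1))
    P = np.hstack([np.ones((K, 1)), X3d])        # rows (1, x, y, z)
    # Saddle system: interpolation constraints plus side conditions on lam
    A = np.block([[Phi, P], [P.T, np.zeros((4, 4))]])
    b = np.concatenate([np.asarray(G, float), np.zeros(4)])
    sol = np.linalg.solve(A, b)
    lam, poly = sol[:K], sol[K:]

    def f(x):
        x = np.asarray(x, float)
        return lam @ phi(np.linalg.norm(X3d - x, axis=-1)) + poly[0] + poly[1:] @ x

    return lam, poly, f
```

In practice one such solve is done per texture-coordinate direction d, and `f` is then evaluated at every vertex of the preliminary 3D face.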
In this embodiment, a 3D average face model is obtained during 3D face reconstruction, so that the reconstruction can be carried out on the basis of the correspondence between the 3D average face model and the image used for reconstruction. Compared with the prior art, since the basis of the reconstruction comes from the 3D average face model, the image used for reconstruction merely adjusts and corrects the 3D average face model through the deformation coefficients, unlike prior-art 3D face reconstruction, which purely analyzes and computes on the image itself. As a result, the image serving as the reconstruction object is no longer subject to any restriction: it may be an image containing a face or an image containing no face. As stated above, 3D face reconstruction performed on an image containing no face can produce an amusing 3D face, greatly improving the user experience. To illustrate this technical effect, Figs. 2a-2d and Figs. 3a-3b show schematic diagrams of 3D face reconstruction based on images containing a face and on an image containing no face, respectively. For images containing a face, two situations may arise in practice: first, images containing a real face, as shown in Fig. 2a; second, images containing no real face but having a face-like contour, such as cartoon figures or line drawings of people, as shown in Fig. 2c. In the figures, Figs. 2a, 2c and 3a are the images used for 3D face reconstruction, and Figs. 2b, 2d and 3b are the reconstructed 3D faces (note: only the relevant parts are shown in the figures as needed).

Furthermore, during 3D face reconstruction the prior art usually uses multiple pictures taken from different angles. Because the face pose, illumination and other conditions presented in these pictures differ, the face information in the photos must go through cumbersome processing steps such as pose alignment and illumination-condition judgment, whereas the embodiment of the present application can be completed with a single image, thereby avoiding such processing as image alignment and condition judgment, reducing the amount of computation and improving the efficiency of 3D face reconstruction. It should be noted that the prior art may include approaches that reconstruct a 3D face from a single image; such an approach analyzes and synthesizes the shape and the texture map simultaneously, and the analysis-synthesis steps must be repeated many times, with the mutual influence of shape and texture considered in every iteration, so the number of iterations grows and the amount of computation remains enormous. Relative to this prior art, the present embodiment likewise achieves the beneficial effect of reducing the amount of computation and improving the efficiency of 3D face reconstruction. Moreover, in such single-image reconstruction methods, owing to the diversity of illumination conditions in the picture itself, the texture map produces erroneous estimates when the shape is restored during reconstruction, so the reconstructed 3D model tends to be stiff and the face prone to distortion. The present embodiment, by contrast, is based on the 3D average face model, which it adjusts with the deformation model coefficients obtained from the feature point information, so the reconstructed 3D face is on the whole fine and flexible, "spikes" and "jumps" are avoided, and the "skin" of the 3D face is smooth.
Step S13 of the basic embodiment above mentions that the deformation model coefficients for 3D face reconstruction are obtained by a fitting operation between the 3D average face model and the feature point information; this step can be implemented in a number of concrete ways. To illustrate this technical feature, an exemplary method of obtaining the deformation model coefficients is given here. The method requires the principal components to be obtained, and then proceeds as follows (see Fig. 4, which shows the specific flow):

Step S41: determining the points in the 3D average face model corresponding to the image feature points;

Step S42: constructing a cost function E according to the following formula:

$$E = \left\| Y_{image} - L\left( \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c \right) \right\|^2$$

where $Y_{image}$ is the image feature point vector, $\bar{X}$ is the 3D average face model vector, S is the component matrix formed by the principal components, diag is the diagonal matrix formed by the $\sigma_i$, $\sigma_i$ is the standard deviation of the i-th principal component, c contains the deformation model coefficients corresponding to the respective principal components, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and the corresponding points in the 3D average face model;

Step S43: solving the cost function E to obtain the deformation model coefficients c;

Step S44: computing the preliminary 3D face $X_{model}$ from the deformation model coefficients c according to the following formula:

$$X_{model} = \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c$$
The above process of obtaining the deformation model coefficients makes use of the information of each image feature point. In practice these image feature points involve a probability distribution: for example, when feature points are identified automatically by a program, different feature points may be identified with different probabilities; a point whose "feature" role (i.e. its ability to truly reflect the contour of a face) is prominent is more likely to be identified as a feature point, and vice versa. Therefore, to avoid or reduce the errors produced by automatic detection or manual adjustment of the feature points, the probability distribution of each feature point should be taken into account when constructing the cost function. Assuming the probability distributions of the image feature points form a fuzzy matrix A, the cost function can be constructed as:

$$E = \left\| A\left[ Y_{image} - L\left( \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c \right) \right] \right\|^2$$

Solving for the deformation model coefficients on the basis of this cost function, which incorporates the probabilities of the image feature points, can reduce or eliminate possible errors and thus avoid "damage" to the reconstructed 3D face.
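Under the cost function as reconstructed above, solving for c is a linear least-squares problem. The sketch below (illustrative names, NumPy) covers both the unweighted case and the fuzzy-matrix-weighted case, where A defaults to the identity when omitted:

```python
import numpy as np

def fit_deformation_coeffs(Y_image, L, X_bar, S, sigma, A=None):
    """Least-squares minimizer of
    E = || A [ Y_image - L (X_bar + S diag(sigma) c) ] ||^2,
    with A treated as the identity when no fuzzy matrix is supplied."""
    B = L @ S @ np.diag(sigma)      # maps coefficients c to projected offsets
    r = Y_image - L @ X_bar         # residual left by the average face model
    if A is not None:
        B, r = A @ B, A @ r
    c, *_ = np.linalg.lstsq(B, r, rcond=None)
    return c

def reconstruct_face(X_bar, S, sigma, c):
    """Preliminary 3D face: X_model = X_bar + S diag(sigma) c."""
    return X_bar + S @ (sigma * c)
```

Because the model is linear in c, one `lstsq` call replaces iterative analysis-synthesis; a positive-diagonal fuzzy matrix A simply reweights how strongly each feature point constrains the fit.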
S12 of the basic embodiment above mentions obtaining the 3D average face model. In practice there are many ways to obtain it; one is given here by way of example: a 3D face model library is selected, and the 3D average face model is then obtained in the course of principal component analysis. The 3D model library is a collection of 3D face models. Each 3D face model in the library can be built in various ways; for example, a laser scanner can scan a face to obtain the raw three-dimensional data corresponding to the face, and these data form the 3D face model. Each 3D face model in the library can also be presented at different levels of precision. In situations where the demands on 3D face reconstruction are modest, an ordinary 3D face model suffices, but where the 3D face must be presented in fine detail, a 3D face model of corresponding fineness is required. In the latter case, a series of optimization operations is usually performed on an ordinary 3D face model to obtain a finer-grained one. For example, to eliminate or reduce the influence on the 3D face model of subtle changes in illumination during scanning, unevenness of the facial surface, and differences in the subjects' poses, the raw 3D face model undergoes preprocessing such as smoothing, hole filling and coordinate correction. Also, to prevent the differences between faces from leaving the preprocessed 3D faces inconsistent in vertex count, face count and structure, an optical-flow alignment is applied to the 3D face models so that, for instance, the nose tip has the same index in every 3D face model. After the 3D model library is selected, the present application preferably uses principal component analysis to obtain the principal components:

The vertex data of M face models are obtained from the 3D face model library; the vertex data of a face model can be expressed as:

$$X_i = (x_1, y_1, z_1, x_2, y_2, z_2, \ldots, x_N, y_N, z_N)^T \in R^{3N}$$

where $X_i$ is the geometric shape vector formed by the vertex coordinates of the i-th face, N is the number of vertices, and $R^{3N}$ is the real space.

The 3D average face model is computed from the vertex data of the M 3D face models according to the following formula:

$$\bar{X} = \frac{1}{M} \sum_{i=1}^{M} X_i$$

On the basis of the 3D average face model, the principal components can be obtained as follows. The covariance matrix is obtained according to the following formula:

$$C = \frac{1}{M} \sum_{i=1}^{M} (X_i - \bar{X})(X_i - \bar{X})^T$$

According to linear algebra, the covariance matrix can also be expressed as:

$$C = S\,\mathrm{diag}(\sigma_i)^2\,S^T$$

where S denotes the principal component matrix, diag is the diagonal matrix formed by the $\sigma_i$, and $\sigma_i$ is the standard deviation of the i-th principal component.

S can then be obtained by singular value decomposition.
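The mean-face and SVD computation described above can be sketched in NumPy as follows (the function name is illustrative; the factorization $C = S\,\mathrm{diag}(\sigma_i)^2\,S^T$ follows from the SVD of the deviation matrix):

```python
import numpy as np

def pca_face_models(X):
    """PCA of M face shape vectors (rows of X, each of length 3N).

    Returns the 3D average face X_bar, the component matrix S (columns are
    the principal components) and the standard deviations sigma, so that
    the covariance matrix factorizes as C = S diag(sigma)^2 S^T."""
    X = np.asarray(X, float)
    M = X.shape[0]
    X_bar = X.mean(axis=0)                   # 3D average face model
    D = (X - X_bar).T                        # 3N x M deviations from the mean
    # SVD of D gives the eigen-structure of C = D D^T / M directly.
    U, s, _ = np.linalg.svd(D, full_matrices=False)
    sigma = s / np.sqrt(M)                   # sigma_i of the i-th component
    return X_bar, U, sigma
```

Taking the SVD of the 3N x M deviation matrix rather than eigendecomposing the 3N x 3N covariance matrix keeps the cost manageable when 3N is large and M is small, which is the usual case for scanned face libraries.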
The above describes in detail an embodiment of the 3D face reconstruction method of the present application; correspondingly, the present application also provides an embodiment of a 3D face reconstruction apparatus. Referring to Fig. 5, which shows a structural block diagram of an embodiment of the 3D face reconstruction apparatus of the present application, the apparatus embodiment may comprise a first obtaining unit U51, a second obtaining unit U52, a fitting operation unit U53, a face reconstruction unit U54 and a texture obtaining unit U55, wherein:

the first obtaining unit U51 is configured to obtain an image for 3D face reconstruction and to obtain from the image a preset number of pieces of image feature point information serving as 3D face reconstruction feature points, the set of image feature points being used to reflect the contour of a face;

the second obtaining unit U52 is configured to obtain a 3D average face model;

the fitting operation unit U53 is configured to determine the points in the 3D average face model corresponding to the image feature points, and to perform a fitting operation based on the image feature point information and the correspondence between the image feature points and the corresponding points in the 3D average face model, so as to obtain the deformation model coefficients for 3D face reconstruction;

the face reconstruction unit U54 is configured to correct the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;

the texture obtaining unit U55 is configured to obtain the texture coordinates of each point on the preliminary 3D face, thereby accomplishing the 3D face reconstruction.

The apparatus embodiment works as follows: the first obtaining unit U51 obtains the image for 3D face reconstruction and obtains from it a preset number of pieces of image feature point information serving as 3D face reconstruction feature points; the second obtaining unit U52 obtains the 3D average face model; after the image, the image feature point information and the 3D average face model are obtained, the fitting operation unit U53 determines the points in the 3D average face model corresponding to the image feature points and performs the fitting operation based on the image feature point information and the correspondence between the image feature points and the corresponding points in the 3D average face model, obtaining the deformation model coefficients for 3D face reconstruction; the face reconstruction unit U54 then corrects the 3D average face model according to the deformation model coefficients to obtain the reconstructed preliminary 3D face; and on this basis the texture obtaining unit U55 obtains the texture coordinates of each point on the preliminary 3D face, accomplishing the 3D face reconstruction. This apparatus embodiment achieves the same technical effects as the foregoing method embodiment, which are not repeated here.
In practice, the second obtaining unit in the apparatus embodiment above can have multiple functions as required, obtaining the 3D average face model as well as information relevant to 3D face reconstruction. For example, in one functional arrangement the second obtaining unit U52 obtains not only the 3D average face model but also the principal components when obtaining it. In this case the fitting operation unit U53 may comprise a corresponding-point determination subunit U531, a cost function construction subunit U532 and a cost function solving subunit U533, wherein:

the corresponding-point determination subunit U531 is configured to determine the points in the 3D average face model corresponding to the image feature points;

the cost function construction subunit U532 is configured to construct a cost function E according to the following formula:

$$E = \left\| Y_{image} - L\left( \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c \right) \right\|^2$$

where $Y_{image}$ is the image feature point vector, $\bar{X}$ is the 3D average face model vector, S is the component matrix formed by the principal components, diag is the diagonal matrix formed by the $\sigma_i$, $\sigma_i$ is the standard deviation of the i-th principal component, c contains the deformation model coefficients corresponding to the respective principal components, and L is the projection matrix from the 3D average face model to the image, which can reflect the correspondence between the image feature points and the corresponding points in the 3D average face model;

the cost function solving subunit U533 is configured to solve the cost function E to obtain the deformation model coefficients c;

meanwhile, the face reconstruction unit U54 may be specifically configured to compute $X_{model}$ from the deformation model coefficients c according to the following formula:

$$X_{model} = \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c$$
Moreover, in practice the aforementioned second obtaining unit can not only have different functions but can also adopt different structures to realize them as circumstances require. For example, the second obtaining unit U52 may comprise a model library selection subunit U521 and a principal component analysis subunit U522, wherein: the model library selection subunit U521 is configured to select a 3D face model library; the principal component analysis subunit U522 is configured to perform principal component analysis on the 3D face model library to obtain the principal components. The principal component analysis subunit U522 may further comprise a vertex data obtaining subunit, an average model obtaining subunit and a covariance matrix solving subunit, wherein:

the vertex data obtaining subunit is configured to obtain the vertex data of M face models from the 3D face model library, where the vertex data of a face model can be expressed as:

$$X_i = (x_1, y_1, z_1, x_2, y_2, z_2, \ldots, x_N, y_N, z_N)^T \in R^{3N}$$

where $X_i$ is the geometric shape vector formed by the vertex coordinates of the i-th face and N is the number of vertices;

the average model obtaining subunit is configured to compute the 3D average face model from the vertex data of the M face models according to the following formula:

$$\bar{X} = \frac{1}{M} \sum_{i=1}^{M} X_i$$

the covariance matrix solving subunit is configured to obtain the covariance matrix based on the 3D average face model according to the following formula:

$$C = \frac{1}{M} \sum_{i=1}^{M} (X_i - \bar{X})(X_i - \bar{X})^T$$

and to solve the covariance matrix

$$C = S\,\mathrm{diag}(\sigma_i)^2\,S^T$$

to obtain the component matrix S formed by the principal components.
It should be noted that, for brevity of description, the above embodiments of this specification and their various variant implementations each emphasize their differences from the other embodiments or variants; for the parts that are the same or similar among the various cases, reference may be made to one another. In particular, since the several improvements of the apparatus embodiment are substantially similar to the method embodiment, they are described relatively simply, and reference may be made to the corresponding parts of the description of the method embodiment. The units of the apparatus embodiment described above may or may not be physically separate; they may be located in one place or distributed over multiple network environments. In practical application, some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.

It is also worth noting that although the spirit and principles of the present invention have been described above with reference to several specific embodiments, the invention is not limited to the specific embodiments disclosed, nor does the division into aspects imply that the features in these aspects cannot be combined; such division is merely for convenience of expression. The present invention is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

  1. A method for 3D face reconstruction, characterized in that the method comprises:
    obtaining an image for 3D face reconstruction, and obtaining from the image a preset number of pieces of image feature point information serving as 3D face reconstruction feature points, the set of image feature points being used to reflect the contour of a face;
    obtaining a 3D average face model;
    determining the points in the 3D average face model corresponding to the image feature points, performing a fitting operation based on the image feature point information and the correspondence between the image feature points and the corresponding points in the 3D average face model to obtain deformation model coefficients for 3D face reconstruction, and processing the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;
    obtaining the texture coordinates of each point on the preliminary 3D face to accomplish the 3D face reconstruction.
  2. The method according to claim 1, characterized in that the method further comprises obtaining principal components when obtaining the 3D average face model;
    the performing of a fitting operation based on the image feature point information and the correspondence between the image feature points and the corresponding points in the 3D average face model to obtain deformation model coefficients for 3D face reconstruction, and the processing of the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face, specifically comprise:
    constructing a cost function E according to the following formula:
    $$E = \left\| Y_{image} - L\left( \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c \right) \right\|^2$$
    where $Y_{image}$ is the image feature point vector, $\bar{X}$ is the 3D average face model vector, S is the component matrix formed by the principal components, diag is the diagonal matrix formed by the $\sigma_i$, $\sigma_i$ is the standard deviation of the i-th principal component, c contains the deformation model coefficients corresponding to the respective principal components, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and the corresponding points in the 3D average face model;
    solving the cost function E to obtain the deformation model coefficients c;
    computing the preliminary 3D face $X_{model}$ from the deformation model coefficients c according to the following formula:
    $$X_{model} = \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c$$
  3. The method according to claim 2, characterized in that the distribution of each image feature point carries a certain probability; with a fuzzy matrix A representing the probability distribution of the image feature points, the cost function is:
    $$E = \left\| A\left[ Y_{image} - L\left( \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c \right) \right] \right\|^2$$
  4. The method according to claim 2, characterized in that the obtaining of principal components specifically comprises:
    selecting a 3D face model library, and performing principal component analysis on the 3D face model library to obtain the principal components.
  5. The method according to claim 4, characterized in that the performing of principal component analysis on the 3D face model library to obtain the principal components specifically comprises:
    obtaining the vertex data of M 3D face models from the 3D face model library, the vertex data of a face model being expressed as:
    $$X_i = (x_1, y_1, z_1, x_2, y_2, z_2, \ldots, x_N, y_N, z_N)^T \in R^{3N}$$
    where $X_i$ is the geometric shape vector formed by the vertex coordinates of the i-th 3D face model, N is the number of vertices, and $R^{3N}$ is the real space;
    computing the 3D average face model from the vertex data of the M 3D face models according to the following formula:
    $$\bar{X} = \frac{1}{M} \sum_{i=1}^{M} X_i$$
    obtaining the covariance matrix based on the 3D average face model according to the following formula:
    $$C = \frac{1}{M} \sum_{i=1}^{M} (X_i - \bar{X})(X_i - \bar{X})^T$$
    and solving the covariance matrix
    $$C = S\,\mathrm{diag}(\sigma_i)^2\,S^T$$
    to obtain the component matrix S formed by the principal components.
  6. The method according to any one of claims 1 to 5, characterized in that the obtaining of the texture coordinates of each point on the preliminary 3D face specifically comprises:
    computing, according to the following formula, the texture coordinate coefficients $\lambda_i^d$ and the coefficients of the function $P_d(x)$:
    $$G_d = \sum_{i=1}^{K} \lambda_i^d\,\varphi_d(x, x_i) + P_d(x)$$
    where the subscript d denotes the direction of a coordinate axis, $G_d$ is the coordinate value of the point on the 3D face corresponding to an image feature point, K is the number of feature points, $\varphi_d(x, x_i)$ is an RBF function of the distance between a point x and the point $x_i$ on the 3D face corresponding to an image feature point, and $P_d(x)$ is a first-order linear function of the point x;
    obtaining the texture coordinates of each point on the preliminary 3D face from the texture coordinate coefficients.
  7. An apparatus for 3D face reconstruction, characterized in that the apparatus comprises a first obtaining unit, a second obtaining unit, a fitting operation unit, a face reconstruction unit and a texture obtaining unit, wherein:
    the first obtaining unit is configured to obtain an image for 3D face reconstruction and to obtain from the image a preset number of pieces of image feature point information serving as 3D face reconstruction feature points, the set of image feature points being used to reflect the contour of a face;
    the second obtaining unit is configured to obtain a 3D average face model;
    the fitting operation unit is configured to determine the points in the 3D average face model corresponding to the image feature points, and to perform a fitting operation based on the image feature point information and the correspondence between the image feature points and the corresponding points in the 3D average face model, so as to obtain deformation model coefficients for 3D face reconstruction;
    the face reconstruction unit is configured to process the 3D average face model according to the deformation model coefficients to obtain a reconstructed preliminary 3D face;
    the texture obtaining unit is configured to obtain the texture coordinates of each point on the preliminary 3D face, thereby accomplishing the 3D face reconstruction.
  8. The apparatus according to claim 7, characterized in that the second obtaining unit is further configured to obtain principal components when obtaining the 3D average face model, in which case the fitting operation unit comprises a corresponding-point determination subunit, a cost function construction subunit and a cost function solving subunit, wherein:
    the corresponding-point determination subunit is configured to determine the points in the 3D average face model corresponding to the image feature points;
    the cost function construction subunit is configured to construct a cost function E according to the following formula:
    $$E = \left\| Y_{image} - L\left( \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c \right) \right\|^2$$
    where $Y_{image}$ is the image feature point vector, $\bar{X}$ is the 3D average face model vector, S is the component matrix formed by the principal components, diag is the diagonal matrix formed by the $\sigma_i$, $\sigma_i$ is the standard deviation of the i-th principal component, c contains the deformation model coefficients corresponding to the respective principal components, and L is the projection matrix from the 3D average face model to the image, which reflects the correspondence between the image feature points and the corresponding points in the 3D average face model;
    the cost function solving subunit is configured to solve the cost function E to obtain the deformation model coefficients c;
    the face reconstruction unit is specifically configured to compute the preliminary 3D face $X_{model}$ from the deformation model coefficients c according to the following formula:
    $$X_{model} = \bar{X} + S\,\mathrm{diag}(\sigma_i)\,c$$
  9. The apparatus according to claim 7 or 8, characterized in that the second obtaining unit comprises a model library selection subunit and a principal component analysis subunit, wherein:
    the model library selection subunit is configured to select a 3D face model library;
    the principal component analysis subunit is configured to perform principal component analysis on the 3D face model library to obtain the principal components.
  10. The apparatus according to claim 9, characterized in that the principal component analysis subunit comprises a vertex data obtaining subunit, an average model obtaining subunit and a covariance matrix solving subunit, wherein:
    the vertex data obtaining subunit is configured to obtain the vertex data of M 3D face models from the 3D face model library, the vertex data of a face model being expressible as:
    $$X_i = (x_1, y_1, z_1, x_2, y_2, z_2, \ldots, x_N, y_N, z_N)^T \in R^{3N}$$
    where $X_i$ is the geometric shape vector formed by the vertex coordinates of the i-th 3D face model, N is the number of vertices, and $R^{3N}$ is the real space;
    the average model obtaining subunit is configured to compute the 3D average face model from the vertex data of the M 3D face models according to the following formula:
    $$\bar{X} = \frac{1}{M} \sum_{i=1}^{M} X_i$$
    the covariance matrix solving subunit is configured to obtain the covariance matrix based on the 3D average face model according to the following formula:
    $$C = \frac{1}{M} \sum_{i=1}^{M} (X_i - \bar{X})(X_i - \bar{X})^T$$
    and to solve the covariance matrix
    $$C = S\,\mathrm{diag}(\sigma_i)^2\,S^T$$
    to obtain the component matrix S formed by the principal components.
PCT/CN2015/085133 2014-08-29 2015-07-27 一种3d人脸重建的方法及其装置 WO2016029768A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410436238.9A CN104157010B (zh) 2014-08-29 2014-08-29 一种3d人脸重建的方法及其装置
CN201410436238.9 2014-08-29

Publications (1)

Publication Number Publication Date
WO2016029768A1 true WO2016029768A1 (zh) 2016-03-03

Family

ID=51882498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/085133 WO2016029768A1 (zh) 2014-08-29 2015-07-27 一种3d人脸重建的方法及其装置

Country Status (2)

Country Link
CN (1) CN104157010B (zh)
WO (1) WO2016029768A1 (zh)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961149A (zh) * 2017-05-27 2018-12-07 北京旷视科技有限公司 图像处理方法、装置和系统及存储介质
CN109151540A (zh) * 2017-06-28 2019-01-04 武汉斗鱼网络科技有限公司 视频图像的交互处理方法及装置
CN109360270A (zh) * 2018-11-13 2019-02-19 盎锐(上海)信息科技有限公司 基于人工智能的3d人脸姿态对齐算法及装置
CN111652974A (zh) * 2020-06-15 2020-09-11 腾讯科技(深圳)有限公司 三维人脸模型的构建方法、装置、设备及存储介质
CN111710035A (zh) * 2020-07-16 2020-09-25 腾讯科技(深圳)有限公司 人脸重建方法、装置、计算机设备及存储介质
CN111914106A (zh) * 2020-08-19 2020-11-10 腾讯科技(深圳)有限公司 纹理与法线库构建方法、纹理与法线图生成方法及装置
CN112085835A (zh) * 2020-08-31 2020-12-15 腾讯科技(深圳)有限公司 三维卡通人脸生成方法、装置、电子设备及存储介质
CN112419485A (zh) * 2020-11-25 2021-02-26 北京市商汤科技开发有限公司 一种人脸重建方法、装置、计算机设备及存储介质
CN112614213A (zh) * 2020-12-14 2021-04-06 杭州网易云音乐科技有限公司 人脸表情确定方法、表情参数确定模型、介质及设备
CN113128253A (zh) * 2019-12-30 2021-07-16 Tcl集团股份有限公司 一种三维人脸模型的重建方法及装置
CN113506220A (zh) * 2021-07-16 2021-10-15 厦门美图之家科技有限公司 3d顶点驱动的人脸姿态编辑方法、系统及电子设备
CN113591602A (zh) * 2021-07-08 2021-11-02 娄浩哲 一种基于单视角的人脸三维轮廓特征重建装置及重建方法
CN115187822A (zh) * 2022-07-28 2022-10-14 广州方硅信息技术有限公司 人脸图像数据集分析方法、直播人脸图像处理方法及装置
US11941753B2 (en) 2018-08-27 2024-03-26 Alibaba Group Holding Limited Face pose estimation/three-dimensional face reconstruction method, apparatus, and electronic device

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104157010B (zh) * 2014-08-29 2017-04-12 厦门幻世网络科技有限公司 一种3d人脸重建的方法及其装置
CN104778004A (zh) * 2015-03-24 2015-07-15 深圳市艾优尼科技有限公司 一种信息内容匹配方法
CN105094523B (zh) * 2015-06-17 2019-02-05 厦门幻世网络科技有限公司 一种3d动画的展现方法及装置
CN107274493B (zh) * 2017-06-28 2020-06-19 河海大学常州校区 一种基于移动平台的三维虚拟试发型人脸重建方法
CN108399649B (zh) * 2018-03-05 2021-07-20 中科视拓(北京)科技有限公司 一种基于级联回归网络的单张图片三维人脸重建方法
CN108717730B (zh) * 2018-04-10 2023-01-10 福建天泉教育科技有限公司 一种3d人物重建的方法及终端
CN108898665A (zh) * 2018-06-15 2018-11-27 上饶市中科院云计算中心大数据研究院 三维人脸重建方法、装置、设备及计算机可读存储介质
CN109584145A (zh) * 2018-10-15 2019-04-05 深圳市商汤科技有限公司 卡通化方法和装置、电子设备和计算机存储介质
CN109409274B (zh) * 2018-10-18 2020-09-04 四川云从天府人工智能科技有限公司 一种基于人脸三维重建和人脸对齐的人脸图像变换方法
CN110675487B (zh) * 2018-12-13 2023-05-09 中科天网(广东)科技有限公司 基于多角度二维人脸的三维人脸建模、识别方法及装置
CN109685873B (zh) * 2018-12-14 2023-09-05 广州市百果园信息技术有限公司 一种人脸重建方法、装置、设备和存储介质
CN109447043A (zh) * 2018-12-23 2019-03-08 广东腾晟信息科技有限公司 一种人脸自动建模方法
CN111508069B (zh) * 2020-05-22 2023-03-21 南京大学 一种基于单张手绘草图的三维人脸重建方法
CN113593042A (zh) * 2021-08-13 2021-11-02 成都数联云算科技有限公司 3d模型重建方法、装置、计算机设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1818977A (zh) * 2006-03-16 2006-08-16 上海交通大学 由一幅正面图像实现快速人脸模型重建的方法
CN101303772A (zh) * 2008-06-20 2008-11-12 浙江大学 一种基于单幅图像的非线性三维人脸建模方法
CN103413351A (zh) * 2013-07-26 2013-11-27 南京航空航天大学 基于压缩感知理论的三维人脸快速重建方法
CN104157010A (zh) * 2014-08-29 2014-11-19 厦门幻世网络科技有限公司 一种3d人脸重建的方法及其装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GONG, XUN ET AL.: "3D Face Deformable Model Based on Feature Points", JOURNAL OF SOFTWARE, vol. 20, no. 3, 30 March 2009 (2009-03-30) *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961149A (zh) * 2017-05-27 2018-12-07 北京旷视科技有限公司 图像处理方法、装置和系统及存储介质
CN109151540A (zh) * 2017-06-28 2019-01-04 武汉斗鱼网络科技有限公司 视频图像的交互处理方法及装置
CN109151540B (zh) * 2017-06-28 2021-11-09 武汉斗鱼网络科技有限公司 视频图像的交互处理方法及装置
US11941753B2 (en) 2018-08-27 2024-03-26 Alibaba Group Holding Limited Face pose estimation/three-dimensional face reconstruction method, apparatus, and electronic device
CN109360270A (zh) * 2018-11-13 2019-02-19 盎锐(上海)信息科技有限公司 基于人工智能的3d人脸姿态对齐算法及装置
CN109360270B (zh) * 2018-11-13 2023-02-10 盎维云(深圳)计算有限公司 基于人工智能的3d人脸姿态对齐方法及装置
CN113128253A (zh) * 2019-12-30 2021-07-16 Tcl集团股份有限公司 一种三维人脸模型的重建方法及装置
CN113128253B (zh) * 2019-12-30 2024-05-03 Tcl科技集团股份有限公司 一种三维人脸模型的重建方法及装置
CN111652974A (zh) * 2020-06-15 2020-09-11 腾讯科技(深圳)有限公司 三维人脸模型的构建方法、装置、设备及存储介质
CN111652974B (zh) * 2020-06-15 2023-08-25 腾讯科技(深圳)有限公司 三维人脸模型的构建方法、装置、设备及存储介质
CN111710035A (zh) * 2020-07-16 2020-09-25 腾讯科技(深圳)有限公司 人脸重建方法、装置、计算机设备及存储介质
CN111710035B (zh) * 2020-07-16 2023-11-07 腾讯科技(深圳)有限公司 人脸重建方法、装置、计算机设备及存储介质
CN111914106A (zh) * 2020-08-19 2020-11-10 腾讯科技(深圳)有限公司 纹理与法线库构建方法、纹理与法线图生成方法及装置
CN111914106B (zh) * 2020-08-19 2023-10-13 腾讯科技(深圳)有限公司 纹理与法线库构建方法、纹理与法线图生成方法及装置
CN112085835A (zh) * 2020-08-31 2020-12-15 腾讯科技(深圳)有限公司 三维卡通人脸生成方法、装置、电子设备及存储介质
CN112085835B (zh) * 2020-08-31 2024-03-22 腾讯科技(深圳)有限公司 三维卡通人脸生成方法、装置、电子设备及存储介质
CN112419485B (zh) * 2020-11-25 2023-11-24 北京市商汤科技开发有限公司 一种人脸重建方法、装置、计算机设备及存储介质
CN112419485A (zh) * 2020-11-25 2021-02-26 北京市商汤科技开发有限公司 一种人脸重建方法、装置、计算机设备及存储介质
CN112614213B (zh) * 2020-12-14 2024-01-23 杭州网易云音乐科技有限公司 人脸表情确定方法、表情参数确定模型、介质及设备
CN112614213A (zh) * 2020-12-14 2021-04-06 杭州网易云音乐科技有限公司 人脸表情确定方法、表情参数确定模型、介质及设备
CN113591602A (zh) * 2021-07-08 2021-11-02 娄浩哲 一种基于单视角的人脸三维轮廓特征重建装置及重建方法
CN113591602B (zh) * 2021-07-08 2024-04-30 娄浩哲 一种基于单视角的人脸三维轮廓特征重建装置及重建方法
CN113506220A (zh) * 2021-07-16 2021-10-15 厦门美图之家科技有限公司 3d顶点驱动的人脸姿态编辑方法、系统及电子设备
CN113506220B (zh) * 2021-07-16 2024-04-05 厦门美图之家科技有限公司 3d顶点驱动的人脸姿态编辑方法、系统及电子设备
CN115187822A (zh) * 2022-07-28 2022-10-14 广州方硅信息技术有限公司 人脸图像数据集分析方法、直播人脸图像处理方法及装置

Also Published As

Publication number Publication date
CN104157010B (zh) 2017-04-12
CN104157010A (zh) 2014-11-19

Similar Documents

Publication Publication Date Title
WO2016029768A1 (zh) 一种3d人脸重建的方法及其装置
US9679192B2 (en) 3-dimensional portrait reconstruction from a single photo
US11302064B2 (en) Method and apparatus for reconstructing three-dimensional model of human body, and storage medium
CN111784821B (zh) 三维模型生成方法、装置、计算机设备及存储介质
JP4733318B2 (ja) 顔の特徴をアニメーション化する方法およびシステムならびに表情変換のための方法およびシステム
KR101560508B1 (ko) 3차원 이미지 모델 조정을 위한 방법 및 장치
WO2015188684A1 (zh) 三维模型重建方法与系统
WO2015139574A1 (zh) 一种静态物体重建方法和系统
US11508107B2 (en) Additional developments to the automatic rig creation process
Li et al. Detail-preserving and content-aware variational multi-view stereo reconstruction
US20110148875A1 (en) Method and apparatus for capturing motion of dynamic object
US10169891B2 (en) Producing three-dimensional representation based on images of a person
JP2011170891A (ja) 顔画像処理方法およびシステム
CN102663820A (zh) 三维头部模型重建方法
JP5460499B2 (ja) 画像処理装置およびコンピュータプログラム
KR20090092473A (ko) 3차원 변형 가능 형상 모델에 기반한 3차원 얼굴 모델링방법
CN113111861A (zh) 人脸纹理特征提取、3d人脸重建方法及设备及存储介质
US11321960B2 (en) Deep learning-based three-dimensional facial reconstruction system
CN111815768B (zh) 三维人脸重建方法和装置
Lacher et al. Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation
Achenbach et al. Accurate Face Reconstruction through Anisotropic Fitting and Eye Correction.
CN113538682A (zh) 模型训练、头部重建方法、电子设备及存储介质
US20230031750A1 (en) Topologically consistent multi-view face inference using volumetric sampling
CN110852934A (zh) 图像处理方法及装置、图像设备及存储介质
CN113223188B (zh) 一种视频人脸胖瘦编辑方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15834828

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.08.2017)

122 Ep: pct application non-entry in european phase

Ref document number: 15834828

Country of ref document: EP

Kind code of ref document: A1