CN109325994B - Method for enhancing data based on three-dimensional face - Google Patents
- Publication number
- CN109325994B (application CN201811056176.3A)
- Authority
- CN
- China
- Prior art keywords
- face
- expression
- point cloud
- dimensional
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The invention discloses a method for enhancing three-dimensional face data. Dense correspondences are established between different three-dimensional faces in a database, and the face pair with the largest non-rigid deformation difference is selected to generate a new identity; a multi-linear 3DMM is fitted to the database faces and the new-identity faces, its expression parameters are varied to generate different expressions, and these expressions are transferred back to the input faces to generate new expressions. The new identity and expression information enlarges the original three-dimensional face data set, greatly reduces the burden of manually capturing faces of different individuals with different expressions using depth sensors such as the Kinect, and provides strong support for tasks such as three-dimensional face reconstruction and recognition.
Description
Technical Field
The invention relates to the technical field of computer vision and image processing, and in particular to a method for three-dimensional face data enhancement.
Background
As a biometric trait, the human face carries rich identity information: differences in facial features allow information transmission, identity verification, and more. Three-dimensional face reconstruction has long been a research hotspot in computer vision, image processing, and pattern recognition, and data augmentation both reduces the effort of three-dimensional modeling and improves face recognition accuracy. In the big-data era, as software and hardware mature, three-dimensional face reconstruction and recognition are gradually being applied to video surveillance and security, and the technology is also in demand in related fields such as Virtual Reality (VR), medical technology, and film characters. Enhancing three-dimensional face data therefore has great scientific and practical value for research on the three-dimensional face reconstruction problem.
Two-dimensional face recognition is maturing, but two-dimensional face images are easily affected by illumination, expression, and pose changes, which degrades recognition performance to some extent. Three-dimensional face data are robust to such changes, retain the intrinsic geometry of the face, and carry more information than two-dimensional data. However, three-dimensional face reconstruction suffers from large simulation differences for the same individual, high computational cost, and long reconstruction times. The invention provides a method for enhancing three-dimensional data that enlarges data sets, avoids the loss of depth information during image acquisition, reduces computational cost, and improves reconstruction accuracy.
Disclosure of Invention
The invention aims to provide a method for enhancing three-dimensional face data, so as to address problems in the prior art such as the scarcity of three-dimensional face data sets, large three-dimensional simulation differences for the same individual, and poor reconstruction results.
To achieve this aim, the invention adopts the following technical scheme:
A method for three-dimensional face data enhancement, characterized in that: in a three-dimensional face database, dense correspondences are established for each individual face, the face pair with the largest non-rigid shape difference is selected to generate a new identity, and a new expression is generated using a multi-linear 3DMM; the new identity and expression information enlarges the original three-dimensional face data set. The method comprises the following steps:
(1) Establishing dense correspondences for the three-dimensional face data: a three-dimensional face database is selected, and the frontal-pose face point cloud of each individual is registered to the average face point cloud model of the BFM (Basel Face Model) using the optimal-step non-rigid ICP (Iterative Closest Point) method, yielding three-dimensional face point cloud models in dense correspondence, each with the same dimension;
(2) Generating a new identity: from the densely corresponding face point cloud models established above, the face pair with the largest non-rigid shape difference is selected, and the face with the new identity is obtained by formula (1),
where a three-dimensional face point cloud is represented as F_i = [x_p, y_p, z_p]^T, and the new face, i.e. the new identity, is generated from the corresponding face pair (i, j);
(3) Generating a new expression: a multi-linear three-dimensional deformation model (3D Morphable Model, 3DMM) is used, in which the shape information model P_i comes from the BFM and the expression information model P_e comes from FaceWarehouse. From the densely corresponding faces and the faces with newly generated identity information, the expression-enhanced three-dimensional face point cloud X with dense correspondence is computed as follows:
X = X̄ + P_i α + P_e β (2),
where X represents the expression-enhanced face, X̄ is the mean vector of the face point cloud, α is the shape parameter vector, and β is the expression parameter vector;
(4) Calculating a deformation displacement vector: calculating a displacement vector between the expression-enhanced face point cloud and the BFM average face point cloud, wherein the calculation method comprises the following steps:
Δ_i = Ψ_i - Ω_i (3),
where Δ = {Δ_i}, i = 1, …, N, represents the set of displacement vectors, Ψ_i is the i-th point of the expression-enhanced 3DMM, Ω_i is the i-th point of the fitted 3DMM, and N is the number of points in the 3DMM point cloud;
(5) Transferring the expression to the original face: the index of the closest point of the original face X_i in the fitted 3DMM is computed using equation (4), and the displacement vector corresponding to that closest-point index is added to the original face to obtain the expression-enhanced original face point cloud X'_i:
X'_i = X_i + Δ_j (5),
where Δ_j is the displacement vector corresponding to the input original face point cloud X_i, and X'_i is the input face after the expression has been migrated onto it. In this way the expression is transferred to the original input three-dimensional face point cloud, yielding three-dimensional face point clouds with different expressions.
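As an illustration of step (2), the sketch below selects the most dissimilar pair among a set of registered faces and blends it into a new identity. Everything here is an assumption for illustration: the faces are random stand-ins, the dissimilarity measure is a summed point-to-point distance, and since formula (1) is not reproduced in the text, the midpoint blend is only one plausible reading of it.

```python
import numpy as np

# Random stand-ins for M registered faces in dense correspondence:
# same point count N and same vertex ordering (illustrative sizes).
rng = np.random.default_rng(0)
M, N = 5, 100
faces = rng.normal(size=(M, N, 3))

# Pick the face pair with the largest non-rigid shape difference,
# measured here as summed point-to-point distance (an assumption).
best_d, best_pair = -1.0, (0, 1)
for a in range(M):
    for b in range(a + 1, M):
        d = np.linalg.norm(faces[a] - faces[b], axis=1).sum()
        if d > best_d:
            best_d, best_pair = d, (a, b)

# Blend the selected pair into a new identity (midpoint blend;
# formula (1) itself is not reproduced in the patent text).
i, j = best_pair
new_identity = 0.5 * (faces[i] + faces[j])
```

Because all faces share the same vertex ordering after registration, the blend is a simple per-point average; without dense correspondence this step would not be well defined.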
The invention establishes dense correspondences among three-dimensional faces, generates new faces (i.e. new identity information), migrates the expression-bearing 3DMM onto the original expressionless faces to generate new expressions, and adds the new identity and expression information to the traditional 3DMM model, thereby enhancing the data.
In the invention, dense correspondences are established on a three-dimensional face database using a keypoint-based algorithm, and a data set is generated from densely corresponding real three-dimensional faces of different identities. To keep the identities as distinct as possible, the face pair with the largest non-rigid shape difference is selected, generating a new face and hence new identity information; the parameters of the fitted 3DMM are then varied to generate new expressions, and the expression-bearing 3DMM is transferred onto the original face, enhancing the expression information and providing more data for later reconstruction and recognition.
The deformation-model-based approach obtains a 3DMM by building a linear combination of three-dimensional faces and adjusting, fitting, and matching it against an input image; fitting the resulting 3DMM to an input two-dimensional face image then allows three-dimensional reconstruction from the fitted parameters. Because face data are high-dimensional and strongly correlated, Principal Component Analysis (PCA) is used to reduce the dimensionality and decorrelate the data, which makes building new face models more accurate and reduces computational complexity.
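The PCA step described above can be sketched as follows. The face matrix is random stand-in data with illustrative sizes (real BFM faces have 53490 points), and the SVD route to the principal components is a standard choice rather than the patent's specific procedure.

```python
import numpy as np

# M registered faces, each flattened to a 3N-dimensional vector
# (random stand-ins with illustrative sizes).
rng = np.random.default_rng(1)
M, N = 50, 200
faces = rng.normal(size=(M, 3 * N))

# Center the data on the mean face, then take the leading right
# singular vectors as the principal components.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 10
basis = vt[:k]                 # top-k shape basis (rows are components)

# Any face is approximated as mean + basis^T @ alpha, with a
# k-dimensional, decorrelated coefficient vector alpha.
alpha = basis @ centered[0]
recon = mean_face + basis.T @ alpha
```

Projecting onto the top-k basis replaces tens of thousands of correlated coordinates with a handful of decorrelated coefficients, which is exactly the dimensionality reduction the paragraph above motivates.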
By training on the faces in a three-dimensional face database, the invention generates more three-dimensional face data and alleviates problems such as large three-dimensional simulation differences for the same individual and poor reconstruction results.
The invention has the beneficial effects that:
according to the invention, the three-dimensional face dense corresponding relation with different identities is established, the maximum non-rigid deformation difference is selected, so that new identity information is generated, the 3DMM with the expression is migrated to the original expressionless face by utilizing the multilinear 3DMM fitting, the expression is enhanced, and the three-dimensional face data is enhanced.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
Fig. 2 is a process of fitting a database face for 3DMM of the present invention.
Fig. 3 is a diagram of different expressions produced by the present invention using multi-linear 3 DMM.
Detailed Description
As shown in fig. 1, a method for three-dimensional face data enhancement includes the following steps:
(1) Establishing dense correspondences for the three-dimensional faces: the CASIA-3D FaceV1 three-dimensional face database is selected, and the frontal-pose face point cloud of each individual is registered to the average face point cloud model of BFM2009 using the optimal-step non-rigid ICP method, yielding three-dimensional face point cloud models in dense correspondence, each with the same dimension: 53490×3.
(2) Generating a new identity: from the densely corresponding face point cloud models established above, the face pair with the largest non-rigid shape difference is selected, and the face with the new identity is obtained by formula (1),
where a three-dimensional face point cloud is represented as F_i = [x_p, y_p, z_p]^T, and the new face, i.e. the new identity, is generated from the corresponding face pair (i, j).
(3) Generating a new expression: a multi-linear three-dimensional deformation model (3D Morphable Model, 3DMM) is adopted, in which the shape information model P_i comes from the BFM and the expression information model P_e comes from FaceWarehouse. From the densely corresponding faces and the faces with newly generated identity information, the expression-enhanced three-dimensional face point cloud X with dense correspondence is computed as follows:
X = X̄ + P_i α + P_e β (2),
where X represents the expression-enhanced face, X̄ is the mean vector of the face point cloud, α is the shape parameter vector, and β is the expression parameter vector; the parameter range is limited to [-0.05, 0.05].
(4) Calculating a displacement vector after deformation: calculating a displacement vector between the expression-enhanced face point cloud and the BFM average face point cloud, wherein the calculation method comprises the following steps:
Δ_i = Ψ_i - Ω_i (3),
where Δ = {Δ_i}, i = 1, …, N, represents the set of displacement vectors, Ψ_i is the i-th point of the expression-enhanced 3DMM, Ω_i is the i-th point of the fitted 3DMM, and N is the number of points in the 3DMM point cloud;
(5) Transferring the expression to the original face: the index of the closest point of the original face X_i in the fitted 3DMM is computed using equation (4), and the displacement vector corresponding to that closest-point index is added to the original face to obtain the expression-enhanced original face point cloud X'_i:
X'_i = X_i + Δ_j (5),
where Δ_j is the displacement vector corresponding to the input original face point cloud X_i, and X'_i is the input face after the expression has been migrated onto it. In this way the expression is transferred to the original input three-dimensional face point cloud, yielding three-dimensional face point clouds with different expressions.
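The displacement computation and expression transfer of steps (4) and (5) can be sketched as below. All point clouds are random stand-ins with illustrative sizes, and the closest-point search of formula (4) is written as a brute-force nearest-neighbour lookup, which is only one way to implement it.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500
# Fitted neutral 3DMM (Omega) and its expression-enhanced version (Psi).
omega = rng.normal(size=(N, 3))
psi = omega + 0.1 * rng.normal(size=(N, 3))
delta = psi - omega                      # formula (3): per-point displacement

# Input original face point cloud X.
original = rng.normal(size=(N, 3))

# Formula (4): index of each original point's closest point in the
# fitted 3DMM (brute-force squared-distance search).
d2 = ((original[:, None, :] - omega[None, :, :]) ** 2).sum(axis=-1)
j = d2.argmin(axis=1)

# Formula (5): add the matched displacement to obtain the
# expression-enhanced original face X'.
enhanced = original + delta[j]
```

For real point clouds of 53490 points, a spatial index (e.g. a k-d tree) would replace the O(N²) distance matrix, but the transfer itself is unchanged.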
In fig. 2, (a) shows the template face, for which the experiment uses the average face of the multi-linear 3DMM model as the registration template; (b) shows the target face, a face for which dense correspondence is to be established; and (c) shows the registered face obtained with the optimal-step non-rigid ICP registration algorithm. Performing this step for every face in the database establishes dense correspondences among the different faces. The image data come from the CASIA-3D FaceV1 three-dimensional face data set.
In fig. 3, the expression parameter β of the multi-linear 3DMM model is varied randomly within the range [-0.05, 0.05], producing 10 natural-looking expressions. The image data come from the Basel Face Model three-dimensional face data set.
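The random-expression generation shown in fig. 3 can be sketched with formula (2), X = X̄ + P_i α + P_e β, sampling β uniformly in [-0.05, 0.05]. The basis matrices and their sizes below are random stand-ins, not the actual BFM and FaceWarehouse bases.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pts, k_id, k_exp = 1000, 80, 29           # illustrative sizes
mean = rng.normal(size=3 * n_pts)           # mean face vector (stand-in)
P_id = rng.normal(size=(3 * n_pts, k_id))   # shape basis (stand-in for BFM)
P_exp = rng.normal(size=(3 * n_pts, k_exp)) # expression basis (stand-in)

alpha = 0.01 * rng.normal(size=k_id)        # one fixed identity
betas, expressions = [], []
for _ in range(10):                          # 10 random expressions, as in fig. 3
    beta = rng.uniform(-0.05, 0.05, size=k_exp)
    X = mean + P_id @ alpha + P_exp @ beta   # formula (2)
    betas.append(beta)
    expressions.append(X.reshape(n_pts, 3))
```

Holding α fixed while resampling β keeps the identity constant and varies only the expression, which is what makes the generated point clouds usable as augmented data for one individual.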
Claims (1)
1. A method for three-dimensional face data enhancement, characterized in that: in a three-dimensional face database, dense correspondences are established for each individual face, the face pair with the largest non-rigid shape difference is selected to generate a new identity, and a new expression is generated using a multi-linear 3DMM; the new identity and expression information enlarges the original three-dimensional face data set. The method comprises the following steps:
(1) Establishing dense correspondences for the three-dimensional face data: a three-dimensional face database is selected, and the frontal-pose face point cloud of each individual is registered to the average face point cloud model of the BFM using the optimal-step non-rigid ICP method, yielding three-dimensional face point cloud models in dense correspondence, each registered face model having the same dimension;
(2) Generating a new identity: from the densely corresponding face point cloud models established above, the face pair with the largest non-rigid shape difference is selected, and the face with the new identity is obtained by formula (1),
where a three-dimensional face point cloud is represented as F_i = [x_p, y_p, z_p]^T, and the new face, i.e. the new identity, is generated from the corresponding face pair (i, j);
(3) Generating a new expression: the multi-linear 3DMM of equation (2) is employed, in which the shape information model P_i comes from the BFM and the expression information model P_e comes from FaceWarehouse; different expressions can be generated by changing the expression parameter β:
X = X̄ + P_i α + P_e β (2),
where X represents the expression-enhanced face, X̄ is the mean vector of the face point cloud, α is the shape parameter vector, and β is the expression parameter vector;
(4) Calculating a deformation displacement vector: calculating a displacement vector between the expression-enhanced face point cloud and the BFM average face point cloud, wherein the calculation method comprises the following steps:
Δ_i = Ψ_i - Ω_i (3),
where Δ = {Δ_i}, i = 1, …, N, represents the set of displacement vectors, Ψ_i is the i-th point of the expression-enhanced 3DMM, Ω_i is the i-th point of the fitted 3DMM, and N is the number of points in the 3DMM point cloud;
(5) Transferring the expression to the original face: the index of the closest point of the original face X_i in the 3DMM fitted in step (1) is computed using equation (4), and the displacement vector corresponding to that closest-point index is added to the original face to obtain the expression-enhanced original face point cloud X'_i:
X'_i = X_i + Δ_j (5),
where Δ_j is the displacement vector corresponding to the input original face point cloud X_i, and X'_i is the input face after the expression has been migrated onto it; in this way the expression is transferred to the original input three-dimensional face point cloud, yielding three-dimensional face point clouds with different expressions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201811056176.3A | 2018-09-11 | 2018-09-11 | Method for enhancing data based on three-dimensional face
Publications (2)
Publication Number | Publication Date |
---|---|
CN109325994A CN109325994A (en) | 2019-02-12 |
CN109325994B true CN109325994B (en) | 2023-03-24 |
Family
ID=65264928
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811056176.3A Active CN109325994B (en) | 2018-09-11 | 2018-09-11 | Method for enhancing data based on three-dimensional face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109325994B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI758662B (en) * | 2019-11-27 | 2022-03-21 | 國立中央大學 | Training data generation method for human facial recognition and data generation apparatus |
US11170203B2 (en) | 2019-11-27 | 2021-11-09 | National Central University | Training data generation method for human facial recognition and data generation apparatus |
CN111160208B (en) * | 2019-12-24 | 2023-04-07 | 陕西西图数联科技有限公司 | Three-dimensional face super-resolution method based on multi-frame point cloud fusion and variable model |
CN111144284B (en) * | 2019-12-25 | 2021-03-30 | 支付宝(杭州)信息技术有限公司 | Method and device for generating depth face image, electronic equipment and medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598879A (en) * | 2015-01-07 | 2015-05-06 | 东南大学 | Three-dimensional face recognition method based on face contour lines of semi-rigid areas |
CN107680158A (en) * | 2017-11-01 | 2018-02-09 | 长沙学院 | A kind of three-dimensional facial reconstruction method based on convolutional neural networks model |
WO2018040099A1 (en) * | 2016-08-31 | 2018-03-08 | 深圳市唯特视科技有限公司 | Three-dimensional face reconstruction method based on grayscale and depth information |
CN108510583A (en) * | 2018-04-03 | 2018-09-07 | 北京华捷艾米科技有限公司 | The generation method of facial image and the generating means of facial image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201023092A (en) * | 2008-12-02 | 2010-06-16 | Nat Univ Tsing Hua | 3D face model construction method |
Non-Patent Citations (1)
Title |
---|
Face recognition method combining an adaptive three-dimensional morphable model with manifold analysis; Wang Jiantao (王渐韬) et al.; Computer Science (《计算机科学》); 2017-06-15; full text * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||