CN111223175A - Three-dimensional face reconstruction method - Google Patents

Three-dimensional face reconstruction method

Info

Publication number
CN111223175A
CN111223175A (application CN201910342345.8A)
Authority
CN
China
Prior art keywords: dimensional, model, dimensional face, face, variable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910342345.8A
Other languages
Chinese (zh)
Other versions
CN111223175B (en)
Inventor
吴炳飞
林俊贤
吴宜樵
吴秉璋
黄至正
钟孟良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spring Foundation of NCTU
Original Assignee
Spring Foundation of NCTU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spring Foundation of NCTU filed Critical Spring Foundation of NCTU
Publication of CN111223175A publication Critical patent/CN111223175A/en
Application granted granted Critical
Publication of CN111223175B publication Critical patent/CN111223175B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T15/205 Image-based rendering (3D [Three Dimensional] image rendering; geometric effects; perspective computation)
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G06V40/172 Classification, e.g. identification
    • G06V40/174 Facial expression recognition
    • G06T2207/30201 Face (indexing scheme for image analysis: subject of image)
    • G06V40/179 Human faces, e.g. facial parts, sketches or expressions metadata assisted face recognition

Abstract

The invention relates to a three-dimensional face reconstruction method comprising the following steps. First, a two-dimensional face image is input and processed with a neural network model to locate a plurality of two-dimensional feature points on the image. The two-dimensional feature points are then converted into corresponding three-dimensional coordinates according to a similarity calculation, and an average three-dimensional face model is fine-tuned with these coordinates to form a first three-dimensional face model. The three-dimensional face shape of the first three-dimensional face model is refined through repeated multi-stage operations, from low resolution to high resolution, to obtain a second three-dimensional face model. Face color compensation is applied to the second three-dimensional face model to obtain a third three-dimensional face model, and finally a three-dimensional face image is output according to the third three-dimensional face model.

Description

Three-dimensional face reconstruction method
Technical Field
The present invention relates to a three-dimensional face reconstruction method, and more particularly to a method that reconstructs a three-dimensional face model from a single input two-dimensional face image, so that images of the face at various angles can be viewed by rotating the model.
Background
In recent years, three-dimensional face modeling and reconstruction techniques have attracted increasing attention in computer vision and computer graphics. Most prior work in this field has focused on how to reconstruct a three-dimensional shape from two-dimensional images so as to model and reconstruct a three-dimensional face, and this remains the main direction of technical development.
However, prior-art methods typically require multiple images or video frames to initialize the three-dimensional face reconstruction, while in many applications only a single two-dimensional image is available. Some simulation methods do reconstruct a three-dimensional face from a single picture, but the result is not realistic and is valid only at the specific angle of the input image, so views of the face at other angles cannot be provided.
In addition, some past studies have proposed more accurate three-dimensional face algorithms that first fit the whole face and then fit specific regions such as the eyes, mouth, and nose. However, these methods take a long time to compute and still fail to produce an accurate fit, so they are difficult to put into practical use and do not meet industry requirements.
Therefore, there is a need for a three-dimensional face reconstruction method that can reconstruct a three-dimensional face from a single two-dimensional input image, and do so faster and more accurately.
Disclosure of Invention
One objective of the present invention is to provide a three-dimensional face reconstruction method that reconstructs a three-dimensional face image from a single input two-dimensional face image; the resulting three-dimensional face model can further be rotated to various angles to obtain three-dimensional face images at different angles.
Another objective of the present invention is to provide a three-dimensional face reconstruction method that combines feature-point-to-three-dimensional-coordinate conversion, face orientation estimation, shape fine-tuning, and color compensation to obtain a rotatable three-dimensional face image; that is, from one input two-dimensional face image, a three-dimensional face image that can be rotated to various angles is reconstructed.
To achieve the above objectives, the present invention provides a three-dimensional face reconstruction method comprising the following steps. First, a two-dimensional face image is input and processed with a neural network model to perform two-dimensional feature point positioning, locating a plurality of two-dimensional feature points on the image. The plurality of two-dimensional feature points are then converted into a plurality of three-dimensional coordinates according to a similarity calculation, and the three-dimensional coordinates form a first (average) three-dimensional face model. The three-dimensional face shape of the first (average) three-dimensional face model is fine-tuned through repeated multi-stage operations from low resolution to high resolution to obtain a second three-dimensional face model. Face color compensation is then applied to the second three-dimensional face model to obtain a third three-dimensional face model, and finally a three-dimensional face image is output according to the third three-dimensional face model.
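The steps above can be sketched as a minimal pipeline. This is an illustrative stand-in, not the patent's implementation: the landmark detector and 2D-to-3D lifting are dummy placeholders, the shape refinement and color compensation stages are omitted, and all names and sizes are invented for illustration.

```python
import numpy as np

def locate_2d_landmarks(image):
    # Placeholder for the convolutional-neural-network landmark detector:
    # returns 68 dummy (x, y) feature points spread over the image.
    h, w = image.shape[:2]
    return np.stack([np.linspace(0, w - 1, 68), np.linspace(0, h - 1, 68)], axis=1)

def lift_to_3d(landmarks_2d):
    # Placeholder for the similarity-based 2D -> 3D conversion:
    # here we simply append a zero depth coordinate.
    return np.hstack([landmarks_2d, np.zeros((len(landmarks_2d), 1))])

def reconstruct(image):
    pts_2d = locate_2d_landmarks(image)   # steps 1-2: feature point positioning
    pts_3d = lift_to_3d(pts_2d)           # step 3: three-dimensional coordinates
    # steps 4-5 (multi-stage shape fine-tuning, color compensation) omitted
    return pts_3d

face_pts = reconstruct(np.zeros((480, 640, 3)))
```

Each placeholder would be replaced by the corresponding stage described in the embodiments below.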
In an embodiment of the present invention, the third three-dimensional face model is a color three-dimensional face model; in particular, the three-dimensional face model at each level is a color face model.
In an embodiment of the present invention, the first three-dimensional face variable model forms different first three-dimensional face models as linear combinations of a plurality of feature templates around an average model, and the second three-dimensional face variable model forms different second three-dimensional face models as linear combinations of the feature templates around the first three-dimensional face model.
In an embodiment of the present invention, the first (average) three-dimensional face variable model and the second three-dimensional face model are organized as a multi-resolution three-dimensional face variable model, which is fitted sequentially from the lowest resolution to the highest resolution to obtain the first three-dimensional face variable model and the second three-dimensional face model.
In an embodiment of the present invention, after the two-dimensional feature points have been converted into three-dimensional feature points, the shape of the average model is fine-tuned with the obtained three-dimensional feature points. The average model is an average three-dimensional face variable model formed by component extraction from a three-dimensional face database, and the shape of the three-dimensional face model is fine-tuned by adjusting a linear combination of a plurality of feature templates. These steps are iterated from a low-resolution face model to a high-resolution face model, which reduces the amount of computation and the time required.
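The "linear combination of a plurality of feature templates" can be written as shape(α) = mean + Σᵢ αᵢ·bᵢ. A minimal sketch with random placeholder data follows; the array sizes and names are invented for illustration, not taken from the patent's face database:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_templates = 500, 10                  # illustrative sizes

mean_shape = rng.normal(size=3 * n_vertices)       # average model, flattened (x1, y1, z1, ...)
templates = rng.normal(size=(3 * n_vertices, n_templates))  # feature templates (e.g. from component extraction)

def morphable_shape(alpha):
    # A specific face = average model + linear combination of the feature templates.
    return mean_shape + templates @ alpha

# With all coefficients zero, the model reduces to the average face.
flat = morphable_shape(np.zeros(n_templates))
```

Fitting then amounts to searching for the coefficient vector `alpha` (plus pose parameters) whose shape best matches the input image.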
In an embodiment of the present invention, the fine-tuning step uses Newton's method or another optimization method to find the projection of the first-level three-dimensional variable model onto the two-dimensional plane that is closest to the two-dimensional feature points, thereby estimating a three-dimensional rotation matrix, a two-dimensional offset, a focal length, and three-dimensional variable model parameters, and obtaining the first three-dimensional face model of the first level.
In an embodiment of the invention, when the resolution is at the second level, the three-dimensional rotation matrix, two-dimensional offset, focal length, and three-dimensional variable model parameters are applied to a second three-dimensional variable model; the error between each point of the second three-dimensional variable model projected onto the two-dimensional plane and the nearest image contour point is calculated, and the three-dimensional variable model parameters are adjusted using this error value.
In an embodiment of the present invention, when the resolution is greater than two levels, the projection of the second three-dimensional variable model onto the two-dimensional plane is calculated using the three-dimensional rotation matrix, the two-dimensional offset, the focal length, the error value of the image contour points, the two-dimensional feature point positions, the error value of the color projection, and the set of three-dimensional variable model parameters.
In an embodiment of the present invention, when the resolution is at level N, the three-dimensional rotation matrix, two-dimensional offset, focal length, and three-dimensional variable model parameters are applied in turn to the first through (N-1)-th three-dimensional variable models; the error between each point of the (N-1)-th three-dimensional variable model projected onto the two-dimensional plane and the nearest image contour point is calculated in a loop, and this error is used to adjust the three-dimensional variable model parameters until it falls below a predetermined error value. When the resolution is higher than the second level, the three-dimensional face shape is trimmed based on the image contour error as well as the errors of the two-dimensional feature point positions and the color projection.
In order to make the aforementioned and other objects, features and advantages of the invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 shows the steps of a three-dimensional face reconstruction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional face reconstruction method according to an embodiment of the present invention;
FIG. 3 is a two-dimensional feature point location illustration; and
FIG. 4 is a five-level resolution diagram of the multi-resolution three-dimensional face variable model according to the present invention.
Detailed Description
The foregoing and other features, aspects, and utilities of the present invention will be apparent from the following detailed description of a preferred embodiment, read in conjunction with the accompanying drawings.
Referring to fig. 1 and fig. 2, fig. 1 is a three-dimensional face reconstruction method according to an embodiment of the invention, and fig. 2 is a schematic diagram of the three-dimensional face reconstruction method according to an embodiment of the invention. Referring to fig. 1, the three-dimensional face reconstruction method of the present invention includes the following steps:
referring to step S210 of fig. 1, a two-dimensional face image is input.
Referring to step S220 of fig. 1, two-dimensional feature point positioning is performed on the two-dimensional face image to locate the two-dimensional feature points 200. As shown in fig. 2, key face positions (such as the eyes, nose, and mouth) are located by a face alignment method.
Please refer to fig. 3, which illustrates two-dimensional feature point positioning. As shown in fig. 3, the two-dimensional face image is input into a convolutional neural network model, which processes the image and outputs the positions of a plurality of two-dimensional feature points 200.
Referring to step S230 of fig. 1, the two-dimensional feature points 200 are transformed into three-dimensional coordinates: the two-dimensional feature points 200 are converted into corresponding three-dimensional coordinates according to similarity calculation, and the three-dimensional coordinates are combined into a first (average) three-dimensional face model.
In an embodiment of the present invention, the first (average) three-dimensional face model is computed by optimizing a cost function with Newton's method or another optimization method: the projection of the first-level three-dimensional variable model onto the two-dimensional plane is brought as close as possible to the positions of the two-dimensional feature points 200, yielding the three-dimensional rotation matrix, two-dimensional offset, focal length, and three-dimensional variable model parameters that define the first (average) three-dimensional face model of the first level.
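The fitting can be illustrated with a synthetic example: recover a rotation, a focal length (scale), and a two-dimensional offset so that the projected model landmarks match the observed feature points. This sketch uses SciPy's least-squares solver in place of a hand-written Newton iteration and omits the shape parameters; the scaled orthographic camera, the names, and the sizes are assumptions, not the patent's formula.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)
model_pts = rng.normal(size=(68, 3))               # landmark vertices of the 3D model
true_R = Rotation.from_euler("xyz", [0.1, -0.2, 0.05]).as_matrix()
true_f, true_t = 2.0, np.array([3.0, -1.0])
target_2d = true_f * (model_pts @ true_R.T)[:, :2] + true_t  # observed 2D feature points

def residuals(p):
    # p = [3 rotation angles, focal length, 2D offset]: the pose part of the
    # parameter set (rotation matrix, offset, focal length) named in the text.
    R = Rotation.from_euler("xyz", p[:3]).as_matrix()
    f, t = p[3], p[4:6]
    projected = f * (model_pts @ R.T)[:, :2] + t   # scaled orthographic projection
    return (projected - target_2d).ravel()

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0]))
```

Because the residuals vanish at the true pose, the solver recovers the focal length and offset from a neutral starting guess; in the full method the shape coefficients would be appended to `p`.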
Referring to step S240 of fig. 1, the three-dimensional face shape of the first (average) three-dimensional face model is fine-tuned to obtain a second three-dimensional face model. The parameters of the three-dimensional face variable model, together with the rotation, scaling, and translation parameters of the model, are adjusted using the key points and the face image information so that the three-dimensional face variable model matches the two-dimensional face image as closely as possible.
Still referring to step S230 of fig. 1 (conversion of the two-dimensional feature points into three-dimensional coordinates) and to the fine-tuning of step S240, the first three-dimensional face model is formed by component extraction from a three-dimensional face database. The first three-dimensional face variable model forms different first three-dimensional face models as linear combinations of a plurality of feature templates around an average model, and the second three-dimensional face variable model likewise forms different second three-dimensional face models as linear combinations of those feature templates. Accordingly, in this embodiment, when the resolution is at the second level, the three-dimensional rotation matrix, two-dimensional offset, focal length, and three-dimensional variable model parameters are applied to a second three-dimensional variable model; the error between each point of the second three-dimensional variable model projected onto the two-dimensional plane and the nearest image contour point is calculated, and this error value is used to adjust the three-dimensional variable model parameters.
When the resolution is greater than two levels, the projection of the second-level three-dimensional variable model onto the two-dimensional plane is calculated using the three-dimensional rotation matrix, the two-dimensional offset, the focal length, the image contour error, the two-dimensional feature point positions, the color projection error, and the set of three-dimensional variable model parameters.
Referring to fig. 4, the first three-dimensional face variable model and the second three-dimensional face model are organized as a multi-resolution three-dimensional face variable model, fitted sequentially from the lowest resolution to the highest resolution to obtain the first three-dimensional face variable model and the second three-dimensional face model. To speed up the computation, the model is divided into N resolution levels and processed from low to high resolution, where low resolution means the first three-dimensional face model has fewer points and high resolution means it has more points.
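The coarse-to-fine schedule can be sketched as follows. Here `refine` is a dummy stand-in for the per-level fitting (it just halves a mock error), and the per-level vertex counts are invented for illustration:

```python
import numpy as np

def refine(alpha, n_vertices, error):
    # Stand-in for fitting the level's variable model: the parameters fitted
    # at the coarser level seed this level, so each pass only refines them.
    return alpha, error / 2.0

level_sizes = [100, 400, 1600, 6400, 25600]   # five levels, fewest to most points
alpha, error = np.zeros(10), 1.0
for n_vertices in level_sizes:                # lowest resolution first
    alpha, error = refine(alpha, n_vertices, error)
```

Because most iterations run on the small low-resolution models and the dense levels start from an already good estimate, the total computation time drops compared with fitting the finest model from scratch.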
Referring to fig. 4, in this embodiment the multi-level resolution three-dimensional face variable model comprises first- through fifth-level resolution models. By running the above loop through to the fifth (last) level, a three-dimensional face model close to the two-dimensional face image is obtained. Fine-tuning the three-dimensional face shape of the first three-dimensional face model thus comprises repeated multi-stage operations from low resolution to high resolution.
In another embodiment of the present invention, when the resolution is at level N, the three-dimensional rotation matrix, two-dimensional offset, focal length, and three-dimensional variable model parameters are applied in turn to the first-level through (N-1)-th three-dimensional variable models; a loop computes the error between each point of the (N-1)-th model projected onto the two-dimensional plane and the nearest image contour point, and this error is used to adjust the three-dimensional variable model parameters.
Referring to step S260 of fig. 1, a three-dimensional face image is output according to the third three-dimensional face model.
In the embodiment of the present invention, after the two-dimensional feature points have been converted into three-dimensional feature points, the shape of the average model is fine-tuned with the obtained three-dimensional feature points. The average model is an average three-dimensional face variable model formed by component extraction from a three-dimensional face database; the shape of the three-dimensional face model is fine-tuned by adjusting a linear combination of a plurality of feature templates, and these steps are iterated from a low-resolution face model to a high-resolution face model to reduce the amount of computation and the time required.
Therefore, the three-dimensional face reconstruction method provided by the invention reconstructs a three-dimensional face image from a single input two-dimensional face image, and the resulting three-dimensional face model can be rotated to various angles so that three-dimensional face images at different angles can be obtained and displayed.
The above description is only a preferred embodiment of the present invention; all equivalent variations made according to the specification and claims of the present invention are intended to fall within its scope.

Claims (20)

1. A three-dimensional face reconstruction method, comprising:
inputting a two-dimensional face image;
carrying out two-dimensional feature point positioning on the two-dimensional face image, and positioning a plurality of two-dimensional feature point positions on the two-dimensional face image;
converting the plurality of two-dimensional feature points into a plurality of three-dimensional coordinates, calculating the plurality of two-dimensional feature points according to similarity, converting the plurality of two-dimensional feature points into a plurality of corresponding three-dimensional coordinates, and forming the plurality of three-dimensional coordinates into a first three-dimensional face model;
fine-tuning a three-dimensional face shape of the first three-dimensional face model to obtain a second three-dimensional face model;
carrying out face color compensation on the second three-dimensional face model to obtain a third three-dimensional face model; and
outputting a three-dimensional face image according to the third three-dimensional face model.
2. The method of claim 1, wherein the two-dimensional feature point localization method comprises a neural network model.
3. The method of claim 1, wherein the third three-dimensional face model is a color three-dimensional face model.
4. The method of claim 1, wherein, in the step of converting the two-dimensional feature points into three-dimensional coordinates and the step of fine-tuning a three-dimensional face shape of the first three-dimensional face model, the first three-dimensional face model is subjected to principal component analysis using a three-dimensional face database to form a first three-dimensional face variable model, and the second three-dimensional face model is subjected to principal component analysis using the three-dimensional face database to form a second three-dimensional face variable model.
5. The method of claim 4, wherein the first three-dimensional face variable model is based on an average model and uses linear combinations of a plurality of feature templates to form different first three-dimensional face models, and the second three-dimensional face variable model is based on the average model and uses linear combinations of the plurality of feature templates to form different second three-dimensional face models.
6. The method of claim 5, wherein the first three-dimensional face variable model and the second three-dimensional face model form a multi-resolution three-dimensional face variable model, which is operated sequentially from the lowest resolution to the highest resolution to obtain the second three-dimensional face variable model and the third three-dimensional face model.
7. The method of claim 6, wherein the multi-resolution three-dimensional face-changeable model comprises one to five levels of resolution three-dimensional face-changeable models.
8. The method of claim 1, wherein, when compensating the color of the face of the second three-dimensional face model, any three points on the second three-dimensional face model define a triangular face, and whether the normal vector of each triangle faces outward is calculated to determine whether to fill that face with color.
9. The method as claimed in claim 8, wherein, when the normal vector of the triangle faces outward, the three points are determined to be unoccluded and visible; the second three-dimensional face model is projected onto the two-dimensional plane, the corresponding color value is found according to the coordinate position on the two-dimensional plane, and the color value is pasted back onto the second three-dimensional face model.
10. The method of claim 8, wherein, when the normal vector of the triangle faces inward, the triangle is a back face and the three points are determined to be hidden.
11. The method of claim 9, wherein, after the colors are filled, the mean and standard deviation of the colors are calculated; if a color's distance from the mean exceeds a predetermined value, the color at the corresponding two-dimensional position is not taken as the color of the three points, and a color interpolated from the surrounding colors is filled instead.
12. The method of claim 1, wherein the step of transforming the two-dimensional feature points into three-dimensional coordinates comprises using Newton's method to find the point of a first-level three-dimensional variable model projected on the two-dimensional plane that is closest to a two-dimensional feature point, thereby determining a three-dimensional rotation matrix, a two-dimensional offset, a focal length, and three-dimensional variable model parameters to obtain the first three-dimensional face model of the first level.
13. The method of claim 12, wherein the fine-tuning of the three-dimensional face shape of the first three-dimensional face model comprises using the three-dimensional rotation matrix, the two-dimensional offset, the focal length, and the three-dimensional variable model parameter set for a second-level three-dimensional variable model when the resolution is two-level, calculating an error value between a point on the two-dimensional plane where the second-level three-dimensional variable model is projected and a nearest image contour point, and using the error value to adjust the three-dimensional variable model parameters.
14. The method of claim 12, wherein the fine-tuning of the three-dimensional face shape of the first three-dimensional face model comprises computing a point on the second three-dimensional variable model projected onto the two-dimensional plane using the three-dimensional rotation matrix, the two-dimensional deviation, the focal length, the error value of the image contour point, the position of the two-dimensional feature point, the error value of the color projection, and the set of three-dimensional variable model parameters for the second three-dimensional variable model when the resolution is greater than two levels.
15. The method of claim 12, wherein, when the resolution is N levels, the fine-tuning step of the three-dimensional face shape of the first three-dimensional face model sequentially applies the three-dimensional rotation matrix, the two-dimensional offset, the focal length, and the three-dimensional variable model parameters to the first through (N-1)-th three-dimensional variable models, circularly calculates an error value between a point of the (N-1)-th three-dimensional variable model projected on the two-dimensional plane and the nearest image contour point, and uses the error value to adjust the three-dimensional variable model parameters.
16. The method as claimed in claim 13 or 15, wherein the adjustment of the three-dimensional variable model parameters continues until the error value falls below a predetermined error value.
17. The method of claim 1, wherein the fine-tuning of the three-dimensional face shape of the first three-dimensional face model comprises repeating a multi-stage operation on the three-dimensional face shape of the first three-dimensional face model from low resolution to high resolution.
18. The method of claim 17, wherein the low resolution corresponds to a smaller number of points in the first three-dimensional face model.
19. The method of claim 17, wherein the high resolution corresponds to a larger number of points in the first three-dimensional face model.
20. The method of claim 1, wherein the first three-dimensional face model is an average face model.
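Claims 13 through 19 together describe a coarse-to-fine fitting loop: at each resolution level, the variable (morphable) model's vertices are projected onto the image plane with the rotation matrix, two-dimensional offset, and focal length; each projected point is matched to its nearest image contour point; and the resulting error value drives an update of the model parameters until it falls below a threshold. A minimal sketch of that loop, assuming a pinhole projection and finite-difference parameter updates; all function names, the optimizer, and the default values are illustrative, not taken from the patent:

```python
import numpy as np

def project_points(vertices, R, offset_2d, focal):
    """Project 3-D model vertices onto the 2-D image plane (pinhole model)."""
    rotated = vertices @ R.T                         # (n, 3)
    return focal * rotated[:, :2] / rotated[:, 2:3] + offset_2d

def contour_error(projected, contour_points):
    """Mean distance from each projected point to its nearest contour point."""
    d = np.linalg.norm(projected[:, None, :] - contour_points[None, :, :], axis=2)
    return d.min(axis=1).mean()

def fit_level(shape_fn, params, R, offset_2d, focal, contour_points,
              lr=0.1, tol=1e-3, max_iter=200, eps=1e-4):
    """Adjust model parameters at one resolution level (claims 13 and 16)."""
    params = params.copy()
    for _ in range(max_iter):
        err = contour_error(
            project_points(shape_fn(params), R, offset_2d, focal), contour_points)
        if err < tol:                  # stop below the predetermined error value
            break
        grad = np.zeros_like(params)   # finite-difference gradient of the error
        for i in range(params.size):
            p = params.copy()
            p[i] += eps
            e = contour_error(
                project_points(shape_fn(p), R, offset_2d, focal), contour_points)
            grad[i] = (e - err) / eps
        params -= lr * grad
    return params

def coarse_to_fine_fit(shape_fns, params, R, offset_2d, focal, contour_points):
    """Apply each resolution level in turn, low to high (claims 15 and 17)."""
    for shape_fn in shape_fns:
        params = fit_level(shape_fn, params, R, offset_2d, focal, contour_points)
    return params
```

Here each entry of `shape_fns` maps a parameter vector to that level's vertex array; a lower level simply has fewer vertices, per claims 18 and 19. A practical implementation would use a k-d tree for the nearest-contour search and an analytic Jacobian instead of finite differences.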
CN201910342345.8A 2018-11-27 2019-04-26 Three-dimensional face reconstruction method Active CN111223175B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW107142148A TWI712002B (en) 2018-11-27 2018-11-27 A 3d human face reconstruction method
TW107142148 2018-11-27

Publications (2)

Publication Number Publication Date
CN111223175A true CN111223175A (en) 2020-06-02
CN111223175B CN111223175B (en) 2023-07-04

Family

ID=70770051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910342345.8A Active CN111223175B (en) 2018-11-27 2019-04-26 Three-dimensional face reconstruction method

Country Status (3)

Country Link
US (1) US10803654B2 (en)
CN (1) CN111223175B (en)
TW (1) TWI712002B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102054291A (en) * 2009-11-04 2011-05-11 厦门市美亚柏科信息股份有限公司 Method and device for reconstructing three-dimensional face based on single face image
US8644596B1 (en) * 2012-06-19 2014-02-04 Google Inc. Conversion of monoscopic visual content using image-depth database
TW201516375A (en) * 2013-10-21 2015-05-01 Univ Nat Taiwan Science Tech Method and system for three-dimensional data acquisition
CN106780713A (en) * 2016-11-11 2017-05-31 吴怀宇 A kind of three-dimensional face modeling method and system based on single width photo
CN107316340A (en) * 2017-06-28 2017-11-03 河海大学常州校区 A kind of fast human face model building based on single photo

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7956823B2 (en) * 2001-05-30 2011-06-07 Sharp Kabushiki Kaisha Color display device, color compensation method, color compensation program, and storage medium readable by computer
TWM364920U (en) * 2009-04-10 2009-09-11 Shen-Jwu Su 3D human face identification device with infrared light source
TWI419058B (en) * 2009-10-23 2013-12-11 Univ Nat Chiao Tung Image recognition model and the image recognition method using the image recognition model
TWI553565B (en) * 2014-09-22 2016-10-11 銘傳大學 Utilizing two-dimensional image to estimate its three-dimensional face angle method, and its database establishment of face replacement and face image replacement method
CN104966316B (en) * 2015-05-22 2019-03-15 腾讯科技(深圳)有限公司 A kind of 3D facial reconstruction method, device and server
JP6754619B2 (en) * 2015-06-24 2020-09-16 三星電子株式会社Samsung Electronics Co.,Ltd. Face recognition method and device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739168A (en) * 2020-06-30 2020-10-02 华东交通大学 Large-scale three-dimensional face synthesis method with suppressed sample similarity
CN111739168B (en) * 2020-06-30 2021-01-29 华东交通大学 Large-scale three-dimensional face synthesis method with suppressed sample similarity

Also Published As

Publication number Publication date
US10803654B2 (en) 2020-10-13
TW202020813A (en) 2020-06-01
CN111223175B (en) 2023-07-04
TWI712002B (en) 2020-12-01
US20200167990A1 (en) 2020-05-28

Similar Documents

Publication Publication Date Title
Samaras et al. Incorporating illumination constraints in deformable models for shape from shading and light direction estimation
CN112215050A (en) Nonlinear 3DMM face reconstruction and posture normalization method, device, medium and equipment
US8988435B1 (en) Deforming a skin representation using muscle geometries
CN104157010A (en) 3D human face reconstruction method and device
CN110060329B (en) Mobile terminal human body model reconstruction method based on color depth video stream data
US11568601B2 (en) Real-time hand modeling and tracking using convolution models
CN113962858A (en) Multi-view depth acquisition method
US20230267686A1 (en) Subdividing a three-dimensional mesh utilizing a neural network
US11403807B2 (en) Learning hybrid (surface-based and volume-based) shape representation
CN116109757A (en) Hash coding dynamic three-dimensional human body rendering synthesis method based on inner hidden coordinates
CN116416376A (en) Three-dimensional hair reconstruction method, system, electronic equipment and storage medium
CN111223175B (en) Three-dimensional face reconstruction method
CN116912148B (en) Image enhancement method, device, computer equipment and computer readable storage medium
CN113888694A (en) SDF field micro-renderable-based transparent object reconstruction method and system
Vyatkin Method of binary search for image elements of functionally defined objects using graphics processing units
KR20210147626A (en) Apparatus and method for synthesizing 3d face image using competitive learning
Schaurecker et al. Super-resolving Dark Matter Halos using Generative Deep Learning
US20230145498A1 (en) Image reprojection and multi-image inpainting based on geometric depth parameters
CN115409949A (en) Model training method, visual angle image generation method, device, equipment and medium
Xiao et al. 3d face reconstruction via feature point depth estimation and shape deformation
CN115082636B (en) Single image three-dimensional reconstruction method and device based on mixed Gaussian network
US20220383573A1 (en) Frame interpolation for rendered content
CN117292041B (en) Semantic perception multi-view three-dimensional human body reconstruction method, device and medium
CN114219900B (en) Three-dimensional scene reconstruction method, reconstruction system and application based on mixed reality glasses
US20220165029A1 (en) Computer Vision Systems and Methods for High-Fidelity Representation of Complex 3D Surfaces Using Deep Unsigned Distance Embeddings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant