CN110717978B - Three-dimensional head reconstruction method based on single image

Info

Publication number: CN110717978B
Authority: CN (China)
Prior art keywords: hair, model, loss, face, dimensional
Legal status: Active (granted)
Application number: CN201911098677.2A
Other languages: Chinese (zh)
Other versions: CN110717978A
Inventors: 齐越, 程利刚, 杜文祥, 包永堂
Assignee: Qingdao Research Institute of Beihang University; Beihang University
Priority date: 2019-07-16
Filing date: 2019-11-12
Publication of CN110717978A (application): 2020-01-21
Publication of CN110717978B (grant): 2023-07-18

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a three-dimensional head reconstruction method based on a single image, belonging to the field of computer vision. The method takes a single frontal face photograph as input; performs three-dimensional face reconstruction by regressing feature information from the photograph with a deep-learning method; completes the face model to obtain a full head model; segments the hair region with a deep-learning method; obtains an orientation map of the hair region; regresses hair-strand information with a deep-learning method; matches the regressed strand information against a hair-strand database to complete the hair reconstruction; and aligns the hair model coordinate system with the head model coordinate system to obtain the complete three-dimensional head model. Compared with existing three-dimensional head reconstruction methods, the invention achieves automatic three-dimensional face and hair reconstruction with a more realistic result, enabling richer applications of the reconstructed three-dimensional model.

Description

Three-dimensional head reconstruction method based on single image
Technical Field
The invention belongs to the field of computer vision.
Background
With the development of virtual reality technology, human body reconstruction has become a hotspot in the fields of computer graphics and animation. The human head carries more expression and hairstyle variation than other body parts, so three-dimensional head reconstruction is also one of the difficulties in these fields. Most three-dimensional models in use today are still modeled manually by artists; although the models obtained this way are finer and can achieve lifelike effects, the drawback is obvious: the whole modeling process consumes a great deal of manpower and time. It is therefore very important to find a head reconstruction method with strong applicability, little dependence on the environment, and a high degree of automation.
In recent years, many researchers have explored image-based reconstruction methods, and many excellent results have emerged. Compared with conventional reconstruction methods, photo-based modeling is relatively inexpensive, so it plays a growing role in game production, film and television production, and scene reproduction. Several relatively mature photo-based face and hair reconstruction methods already exist, such as those built on the face morphable model (3D Morphable Model, hereinafter 3DMM). Because images are easy to obtain and place few demands on software and hardware, image-based reconstruction has great research value and applicability compared with other approaches.
Among existing methods for reconstructing a three-dimensional face from a single image, one approach detects 68 facial feature points in the input face image, solves a projection equation from the corresponding feature points of a standard face morphable model (3DMM), and finally solves for the 3DMM coefficients by optimizing an energy function to complete the face reconstruction. However, because of the complexity of the face structure, occlusion is inevitable in photographs; the feature points on the side of the face in particular are often occluded, so the detected facial feature points contain errors, the solved 3DMM coefficients are inaccurate, and an accurate face model is not obtained.
In addition, among existing methods for reconstructing hair from a single image, the complexity of the hair structure means that little information about hair growth direction and structure can be obtained from the image alone. To address this, one class of reconstruction methods adds extra auxiliary information (such as hair growth information and strand structure information). Although such methods recover the hair structure well, the reconstruction quality depends heavily on the auxiliary information, and because this information must be supplied by hand, the reconstruction cannot be automated and its efficiency suffers.
There are also methods that perform automatic hair reconstruction with deep learning. To ensure that most hairstyles are covered when training the neural network, such a method builds a very large hair database, trains the network on it, reconstructs a hair model directly end to end, and finally densifies the hair model by interpolation and similar operations. However, because of the complexity of the hair structure and the characteristics of neural networks, even a very large hair database can hardly cover the characteristics of all hair models; the hair model obtained directly from the network meets the reconstruction requirement only in overall outline, errors remain in local regions of the hair, and an effect similar to the input image cannot be achieved.
Disclosure of Invention
Aiming at the problems that existing single-image three-dimensional face reconstruction methods produce large face model errors and cannot be fully automated, the invention provides a method that automatically and completely reconstructs an accurate three-dimensional head model from a single face photograph, realized by the following technical scheme:
a three-dimensional head reconstruction method based on a single image comprises the following steps:
step A, cutting an input photo;
step B, designing and building an R3M (ResNet-3 DMM) neural network;
c, inputting the face photo obtained in the step A into the trained network in the step B, and regressing to obtain coefficients of a face deformation model, so as to finish reconstruction of the three-dimensional face, and supplementing the three-dimensional face model by using a grid supplementing algorithm to generate a complete head model;
step D, designing and building a PSP-HairNet convolutional neural network;
e, after the original photo is subjected to size modification, inputting the PSP-HairNet neural network trained in the step D to obtain an image of the hair region;
and F, obtaining the direction information of each pixel in the hair area, and generating a direction diagram.
G, obtaining a USC-P Hair model database of more samples, generating a pattern-Hair model data set by using the USC-P Hair model database, designing and constructing a Hair-Re convolutional neural network, and training the network by using the generated pattern-Hair model data set;
step H, inputting the obtained directional diagram in the step F into the trained Hair-Re neural network in the step G, and carrying out regression to obtain a Hair model;
step I, clustering the hair models obtained in the step H to obtain key hair wires of the hair models, and obtaining the matched hair models by matching the key hair wires with the hair wires in the hair database;
step J, constructing a three-dimensional direction field by utilizing the hair model obtained in the step I, fusing the direction information of the hair model, and growing a final three-dimensional hair model;
and K, unifying the three-dimensional head model obtained in the step C and the hair model obtained in the step J to the same coordinate system, and completing rendering display.
Further, the network structure used in step B to regress the 3DMM coefficients uses the following loss function:

Loss_{R3M} = a·Loss_{3DMM} + b·Loss_{landmark}

where Loss_{3DMM} is the loss on the 3DMM coefficients, Loss_{landmark} is the loss on the 68 facial landmark points of the face model reconstructed from the 3DMM coefficients, and a and b are the weights of the two loss terms;

Loss_{3DMM} = ||(α_{pred} - α_{GT})·w_1||^2
Loss_{landmark} = ||(v_{pred} - v_{GT})·w_2||^2

where α_{pred} is the predicted 3DMM coefficient vector, α_{GT} is the sample's ground-truth 3DMM coefficient vector, w_1 assigns a separate weight to each coefficient, v_{pred} denotes the 68 facial landmark points of the 3DMM model reconstructed from the predicted coefficients α_{pred}, v_{GT} denotes the sample's ground-truth 68 facial landmark points, and w_2 assigns a separate weight to each landmark point.
Further, the network structure used in step D for hair-region detection uses the following loss function:

Loss_{mask} = ||Mask_{pred} - Mask_{GT}||^2

where Mask_{pred} is the predicted hair region and Mask_{GT} is the ground-truth hair region.
Further, the network structure used in step G to regress the hair model uses the following loss function:

Loss_{hair} = ||(S_{pred} - S_{GT})·w||^2

where S_{pred} is the predicted hair model information, S_{GT} is the ground-truth hair model information, and w is a weighting term.
Further, the clustering of hair strands in step I uses the following distance formula:

d_1 = α·H(s_1, s_2) + β·E(s_1, s_2)

where H(s_1, s_2) is the Hausdorff distance between two hair strands s_1 and s_2, E(s_1, s_2) is the Euclidean distance between them, and α and β are the weights of the two distance terms.
Further, in the network structure for regressing the 3DMM coefficients, the weight coefficients a and b both take the value 1.
Further, in the distance formula, the weight coefficients α and β both take the value 0.5.
Compared with the prior art, the invention has the following advantages and positive effects:
Compared with existing face reconstruction techniques that solve for the 3DMM coefficients from detected facial feature points, the invention builds a convolutional neural network to extract facial features, trains it on a large-scale face database, and solves for the 3DMM coefficients with deep learning, so the reconstructed face is more accurate than those produced by prior techniques.
Compared with existing image-based hair reconstruction techniques, the invention completes hair reconstruction automatically and achieves more accurate reconstruction results.
Drawings
FIG. 1 is a schematic flow chart of a three-dimensional head reconstruction method based on a single image according to an embodiment of the invention;
FIG. 2 is a schematic flow chart of the face reconstruction module according to an embodiment of the invention;
FIG. 3 is a flow chart of the hair reconstruction module according to an embodiment of the invention;
FIG. 4 shows an input image with the facial feature points obtained according to an embodiment of the invention;
FIG. 5 shows the hair region derived from the input image by deep learning according to an embodiment of the invention;
FIG. 6 shows the orientation map extracted from the hair region according to an embodiment of the invention;
FIG. 7 shows the key strand information extracted from the orientation map by deep learning and clustering according to an embodiment of the invention;
FIG. 8 shows the rendering result of the automatically generated head model and hair model according to an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 shows the flow of three-dimensional head reconstruction based on a single image according to an embodiment of the present invention.
This embodiment provides a three-dimensional head reconstruction method based on a single image, implemented as follows:
1. Face reconstruction from a single photo
The overall reconstruction flow of the face reconstruction module is shown in fig. 2:
(1.) For the input face image, obtain the 68 facial feature point locations using the dlib library or a similar tool (this step is used only for rough localization of the face position; other facial feature point detection methods may also be used). Crop the image according to the feature point information, cropping the face image to a size of 256×256 (as shown in fig. 4).
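By way of illustration, a minimal sketch of this cropping step is given below. It assumes the standard dlib 68-point predictor file shape_predictor_68_face_landmarks.dat is available locally; the margin factor and the helper name crop_face are illustrative choices, not specified by the patent.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def crop_face(image_bgr, out_size=256):
    """Roughly localize the face via its 68 landmarks and crop to a square."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)                      # rough face localization
    if not faces:
        return None
    shape = predictor(gray, faces[0])              # 68 facial landmarks
    pts = np.array([(p.x, p.y) for p in shape.parts()])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    m = int(0.25 * max(x1 - x0, y1 - y0))          # illustrative margin
    h, w = image_bgr.shape[:2]
    crop = image_bgr[max(0, y0 - m):min(h, y1 + m),
                     max(0, x0 - m):min(w, x1 + m)]
    return cv2.resize(crop, (out_size, out_size))
```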
(2.) With reference to the existing three-dimensional face database 300W-LP, design and build the R3M (ResNet-50-3DMM) neural network. The convolutional network uses ResNet-50 to extract facial features, followed by deconvolution layers and finally a fully connected layer that outputs the 3DMM coefficients. During training, the loss function used is:

Loss_{R3M} = a·Loss_{3DMM} + b·Loss_{landmark} (1)

where Loss_{3DMM} is the loss on the 3DMM coefficients, Loss_{landmark} is the loss on the 68 facial landmark points of the face model reconstructed from the 3DMM coefficients, and a and b are the weights of the two loss terms.

Loss_{3DMM} = ||(α_{pred} - α_{GT})·w_1||^2 (2)
Loss_{landmark} = ||(v_{pred} - v_{GT})·w_2||^2 (3)

where α_{pred} is the predicted 3DMM coefficient vector, α_{GT} is the sample's ground-truth 3DMM coefficient vector, w_1 assigns a separate weight to each coefficient, v_{pred} denotes the 68 facial landmark points of the 3DMM model reconstructed from the predicted coefficients α_{pred}, v_{GT} denotes the sample's ground-truth 68 facial landmark points, and w_2 assigns a separate weight to each landmark point.
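A minimal PyTorch sketch of the training loss of equations (1)-(3) follows. The landmark reconstruction from the 3DMM coefficients happens outside this function, so the reconstructed landmarks v_pred are passed in directly; the default weights a = b = 1 follow the patent's stated values, and everything else is an illustrative assumption.

```python
import torch

def r3m_loss(alpha_pred, alpha_gt, w1, v_pred, v_gt, w2, a=1.0, b=1.0):
    # Loss_3DMM = ||(alpha_pred - alpha_GT) . w1||^2  (per-coefficient weights)
    loss_3dmm = torch.sum(((alpha_pred - alpha_gt) * w1) ** 2)
    # Loss_landmark = ||(v_pred - v_GT) . w2||^2  (per-landmark weights)
    loss_landmark = torch.sum(((v_pred - v_gt) * w2) ** 2)
    # Loss_R3M = a * Loss_3DMM + b * Loss_landmark  (a = b = 1 in the patent)
    return a * loss_3dmm + b * loss_landmark
```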
(3.) After the network in step (2) has been trained, take the face image obtained in step (1) as the network input to obtain the 3DMM coefficients corresponding to the input image, and complete the reconstruction of the three-dimensional face from these coefficients.
(4.) After the reconstructed face model is obtained in step (3), complete the face model using a mesh-completion method to obtain a full head model.
2. Hair reconstruction from a single photo
The flow of the hair reconstruction module is shown in fig. 3:
(1.) Design and build PSP-HairNet, a network for segmenting hair regions, and train it on the existing Figaro hair image database with the loss function:

Loss_{mask} = ||Mask_{pred} - Mask_{GT}||^2 (4)

where Mask_{pred} is the predicted hair region and Mask_{GT} is the ground-truth hair region.
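Equation (4) is a plain pixel-wise L2 loss; a one-function PyTorch sketch, assuming the masks are float tensors of identical shape:

```python
import torch

def mask_loss(mask_pred, mask_gt):
    # Loss_mask = ||Mask_pred - Mask_GT||^2 over all pixels
    return torch.sum((mask_pred - mask_gt) ** 2)
```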
(2.) Because the trained PSP-HairNet network requires a fixed-size input (fixed to 256×256 pixels in the experiments), the original image must be scaled down or up while preserving its original aspect ratio to satisfy the network's input requirement. After the original photo is adjusted to the fixed size, it is fed to the PSP-HairNet network to obtain the complete hair-region image (as shown in fig. 5); the hair region is then filtered with Gabor filter kernels to obtain the orientation of each pixel in the hair region, and an orientation map is generated (as shown in fig. 6).
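A minimal sketch of the orientation-map step follows, using a bank of Gabor kernels at evenly spaced angles and keeping, per pixel, the angle of strongest response. Kernel size and filter parameters are illustrative assumptions; the patent does not specify them.

```python
import cv2
import numpy as np

def orientation_map(hair_gray, hair_mask, n_angles=32):
    """Per-pixel hair orientation from the strongest Gabor response."""
    thetas = np.linspace(0, np.pi, n_angles, endpoint=False)
    responses = []
    for theta in thetas:
        kern = cv2.getGaborKernel(ksize=(17, 17), sigma=2.0, theta=theta,
                                  lambd=4.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(hair_gray.astype(np.float32), -1, kern))
    responses = np.stack(responses, axis=0)        # [n_angles, H, W]
    best = np.argmax(np.abs(responses), axis=0)    # index of strongest angle
    direction = thetas[best]                       # per-pixel orientation
    direction[hair_mask == 0] = 0                  # restrict to hair region
    return direction
```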
(3.) Expand the existing USC-hair database: taking hair bundles as the unit, generate new hair bundles by rotation, translation and similar operations to build a larger-scale database, yielding a new hair model database, USC-P, with 2160 samples. Generate an orientation-map/hair-model dataset from the USC-P hair model database. Design and build the Hair-Re convolutional neural network and train it on the generated dataset with the loss function:

Loss_{hair} = ||(S_{pred} - S_{GT})·w||^2 (5)

where S_{pred} is the predicted hair model information, S_{GT} is the ground-truth hair model information, and w is a weighting term.
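The bundle-level augmentation can be sketched as below: each bundle of strands receives a small random rotation about the vertical axis plus a small translation. The angle and offset ranges are illustrative assumptions; the patent states only that rotation and translation of hair bundles are used.

```python
import numpy as np

def augment_bundle(strands, max_angle=0.15, max_shift=0.01):
    """strands: list of [n_points, 3] arrays forming one hair bundle."""
    a = np.random.uniform(-max_angle, max_angle)
    rot = np.array([[np.cos(a),  0.0, np.sin(a)],   # rotation about the y axis
                    [0.0,        1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    shift = np.random.uniform(-max_shift, max_shift, size=3)
    return [s @ rot.T + shift for s in strands]
```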
(4.) In actual use, with the hair strands generated by the network as a reference, cluster the strands to obtain key strands, using the distance function:

d_1 = α·H(s_1, s_2) + β·E(s_1, s_2) (6)

where H(s_1, s_2) is the Hausdorff distance between two hair strands s_1 and s_2, E(s_1, s_2) is the Euclidean distance between them, and α and β are the weights of the two distance terms.
Clustering the generated hair yields key strands that conform to the overall structure of the hair, as shown in fig. 7.
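A minimal sketch of the combined distance of equation (6), using SciPy's directed Hausdorff distance (symmetrized) plus a mean point-wise Euclidean distance; it assumes the two strands have been resampled to the same number of points, which the patent does not spell out.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def strand_distance(s1, s2, alpha=0.5, beta=0.5):
    """s1, s2: [n_points, 3] strand polylines; alpha = beta = 0.5 per the patent."""
    h = max(directed_hausdorff(s1, s2)[0], directed_hausdorff(s2, s1)[0])
    e = np.mean(np.linalg.norm(s1 - s2, axis=1))   # point-wise Euclidean term
    return alpha * h + beta * e
```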
(5.) Match the key strands obtained by clustering against the strands in the USC-P hair model database, using formula (6) as the distance measure, to find the hair models whose strands are closest to the key strands. Finally, the 3 most suitable hair models are selected.
(6.) Construct a three-dimensional direction field from the hair models obtained in step (5), fuse in their direction information, and grow the final three-dimensional hair model.
3. Model assembly and rendering
(1.) After the face model and the hair model have been reconstructed in steps 1 and 2, the scales of the reconstructed three-dimensional head model and three-dimensional hair model are not consistent, so the models' coordinate systems must be unified and aligned. In this step, since each hair model in USC-P is defined relative to its standard head model, it is only necessary to match the 68 facial feature points of that standard head model with the 68 facial feature points of the three-dimensional head reconstructed in step 1 and compute a projection matrix; the head model and the hair model can then be unified under the same coordinate system.
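One standard way to realize this alignment is a similarity transform (scale, rotation, translation) estimated from the 68 landmark correspondences by the Umeyama/Procrustes method; the patent only says a projection matrix is computed, so the sketch below is an assumption about that step.

```python
import numpy as np

def similarity_transform(src, dst):
    """src, dst: [68, 3] landmarks (standard head -> reconstructed head)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)      # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                  # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    scale = np.sum(S * np.diag(D)) / np.sum(src_c ** 2)
    t = mu_d - scale * R @ mu_s
    return scale, R, t                             # maps x -> scale * R @ x + t
```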
(2.) With the head model and hair model prepared in step (1), render them using OpenGL, select texture information for the hair model, and apply lighting with the Phong illumination model to obtain a final rendering result that is relatively realistic and close to the input image.
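For reference, the Phong model referenced above combines ambient, diffuse, and specular terms per light source (standard textbook notation, not reproduced from the patent):

```latex
I = k_a i_a + k_d (\mathbf{L} \cdot \mathbf{N})\, i_d + k_s (\mathbf{R} \cdot \mathbf{V})^{\alpha}\, i_s
```

where k_a, k_d, k_s are the material's ambient, diffuse, and specular coefficients, L, N, R, V are the light, surface normal, reflection, and view directions, and α is the shininess exponent.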
The equipment used in the experiments was: an NVIDIA GeForce GTX 1080, an Intel(R) Core(TM) i7-6700 CPU (3.40 GHz, 4 cores), and 32 GB of RAM, running on a 64-bit Windows 10 system.
The invention uses a single frontal face photograph as input; performs three-dimensional face reconstruction by regressing feature information from the face photograph with a deep-learning method; completes the face model to obtain a full head model; segments the hair region with a deep-learning method; obtains an orientation map of the hair region; regresses hair-strand information with a deep-learning method; matches the regressed strand information against a hair-strand database to complete the hair reconstruction; and aligns the hair model coordinate system with the head model coordinate system to obtain the complete three-dimensional head model. Compared with existing three-dimensional head reconstruction methods, the invention achieves automatic three-dimensional face and hair reconstruction with a more realistic result, enabling richer applications of the reconstructed three-dimensional model.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the invention, and such modifications are to be regarded as within its scope. The invention is not limited to the embodiments described above: equivalent embodiments obtained by changing or modifying the technical content disclosed above may be applied to other fields, and any simple modification or equivalent change made to the above embodiments according to the technical substance of the invention, without departing from its technical content, still falls within the protection scope of the technical solution of the invention.

Claims (6)

1. A three-dimensional head reconstruction method based on a single image, characterized by comprising the following steps:
step A, cropping the input photograph;
step B, designing and building an R3M (ResNet-3DMM) neural network;
step C, inputting the face photograph obtained in step A into the network trained in step B and regressing the coefficients of a face morphable model, thereby completing the reconstruction of the three-dimensional face, and completing the three-dimensional face model with a mesh-completion algorithm to generate a full head model;
step D, designing and building a PSP-HairNet convolutional neural network;
step E, after resizing the original photograph, inputting it into the PSP-HairNet neural network trained in step D to obtain an image of the hair region;
step F, obtaining the orientation of each pixel in the hair region and generating an orientation map;
step G, constructing a USC-P hair model database with more samples, generating an orientation-map/hair-model dataset from the USC-P hair model database, designing and building a Hair-Re convolutional neural network, and training the network on the generated dataset;
step H, inputting the orientation map obtained in step F into the Hair-Re neural network trained in step G and regressing a hair model;
step I, clustering the hair model obtained in step H to obtain its key hair strands, and matching the key strands against the strands in the hair database to obtain the matched hair models;
step J, constructing a three-dimensional direction field from the hair models obtained in step I, fusing in their direction information, and growing the final three-dimensional hair model;
step K, unifying the three-dimensional head model obtained in step C and the hair model obtained in step J into the same coordinate system and completing the rendering display;
wherein the network structure used in step B to regress the 3DMM coefficients uses the following loss function:

Loss_{R3M} = a·Loss_{3DMM} + b·Loss_{landmark}

where Loss_{3DMM} is the loss on the 3DMM coefficients, Loss_{landmark} is the loss on the 68 facial landmark points of the face model reconstructed from the 3DMM coefficients, and a and b are the weights of the two loss terms;

Loss_{3DMM} = ||(α_{pred} - α_{GT})·w_1||^2
Loss_{landmark} = ||(v_{pred} - v_{GT})·w_2||^2

where α_{pred} is the predicted 3DMM coefficient vector, α_{GT} is the sample's ground-truth 3DMM coefficient vector, w_1 assigns a separate weight to each coefficient, v_{pred} denotes the 68 facial landmark points of the 3DMM model reconstructed from the predicted coefficients α_{pred}, v_{GT} denotes the sample's ground-truth 68 facial landmark points, and w_2 assigns a separate weight to each landmark point.
2. The three-dimensional head reconstruction method based on a single image according to claim 1, wherein the network structure used in step D for hair-region detection uses the following loss function:

Loss_{mask} = ||Mask_{pred} - Mask_{GT}||^2

where Mask_{pred} is the predicted hair region and Mask_{GT} is the ground-truth hair region.
3. The three-dimensional head reconstruction method based on a single image according to claim 1, wherein the network structure used in step G to regress the hair model uses the following loss function:

Loss_{hair} = ||(S_{pred} - S_{GT})·w||^2

where S_{pred} is the predicted hair model information, S_{GT} is the ground-truth hair model information, and w is a weighting term.
4. The three-dimensional head reconstruction method based on a single image according to claim 1, wherein the clustering of hair strands in step I uses the following distance formula:

d_1 = α·H(s_1, s_2) + β·E(s_1, s_2)

where H(s_1, s_2) is the Hausdorff distance between two hair strands s_1 and s_2, E(s_1, s_2) is the Euclidean distance between them, and α and β are the weights of the two distance terms.
5. The three-dimensional head reconstruction method based on a single image according to claim 2, wherein in the network structure for regressing the 3DMM coefficients, the weight coefficient a takes the value 1 and the weight coefficient b takes the value 1.
6. The three-dimensional head reconstruction method based on a single image according to claim 4, wherein in the distance formula the weight coefficient α takes the value 0.5 and the weight coefficient β takes the value 0.5.
CN201911098677.2A | Priority date: 2019-07-16 | Filing date: 2019-11-12 | Three-dimensional head reconstruction method based on single image | Active | CN110717978B

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN201910641611.7A | 2019-07-16 | 2019-07-16 | Three-dimensional head reconstruction method based on single image
CN2019106416117 | 2019-07-16

Publications (2)

Publication Number | Publication Date
CN110717978A | 2020-01-21
CN110717978B | 2023-07-18

Family

ID: 68253476

Family Applications (2)

Application Number | Priority Date | Filing Date | Publication | Status
CN201910641611.7A | 2019-07-16 | 2019-07-16 | CN110379003A | Pending
CN201911098677.2A | 2019-07-16 | 2019-11-12 | CN110717978B | Active

Country Status (1)

Country: CN | CN110379003A, CN110717978B

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113129347B * | 2021-04-26 | 2023-12-12 | Nanjing University (南京大学) | Self-supervised single-view three-dimensional hair strand model reconstruction method and system
CN113538114B * | 2021-09-13 | 2022-03-04 | Dongguan Center for Disease Control and Prevention (东莞市疾病预防控制中心) | Mask recommendation platform and method based on mini-programs
CN114723888B * | 2022-04-08 | 2023-04-07 | Beijing Baidu Netcom Science and Technology Co., Ltd. (北京百度网讯科技有限公司) | Three-dimensional hair model generation method, device, equipment, storage medium, and product

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2017026839A1 * | 2015-08-12 | 2017-02-16 | Tricubics Inc. (트라이큐빅스 인크.) | 3D face model obtaining method and device using portable camera
CN105844706A * | 2016-04-19 | 2016-08-10 | Zhejiang University | Fully automatic three-dimensional hair modeling method based on a single image
CN108765550A * | 2018-05-09 | 2018-11-06 | South China University of Technology | Three-dimensional face reconstruction method based on a single picture
CN108805977A * | 2018-06-06 | 2018-11-13 | Zhejiang University | Three-dimensional face reconstruction method based on an end-to-end convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
包永堂; 齐越. Survey of image-based hair modeling techniques (基于图像的头发建模技术综述). Journal of Computer Research and Development (《计算机研究与发展》), 2018. *

Also Published As

Publication number | Publication date
CN110379003A | 2019-10-25
CN110717978A | 2020-01-21

Similar Documents

Publication Publication Date Title
CN110458939B (en) Indoor scene modeling method based on visual angle generation
Yang et al. DRFN: Deep recurrent fusion network for single-image super-resolution with large factors
Rematas et al. Novel views of objects from a single image
CN106803267B (en) Kinect-based indoor scene three-dimensional reconstruction method
Hu et al. Robust hair capture using simulated examples
Wang et al. High resolution acquisition, learning and transfer of dynamic 3‐D facial expressions
CN110717978B (en) Three-dimensional head reconstruction method based on single image
CN109544677A (en) Indoor scene main structure method for reconstructing and system based on depth image key frame
CN107358576A (en) Depth map super resolution ratio reconstruction method based on convolutional neural networks
CN111583384A (en) Hair reconstruction method based on adaptive octree hair convolutional neural network
CN112183541B (en) Contour extraction method and device, electronic equipment and storage medium
CN113762147B (en) Facial expression migration method and device, electronic equipment and storage medium
CN116310076A (en) Three-dimensional reconstruction method, device, equipment and storage medium based on nerve radiation field
CN112837215B (en) Image shape transformation method based on generation countermeasure network
CN110176079A (en) A kind of three-dimensional model deformation algorithm based on quasi- Conformal
CN115330947A (en) Three-dimensional face reconstruction method and device, equipment, medium and product thereof
CN110175529A (en) A kind of three-dimensional face features' independent positioning method based on noise reduction autoencoder network
Liu et al. High-quality textured 3D shape reconstruction with cascaded fully convolutional networks
Kang et al. Competitive learning of facial fitting and synthesis using uv energy
CN111402403B (en) High-precision three-dimensional face reconstruction method
Yeh et al. 2.5 D cartoon hair modeling and manipulation
CN111524226A (en) Method for detecting key point and three-dimensional reconstruction of ironic portrait painting
CN115428027A (en) Neural opaque point cloud
Wang et al. Paccdu: pyramid attention cross-convolutional dual unet for infrared and visible image fusion
Sheng et al. Facial geometry parameterisation based on partial differential equations

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant