CN103366400B - A three-dimensional head portrait automatic generation method - Google Patents

A three-dimensional head portrait automatic generation method

Info

Publication number
CN103366400B
Authority
CN
China
Prior art keywords
hair
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310312500.4A
Other languages
Chinese (zh)
Other versions
CN103366400A (en)
Inventor
林金杰
苏琪
龚文勇
叶丰平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huachuang Zhenxin Technology Development Co Ltd
Original Assignee
Shenzhen Huachuang Zhenxin Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huachuang Zhenxin Technology Development Co Ltd filed Critical Shenzhen Huachuang Zhenxin Technology Development Co Ltd
Priority to CN201310312500.4A priority Critical patent/CN103366400B/en
Publication of CN103366400A publication Critical patent/CN103366400A/en
Application granted granted Critical
Publication of CN103366400B publication Critical patent/CN103366400B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The present invention relates to a method for automatically generating a three-dimensional head portrait. The process includes: collecting a three-dimensional face library; collecting a three-dimensional hairstyle library; for the input frontal face photo, detecting the face with a face detection algorithm and locating the frontal face feature points with an active shape model; generating a three-dimensional face model with the morphable-model method, based on the three-dimensional face library, the input face photo, and the face feature-point coordinates; for the input frontal face photo, segmenting the hair with a hair segmentation method based on Markov random fields; extracting the hair texture according to the hair segmentation result; obtaining the finally matched hair model; and synthesizing the face model with the hair model. By means of this technical scheme, the generated avatar model contains both the face region and the hair region, avoiding manual addition of the hairstyle; for modeling the hair part, retrieval replaces direct three-dimensional reconstruction, which improves efficiency. Because the repetitiveness of human hairstyles is high, a sufficiently rich hairstyle library guarantees high fidelity.

Description

A three-dimensional head portrait automatic generation method
Technical field
The invention belongs to the technical field of computer vision, and more particularly relates to a method for automatically generating a three-dimensional head portrait.
Background technology
Three-dimensional head portrait modeling is one of the most fundamental research problems in computer graphics and computer vision today. Three-dimensional head models have wide applications in identity authentication, medical assistance, film and television production, game making, and digital art.
The appearance of a head portrait mainly comprises two parts: the face and the hair. For these two parts, there are currently mainly the following methods: 1) means based on laser scanning, i.e., the depth information of the object is obtained with a laser scanner and then reconstructed; 2) methods based on structured light, i.e., a data acquisition platform is built with equipment such as projectors, cameras, and LEDs; the projector projects light gratings of various widths onto the object surface, the reflected gratings are captured by a camera, and the three-dimensional expression of the object is computed from the different codes represented by the different gratings; 3) methods based on multiple photos or video sequences, i.e., based on photos taken from multiple different angles, the three-dimensional expression of the target object is computed with stereo vision theory; 4) methods based on a single photo; this kind of method generally extracts useful prior knowledge from a three-dimensional face database and then infers, from the single photo, the three-dimensional model corresponding to the face in the photo. Among the three-dimensional face modeling methods based on a single photo, the most famous algorithm is the morphable-model method (see: B. Volker, V. Thomas. A Morphable Model For The Synthesis Of 3D Faces. SIGGRAPH, 1999.).
Each of the existing methods has its own strengths and weaknesses:
The main drawback of the laser scanning method is that it performs close-range scanning of a physical subject: scanning takes a long time, and the head must remain stationary throughout, so its practicality is poor. Furthermore, since black hair absorbs laser light, this method cannot be used to reconstruct the hair part.
The methods based on structured light and on multiple photos place high demands on photo registration, and existing algorithms are not computationally efficient enough, so these two classes of methods are mainly used in laboratory environments and are not well suited to daily life.
Although the reconstruction techniques based on a single photo are somewhat lacking in precision, their ease of use and higher computational efficiency give them greater practicality, and they are comparatively well received by ordinary users. However, because the shape differences between different hairstyles are large, methods based on database priors cannot easily reconstruct the hair, so the prior art mainly focuses on generating the face region. As a result, many systems can only automatically generate a "bald head portrait", and the hairstyle must essentially be added by hand by art designers. In addition, in the process of generating the three-dimensional face model, many systems require the user to locate the face feature points manually, falling short of full automation.
Summary of the invention
The invention provides a scheme for automatically generating a three-dimensional head portrait: given a single frontal face photo as input, the scheme processes the photo content fully automatically and outputs the corresponding three-dimensional head model; the output model contains not only the three-dimensional face but also the three-dimensional hairstyle.
The technical scheme comprises the following steps:
Step 1: collect a three-dimensional face library;
Step 2: collect a three-dimensional hairstyle library;
Step 3: for the input frontal face photo, detect the face with a face detection algorithm, and locate the frontal face feature points with an active shape model;
Step 4: based on the three-dimensional face library, the input face photo, and the coordinates of the face feature points, generate a three-dimensional face model with the morphable-model method;
Step 5: for the input frontal face photo, segment the hair with a hair segmentation method based on Markov random fields;
Step 6: extract the hair texture according to the hair segmentation result;
Step 7: obtain the finally matched hair model;
Step 8: synthesize the face model with the hair model.
By means of the above technical scheme, the generated avatar model contains both the face region and the hair region, avoiding manual addition of the hairstyle; for modeling the hair part, retrieval replaces direct three-dimensional reconstruction, which improves efficiency. Because the repetitiveness of human hairstyles is high, a sufficiently rich hairstyle library guarantees high fidelity.
Brief description of the drawings
Fig. 1 is a flow chart of the technique of the present invention;
Embodiment
The present invention is described further below in conjunction with the accompanying drawings.
Fig. 1 presents the flow chart of the present invention for generating a three-dimensional head portrait with hair from a single input photo, together with the intermediate results of each step. The collection and processing of the three-dimensional face library and the three-dimensional hairstyle library are off-line procedures. H_p is the binary map obtained by projecting a three-dimensional hairstyle in the frontal direction, and H_d is the texture-feature expression vector of each model. I_p is the hair shape map (binary map) of the input photo, and I_d is the hair texture-feature expression vector of the input photo.
The method mainly comprises the following eight operations:
1. Collect the three-dimensional face library. 300 three-dimensional face models are collected; each face model has 100,000 vertices. These face models are scale-normalized so that the pupils of the two eyes are at uniform locations across all models. For each model, 15 control points are specified by hand at the skull position. This step is an off-line process.
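The scale normalization in this step can be sketched as follows; a minimal illustration assuming each model is an (n, 3) vertex array with known pupil vertex indices (the function name, indices, and target distance are illustrative, not from the patent):

```python
import numpy as np

def normalize_by_pupils(vertices, left_idx, right_idx, target_dist=1.0):
    """Scale and translate a face model so the two pupils sit at a
    uniform location: midpoint at the origin, pupil distance = target_dist."""
    left, right = vertices[left_idx], vertices[right_idx]
    scale = target_dist / np.linalg.norm(right - left)
    mid = (left + right) / 2.0
    return (vertices - mid) * scale
```

Applying this to every library model puts all 300 faces into a common coordinate frame, which is what makes the later control-point correspondences meaningful.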
2. Collect the three-dimensional hairstyle library. 100 three-dimensional hair models are collected; these hairstyles essentially cover the hairstyles seen in daily life. Each hair model consists of a shape vector H_s = (x_1, y_1, z_1, ..., x_{n_h}, y_{n_h}, z_{n_h}) and a two-dimensional texture map H_t, where n_h ranges from 2,500 to 6,000 and (x_i, y_i, z_i) is the three-dimensional coordinate of the i-th vertex. For each model, 15 control points are specified by hand at the skull position; these control points correspond in position to the 15 control points of the face models. Each hair model is projected in the frontal direction to obtain a binary map H_p, in which hair-region pixels have value 1 and all other pixels have value 0. For each texture map H_t, its texture expression H_d is obtained using the Gabor transform and a bag-of-words model (a very common way of obtaining such an expression in the prior art; see, e.g.: M. Eitz et al. Sketch-Based Shape Retrieval. SIGGRAPH 2012.), where H_d is a 1000-dimensional vector. Finally, for the 100 three-dimensional hair models, 100 texture feature expressions H_i^d and 100 projection binary maps H_i^p, i = 1, ..., 100, are obtained. This step is an off-line process.
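The frontal projection that produces each binary map H_p can be sketched like this; a simplified version assuming an (n_h, 3) vertex array and an orthographic projection onto the x-y plane at an illustrative resolution (the patent does not specify the map size):

```python
import numpy as np

def frontal_binary_map(vertices, size=128):
    """Orthographically project hair vertices onto the frontal (x-y)
    plane and rasterize them into a size x size binary map H_p:
    hair pixels are 1, everything else 0."""
    xy = vertices[:, :2].astype(float)
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    span = np.maximum(hi - lo, 1e-9)        # avoid division by zero
    scaled = (xy - lo) / span * (size - 1)
    h_p = np.zeros((size, size), dtype=np.uint8)
    cols = np.round(scaled[:, 0]).astype(int)
    rows = (size - 1 - np.round(scaled[:, 1])).astype(int)  # image y grows downward
    h_p[rows, cols] = 1
    return h_p
```

A production version would use the same camera/normalization as the face library rather than a per-model bounding box, so that projections of different hairstyles are directly comparable.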
3. Face detection and feature-point localization. For the input frontal face photo I, the face is detected with a Boosting-based face detection algorithm (a very well-known algorithm in this field; see, e.g.: P. Viola, M. Jones. Rapid object detection using a boosted cascade of simple features. Computer Vision and Pattern Recognition (CVPR), 2001.), and then the frontal face feature points are located with an active shape model (a well-known model for face feature-point localization; see, e.g.: S. Milborrow and F. Nicolls. Locating Facial Features with an Extended Active Shape Model. ECCV, 2008.).
4. Three-dimensional face generation. Based on the three-dimensional face library, the input face photo, and the face feature-point coordinates, a three-dimensional face model is generated with the morphable-model method (a well-known method for three-dimensional face model generation; see, e.g.: B. Volker, V. Thomas. A Morphable Model For The Synthesis Of 3D Faces. SIGGRAPH, 1999.). The generated three-dimensional face model is represented by a shape vector F_s = (x_1, y_1, z_1, ..., x_n, y_n, z_n) and a texture image F_t, with n = 100,000, where (x_i, y_i, z_i) is the three-dimensional coordinate of the i-th vertex. Since each face model in the library has 15 specified control points, the generated face model correspondingly also has 15 control points.
5. Hair segmentation. For the input frontal face photo I, the hair is segmented with a hair segmentation method based on Markov random fields (a well-known method for hair segmentation; see, e.g.: K.-C. Lee, D. Anguelov, B. Sumengen, S. B. Gokturk. Markov random field models for hair and face segmentation. Automatic Face & Gesture Recognition, 2008.). The segmentation result is a binary image of the same size as I (denoted I_p), in which hair-region pixels have value 1 and all other pixels have value 0.
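The cited MRF method learns appearance models and uses proper inference; as a rough illustration of the underlying idea only (per-pixel unary appearance costs plus a Potts smoothness prior), here is a minimal iterated-conditional-modes labeler — a toy stand-in, not the cited algorithm:

```python
import numpy as np

def mrf_binary_segmentation(unary_hair, unary_bg, beta=1.0, iters=5):
    """Binary MRF labeling by iterated conditional modes (ICM).
    unary_hair/unary_bg: per-pixel costs of labeling a pixel hair/background.
    beta: weight of the Potts prior penalizing disagreeing 4-neighbors."""
    labels = (unary_hair < unary_bg).astype(np.uint8)  # init from unaries
    h, w = labels.shape
    for _ in range(iters):
        for r in range(h):
            for c in range(w):
                nb = []
                if r > 0:     nb.append(labels[r - 1, c])
                if r < h - 1: nb.append(labels[r + 1, c])
                if c > 0:     nb.append(labels[r, c - 1])
                if c < w - 1: nb.append(labels[r, c + 1])
                nb = np.array(nb)
                cost1 = unary_hair[r, c] + beta * np.sum(nb != 1)
                cost0 = unary_bg[r, c] + beta * np.sum(nb != 0)
                labels[r, c] = 1 if cost1 < cost0 else 0
    return labels
```

ICM only finds a local optimum; the literature typically uses graph cuts or belief propagation for this energy, but the energy itself (unary + pairwise smoothness) is the same.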
6. Hair texture extraction. The hair texture map I_t of the input photo is generated from the hair segmentation result, i.e., for each pixel position (x, y):
I_t(x, y) = I(x, y) · I_p(x, y)   (1)
For the hair texture map I_t, its texture expression I_d is obtained using the Gabor transform and a bag-of-words model (a very well-known way of obtaining a texture expression; see, e.g.: M. Eitz et al. Sketch-Based Shape Retrieval. SIGGRAPH 2012.), where I_d is a 1000-dimensional vector.
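Equation (1) is a per-pixel mask; a minimal sketch, assuming the photo I is a grayscale or RGB NumPy array and the segmentation result I_p is a {0, 1} array of the same height and width:

```python
import numpy as np

def extract_hair_texture(image, hair_mask):
    """Equation (1): I_t(x, y) = I(x, y) * I_p(x, y).
    Non-hair pixels are zeroed out; hair pixels are kept unchanged."""
    if image.ndim == 3:                       # RGB: broadcast mask over channels
        return image * hair_mask[..., None]
    return image * hair_mask
```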
7. Hairstyle matching. The Hausdorff distance between I_p and each image H_i^p, i = 1, ..., 100, is computed, and the 10 images H_{i'}^p, i' = 1, ..., 10, with the smallest distances are found (the Hausdorff distance is a well-known distance; see, e.g.: R. T. Rockafellar, R. J. B. Wets. Variational Analysis, Springer-Verlag, 2005, ISBN 978-3-540-62772-2, p. 117.).
Let the texture expression vectors in the hair database corresponding to H_{i'}^p, i' = 1, ..., 10, be H_{i'}^d, i' = 1, ..., 10. Then the Euclidean distance between I_d and each H_{i'}^d is computed, and the model subscript i* of the minimum distance is found, i.e.
i^* = \arg\min_{i'} D(I^d, H_{i'}^d)   (2)
Then the i*-th model in the three-dimensional hairstyle library is the final matching model.
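The two-stage retrieval in this step (shape shortlist by Hausdorff distance, then texture pick by Euclidean distance) can be sketched as follows; a brute-force version over foreground pixel sets, assuming small binary maps and precomputed descriptor vectors:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets
    of two binary maps (brute force; fine for small maps)."""
    pa = np.argwhere(a > 0).astype(float)
    pb = np.argwhere(b > 0).astype(float)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def match_hairstyle(i_p, i_d, h_p_list, h_d_list, k=10):
    """Stage 1: keep the k library hairstyles whose frontal projections are
    closest to I_p in Hausdorff distance. Stage 2: among those, return the
    index whose texture vector is closest to I_d in Euclidean distance."""
    shape_dists = np.array([hausdorff(i_p, h_p) for h_p in h_p_list])
    shortlist = np.argsort(shape_dists)[:k]
    tex_dists = [np.linalg.norm(i_d - h_d_list[i]) for i in shortlist]
    return int(shortlist[int(np.argmin(tex_dists))])
```

For the 100-model library described here, `k=10` reproduces the shortlist size of the patent; SciPy's `directed_hausdorff` could replace the inner brute-force computation for larger maps.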
8. Synthesis of the face model and the hair model. For the generated three-dimensional face model, suppose the three-dimensional coordinates of its 15 control points at the skull position are (x_i^f, y_i^f, z_i^f), i = 1, ..., 15, and that their corresponding control points in the hair model are (x_i^h, y_i^h, z_i^h), i = 1, ..., 15. The affine transformation matrix T = [A b] is obtained by solving the following equation:
\begin{bmatrix} x_1^f & x_2^f & \cdots & x_{15}^f \\ y_1^f & y_2^f & \cdots & y_{15}^f \\ z_1^f & z_2^f & \cdots & z_{15}^f \\ 1 & 1 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} A & b \\ 0 \cdots 0 & 1 \end{bmatrix} \begin{bmatrix} x_1^h & x_2^h & \cdots & x_{15}^h \\ y_1^h & y_2^h & \cdots & y_{15}^h \\ z_1^h & z_2^h & \cdots & z_{15}^h \\ 1 & 1 & \cdots & 1 \end{bmatrix}   (3)
where A is a 3×3 matrix and b is a 3×1 vector, i.e., T is a 3×4 matrix. Then the following affine transformation is applied to all vertices of the hairstyle model:
\begin{pmatrix} x_i' \\ y_i' \\ z_i' \end{pmatrix} = A \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} + b, \quad i = 1, \ldots, n_h
After this transformation, the hair model and the generated face model are well coordinated in size and relative position, and no manual adjustment is needed. Denoting the transformed hair model shape vector as H_s' = (x_1', y_1', z_1', ..., x_{n_h}', y_{n_h}', z_{n_h}'), the head model ultimately generated by this method comprises the shape vector H_s' of the hair part, the shape vector F_s of the face part, and their corresponding texture maps H_t and F_t.
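Equation (3) is an over-determined linear system in the 12 unknowns of T = [A b]; it can be solved from the 15 control-point pairs, e.g. in the least-squares sense:

```python
import numpy as np

def fit_affine(hair_pts, face_pts):
    """Solve equation (3): find A (3x3) and b (3,) such that
    face_pts ~= hair_pts @ A.T + b in the least-squares sense.
    Both inputs are (15, 3) control-point arrays."""
    n = hair_pts.shape[0]
    src_h = np.hstack([hair_pts, np.ones((n, 1))])        # homogeneous coordinates
    T, *_ = np.linalg.lstsq(src_h, face_pts, rcond=None)  # T is 4x3
    A, b = T[:3].T, T[3]
    return A, b

def apply_affine(A, b, vertices):
    """Apply (x', y', z')^T = A (x, y, z)^T + b to every hair vertex."""
    return vertices @ A.T + b
```

With exact point correspondences the least-squares solution recovers the transformation exactly; with noisy hand-placed control points it gives the best fit.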
This scheme has been verified by simulation experiments. For a single photo, the processing time on an ordinary personal computer is about 20 seconds, the whole process is automatic, and the generated three-dimensional head portrait model is quite lifelike, meeting many application requirements.

Claims (7)

1. A three-dimensional head portrait automatic generation method, characterized by comprising the following steps:
Step 1: collect a three-dimensional face library;
Step 2: collect a three-dimensional hairstyle library;
Step 3: for an input frontal face photo I, detect the face with a face detection algorithm, and locate the frontal face feature points with an active shape model;
Step 4: based on the three-dimensional face library, the input face photo I, and the coordinates of the face feature points, generate a three-dimensional face model with the morphable-model method;
Step 5: for the input frontal face photo I, automatically segment the hair with a hair segmentation method based on Markov random fields;
Step 6: extract the hair texture according to the hair segmentation result;
Step 7: obtain the finally matched hair model;
Step 8: synthesize the face model with the hair model;
wherein step 1 specifically comprises: collecting 300 three-dimensional face models, each face model having 100,000 vertices, these face models being scale-normalized so that the pupils of the two eyes are at uniform locations across all models; for each model, 15 control points are specified by hand at the skull position;
wherein step 2 specifically comprises: collecting 100 three-dimensional hair models, each hair model consisting of a shape vector H_s = (x_1, y_1, z_1, ..., x_{n_h}, y_{n_h}, z_{n_h}) and a two-dimensional texture map H_t, where n_h ranges from 2,500 to 6,000 and (x_i, y_i, z_i) is the three-dimensional coordinate of the i-th vertex; for each model, 15 control points are specified by hand at the skull position, and these control points correspond in position to the 15 control points of the face models; each hair model is projected in the frontal direction to obtain a binary map H_p, in which hair-region pixels have value 1 and all other pixels have value 0; for each texture map H_t, its texture expression H_d is obtained using the Gabor transform and a bag-of-words model, where H_d is a 1000-dimensional vector; finally, for the 100 three-dimensional hair models, 100 texture feature expressions H_i^d and 100 projection binary maps H_i^p, i = 1, ..., 100, are obtained.
2. The three-dimensional head portrait automatic generation method according to claim 1, characterized in that, in step 4: the generated three-dimensional face model is represented by a shape vector F_s = (x_1, y_1, z_1, ..., x_n, y_n, z_n) and a texture image F_t, with n = 100,000, where (x_i, y_i, z_i) is the three-dimensional coordinate of the i-th vertex; since each face model in the library has 15 specified control points, the generated face model correspondingly also has 15 control points.
3. The three-dimensional head portrait automatic generation method according to claim 2, characterized in that, in step 5: the segmentation result is a binary image of the same size as I, denoted I_p, in which hair-region pixels have value 1 and all other pixels have value 0.
4. The three-dimensional head portrait automatic generation method according to claim 3, characterized in that step 6 is specifically:
The hair texture map I_t of the input photo is generated according to the hair segmentation result, i.e., for each pixel position (x, y):
I_t(x, y) = I(x, y) · I_p(x, y)   (1)
For the hair texture map I_t, its texture expression I_d is obtained using the Gabor transform and a bag-of-words model, where I_d is a 1000-dimensional vector.
5. The three-dimensional head portrait automatic generation method according to claim 4, characterized in that step 7 is specifically:
The Hausdorff distance between I_p and each image H_i^p, i = 1, ..., 100, is computed, and the 10 images H_{i'}^p, i' = 1, ..., 10, with the smallest distances are found;
let the texture expression vectors in the hair database corresponding to H_{i'}^p, i' = 1, ..., 10, be H_{i'}^d, i' = 1, ..., 10; then the Euclidean distance between I_d and each H_{i'}^d is computed, and the model subscript i* of the minimum distance is found, i.e.
i^* = \arg\min_{i'} D(I^d, H_{i'}^d)   (2)
Then the i*-th model in the three-dimensional hairstyle library is the final matching model.
6. The three-dimensional head portrait automatic generation method according to claim 5, characterized in that step 8 is specifically:
For the generated three-dimensional face model, suppose the three-dimensional coordinates of its 15 control points at the skull position are (x_i^f, y_i^f, z_i^f), i = 1, ..., 15, and that their corresponding control points in the hair model are (x_i^h, y_i^h, z_i^h), i = 1, ..., 15; the affine transformation matrix T = [A b] is obtained by solving the following equation:
\begin{bmatrix} x_1^f & x_2^f & \cdots & x_{15}^f \\ y_1^f & y_2^f & \cdots & y_{15}^f \\ z_1^f & z_2^f & \cdots & z_{15}^f \\ 1 & 1 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} A & b \\ 0 \cdots 0 & 1 \end{bmatrix} \begin{bmatrix} x_1^h & x_2^h & \cdots & x_{15}^h \\ y_1^h & y_2^h & \cdots & y_{15}^h \\ z_1^h & z_2^h & \cdots & z_{15}^h \\ 1 & 1 & \cdots & 1 \end{bmatrix}   (3)
where A is a 3×3 matrix and b is a 3×1 vector, i.e., T is a 3×4 matrix;
then, the following affine transformation is applied to all vertices of the hairstyle model:
\begin{pmatrix} x_i' \\ y_i' \\ z_i' \end{pmatrix} = A \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} + b, \quad i = 1, \ldots, n_h
after this transformation, the hair model and the generated face model are well coordinated in size and relative position, and no manual adjustment is needed;
denoting the transformed hair model shape vector as H_s' = (x_1', y_1', z_1', ..., x_{n_h}', y_{n_h}', z_{n_h}'), the head model ultimately generated by the method comprises the shape vector H_s' of the hair part, the shape vector F_s of the face part, and their corresponding texture maps H_t and F_t.
7. The three-dimensional head portrait automatic generation method according to any one of claims 1-6, characterized in that: step 1 and step 2 are off-line processes.
CN201310312500.4A 2013-07-24 2013-07-24 A kind of three-dimensional head portrait automatic generation method Active CN103366400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310312500.4A CN103366400B (en) 2013-07-24 2013-07-24 A kind of three-dimensional head portrait automatic generation method


Publications (2)

Publication Number Publication Date
CN103366400A CN103366400A (en) 2013-10-23
CN103366400B true CN103366400B (en) 2017-09-12

Family

ID=49367665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310312500.4A Active CN103366400B (en) 2013-07-24 2013-07-24 A kind of three-dimensional head portrait automatic generation method

Country Status (1)

Country Link
CN (1) CN103366400B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102154470B1 (en) * 2018-09-30 2020-09-09 플렉스-브이알 디지털 테크놀로지 (상하이) 씨오., 엘티디. 3D Human Hairstyle Generation Method Based on Multiple Feature Search and Transformation

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955962B (en) * 2014-04-21 2018-03-09 华为软件技术有限公司 A kind of device and method of virtual human hair's generation
CN105719326A (en) * 2016-01-19 2016-06-29 华中师范大学 Realistic face generating method based on single photo
CN105676708A (en) * 2016-04-15 2016-06-15 深圳市金乐智能健康科技有限公司 Control method and system of intelligent haircut device
CN107615337B (en) * 2016-04-28 2020-08-25 华为技术有限公司 Three-dimensional hair modeling method and device
CN107451950A (en) * 2016-05-30 2017-12-08 北京旷视科技有限公司 Face image synthesis method, human face recognition model training method and related device
WO2018094653A1 (en) * 2016-11-24 2018-05-31 华为技术有限公司 User hair model re-establishment method and apparatus, and terminal
CN106652025B (en) * 2016-12-20 2019-10-01 五邑大学 A kind of three-dimensional face modeling method and printing equipment based on video flowing Yu face multi-attribute Matching
CN107527318B (en) * 2017-07-17 2021-06-04 复旦大学 Hair style replacement method based on generation countermeasure network model
CN107622227B (en) * 2017-08-25 2021-04-13 深圳依偎控股有限公司 3D face recognition method, terminal device and readable storage medium
CN108764048B (en) * 2018-04-28 2021-03-16 中国科学院自动化研究所 Face key point detection method and device
CN110910487B (en) * 2018-09-18 2023-07-25 Oppo广东移动通信有限公司 Construction method, construction device, electronic device, and computer-readable storage medium
CN109816764B (en) * 2019-02-02 2021-06-25 深圳市商汤科技有限公司 Image generation method and device, electronic equipment and storage medium
CN112156464B (en) * 2020-10-22 2023-03-14 腾讯科技(深圳)有限公司 Two-dimensional image display method, device and equipment of virtual object and storage medium
KR102555414B1 (en) * 2022-01-18 2023-07-17 주식회사 스칼라웍스 Apparatus for generating personal 3d face model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419868A (en) * 2010-09-28 2012-04-18 三星电子株式会社 Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN102800129A (en) * 2012-06-20 2012-11-28 浙江大学 Hair modeling and portrait editing method based on single image


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Capture of Hair Geometry from Multiple Images; Sylvain Paris et al.; ACM Transactions on Graphics; 2004-06-30; Vol. 23, No. 3; pp. 712-719 *
Realistic 3D head reconstruction by matching feature points of images and 3D models; Lin Yuan et al.; Journal of Image and Graphics; 2011-10-31; Vol. 16, No. 10; pp. 1876-1882 *
Automatic hair extraction method for personalized face animation generation; Shen Yehu et al.; Journal of Computer-Aided Design & Computer Graphics; 2010-11-30; Vol. 22, No. 11; pp. 1880-1886 *


Also Published As

Publication number Publication date
CN103366400A (en) 2013-10-23

Similar Documents

Publication Publication Date Title
CN103366400B (en) A kind of three-dimensional head portrait automatic generation method
US8624901B2 (en) Apparatus and method for generating facial animation
KR101635730B1 (en) Apparatus and method for generating montage, recording medium for performing the method
CN108573527B (en) Expression picture generation method and equipment and storage medium thereof
Shi et al. Automatic acquisition of high-fidelity facial performances using monocular videos
Bao et al. High-fidelity 3d digital human head creation from rgb-d selfies
WO2014117447A1 (en) Virtual hairstyle modeling method of images and videos
Ward et al. Depth director: A system for adding depth to movies
CN105844706A (en) Full-automatic three-dimensional hair modeling method based on single image
WO2006034256A2 (en) System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images
WO2006049147A1 (en) 3d shape estimation system and image generation system
CN103745209B (en) A kind of face identification method and system
CN113628327A (en) Head three-dimensional reconstruction method and equipment
Li et al. Spa: Sparse photorealistic animation using a single rgb-d camera
Xiao et al. Enhanced 3-D modeling for landmark image classification
Liao et al. Rapid 3D face reconstruction by fusion of SFS and Local Morphable Model
CN110648394A (en) Three-dimensional human body modeling method based on OpenGL and deep learning
Chen et al. Character animation creation using hand-drawn sketches
Shin et al. A morphable 3D-model of Korean faces
Moeini et al. Expression-invariant three-dimensional face reconstruction from a single image by facial expression generic elastic models
Johnston et al. Single View 3D Point Cloud Reconstruction using Novel View Synthesis and Self-Supervised Depth Estimation
Li et al. Example-based 3D face reconstruction from uncalibrated frontal and profile images
Zhang et al. SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction
Sucontphunt et al. 3D facial surface and texture synthesis using 2D landmarks from a single face sketch
Liang et al. Fusing deep convolutional network with SFM for 3D face reconstruction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 Shenzhen, Nanshan District, Chaguang Road No. 1063, e-commerce Industrial Park 9B

Applicant after: Shenzhen Huachuang Zhenxin Technology Development Co., Ltd.

Address before: 518000 Guangdong Province, Shenzhen, Nanshan District, Xili, University Town Creative Park, Building 2, B411

Applicant before: Shenzhen Huachuang Zhenxin Technology Development Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant