CN110189404A - Virtual facial modeling method based on real human face image - Google Patents
- Publication number
- CN110189404A CN110189404A CN201910469527.1A CN201910469527A CN110189404A CN 110189404 A CN110189404 A CN 110189404A CN 201910469527 A CN201910469527 A CN 201910469527A CN 110189404 A CN110189404 A CN 110189404A
- Authority
- CN
- China
- Prior art keywords
- face
- facial
- expression
- animation
- head model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
Abstract
The invention discloses a virtual facial modeling method based on real human face images, comprising the steps of: one) making a head model; two) producing facial-expression texture sets; three) in Unity 3D, turning each captured expression texture set into a corresponding expression animation with Unity 3D's built-in frame animator; four) making an expression animation controller; five) importing decorative elements onto the 3D head model to embellish the face; six) writing a script that controls the displacement of the 3D head model so that it follows the facial movements with corresponding motion. The invention uses facial expression images of a real person to texture the face of an established virtual 3D head model, so that the face of the resulting virtual character head model closely resembles the real person's face; and the modeling process needs none of the complex spatial design required by existing 3D modeling software, so it takes less time and costs less.
Description
Technical field
The present invention relates to the field of three-dimensional modeling technology, and in particular to a virtual facial modeling method based on real human face images.
Background art
With the continuous development of science and technology, people pursue a higher quality of life, and the appearance of conversational humanoid robots is striking. The rise of VR technology adds to their sense of immersion: after putting on a VR device, an experiencer can interact cordially with a virtual robot in a computer-generated simulated environment. The future market for virtual robots has already begun to penetrate industries such as hosting and reception, and it is expected that in the near future virtual robots will be widely used throughout the service industry.
Unity 3D is the most widely used platform supporting virtual interaction. To make that interaction more realistic, the human-like appearance of character models imported into Unity 3D has become a focus of attention, and the most critical part is facial modeling: only when the 3D model reproduces the real person's face with high fidelity and recognizability can the experiencer feel immersion and satisfaction during virtual interaction.
At present, the facial modeling of 3D character models is done with modeling software such as 3ds Max and ZBrush. The character faces so produced differ considerably from the real person's face, and the modeling is time-consuming and costly.
Summary of the invention
In view of this, the object of the present invention is to provide a virtual facial modeling method based on real human face images, to solve the technical problems of the prior art, in which virtual character heads built with 3D software have faces that differ considerably from the real person's and take a long time to model.
The virtual facial modeling method based on real human face images of the present invention comprises the following steps:
One) making a head model, which in turn includes the following steps:
a) collecting a full-face photograph of a real person, and converting it into a 3D head model using Insight 3D;
b) in Insight 3D, separating the facial portion of the 3D head model from the rest of the model, and smoothing the separated facial portion into an ellipsoid;
Two) producing the facial-expression texture sets, with the following specific steps:
1) shooting videos of the real person performing each facial expression required for the virtual character;
2) using video editing software, intercepting each expression video frame by frame to obtain expression pictures;
3) in Insight 3D, performing a UV-unwrap operation on the ellipsoid separated in step b) to convert the three-dimensional face model into a plane image;
4) importing the facial UV unwrap obtained in step 3) into Photoshop, fitting all the expression pictures of one expression video obtained in step 2) onto the facial UV unwrap one by one, and then exporting the textures, yielding one expression texture set;
5) repeating step 4) to obtain the texture sets of all facial expressions required for the virtual character;
Three) in Unity 3D, turning each of the captured expression texture sets into a corresponding expression animation with Unity 3D's built-in frame animator;
Four) making the expression animation controller, with the following specific steps:
a) importing the 3D head model, with its facial portion separated from the rest, obtained in step one) into Unity 3D, selecting the separated ellipsoidal face model, and selecting Create to add an AnimatorController;
b) in the animation controller, selecting the sub-option From New Blend Tree under Create State to create a state tree, changing the Blend Type of the state tree to 2D Freeform Directional, and setting its Parameters to X and Y;
c) in the Motion list of the Blend Tree, adding as many Motion fields as there are expression animations, and importing the expression animations one by one; here X and Y are the coordinate position of each expression animation.
Five) importing decorative elements onto the 3D head model to embellish the face.
Six) writing a script that controls the displacement of the 3D head model so that it follows the facial movements with corresponding motion, with the following specific steps:
a) determining the approximate angle of the facial action in each frame of each expression animation, including the up-down rotation angle θk of the face about the X axis and the left-right rotation angle φk of the face about the Y axis;
b) writing a script in which two containers are defined, used respectively to store the per-frame facial action angles θk and φk;
c) having the script control the rotation of the 3D head model so that its rotation angle stays consistent with the facial action angles θk and φk of each frame of the expression animation.
Beneficial effects of the present invention:
The virtual facial modeling method based on real human face images of the present invention uses facial expression images of a real person to texture the face of an established virtual 3D head model, so that the face of the resulting virtual character head model closely resembles the real person's face; and the modeling process needs none of the complex spatial design required by existing 3D modeling software, so it takes less time and costs less.
Brief description of the drawings
Fig. 1 is a structure chart of the emotional-state Blend Tree;
Fig. 2 is a schematic diagram of the 3D head model and of separating the facial portion from it;
Fig. 3 shows the ellipsoidal facial portion and its UV unwrap;
Fig. 4 shows the successive frame textures of a facial expression action.
Specific embodiment
The invention will be further described with reference to the accompanying drawings and examples.
The virtual facial modeling method based on real human face images of this embodiment comprises the following steps:
One) making a head model, which in turn includes the following steps:
a) collecting a full-face photograph of a real person, and converting it into a 3D head model using Insight 3D;
b) in Insight 3D, separating the facial portion of the 3D head model from the rest of the model, and smoothing the separated facial portion into an ellipsoid. The specific operation in Insight 3D is as follows: first select the head model and convert it to an editable polygon, then select the facial part of the model and choose the detach (separation) option under EDIT GEOMETRY, so that the face is separated from the rest of the head model; then select the facial part again, likewise convert it to an editable polygon, and smooth it into an ellipsoidal shape.
Two) producing the facial-expression texture sets, with the following specific steps:
1) shooting videos of the real person performing each facial expression required for the virtual character, such as a surprised-expression video, a laughing-expression video, a speaking-expression video, and so on;
2) using video editing software, intercepting each expression video frame by frame to obtain expression pictures. In this embodiment the expression pictures are obtained with the Adobe Premiere video editing software; in other embodiments, any other existing video editing software may of course be used;
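The frame-by-frame interception in step 2) is done manually in a video editor. As a hedged illustration only, the same operation can be expressed as a routine that walks a decoded frame stream and stores each expression picture; the `frame_source` and `save_frame` callables are hypothetical injection points so that any decoder or editor export could stand behind them.

```python
def extract_frames(frame_source, save_frame):
    """Intercept an expression-action video frame by frame.
    `frame_source` yields decoded frames in order; `save_frame(i, frame)`
    stores expression picture number i. Returns the frame count."""
    count = 0
    for frame in frame_source:
        save_frame(count, frame)
        count += 1
    return count
```

In practice `frame_source` might wrap a video decoder and `save_frame` might write numbered PNG files; here they are left abstract.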
3) in Insight 3D, performing a UV-unwrap operation on the ellipsoid separated in step b) to convert the three-dimensional face model into a plane image. The specific operation in Insight 3D is as follows: select the face model and convert it to an editable polygon, then select UVW unwrap in the modifier list; then choose pelt mapping in the peel options, select the pelt UV, and relax it until it becomes a planar figure; obtain the UV unwrap and save it;
4) importing the facial UV unwrap obtained in step 3) into Photoshop, fitting all the expression pictures of one expression video obtained in step 2) onto the facial UV unwrap one by one, and then exporting the textures, yielding one expression texture set;
5) repeating step 4) to obtain the texture sets of all facial expressions required for the virtual character.
Three) in Unity 3D, turning each of the captured expression texture sets into a corresponding expression animation with Unity 3D's built-in frame animator.
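Unity's built-in frame animator steps through a texture set at a fixed frame rate. The class below is a minimal, language-neutral sketch of that behavior (the class and method names are hypothetical, and this is not Unity's API):

```python
class FrameAnimation:
    """Minimal stand-in for a frame animator: plays a texture set
    (the expression pictures of one texture set) at a fixed fps."""
    def __init__(self, textures, fps=24):
        self.textures = list(textures)
        self.fps = fps

    def texture_at(self, t):
        """Texture shown t seconds after the animation starts,
        clamped to the last frame rather than looping."""
        idx = min(int(t * self.fps), len(self.textures) - 1)
        return self.textures[idx]
```

Swapping the whole face texture per frame is what makes the separated ellipsoidal face appear to move, while the underlying mesh stays fixed.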
Four) making the expression animation controller, with the following specific steps:
a) importing the 3D head model, with its facial portion separated from the rest, obtained in step one) into Unity 3D, selecting the separated ellipsoidal face model, and selecting Create to add an AnimatorController;
b) in the animation controller, selecting the sub-option From New Blend Tree under Create State to create a state tree, changing the Blend Type of the state tree to 2D Freeform Directional, and setting its Parameters to X and Y;
c) in the Motion list of the Blend Tree, adding as many Motion fields as there are expression animations, and importing the expression animations one by one; here X and Y are the coordinate position of each expression animation. For example, if the sad expression animation sits at coordinate (0, 0) and the laughing expression animation at (0, 1), then as the coordinate position moves from (0, 0) toward (0, 1), the character model's expression gradually transforms from sad to laughing.
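The blend described above can be pictured as weighting each expression animation by its distance from the blend parameter (X, Y). The sketch below uses simple inverse-distance weighting purely as intuition for why moving from (0, 0) to (0, 1) fades sad into laughing; Unity's actual 2D Freeform Directional blending uses a different, gradient-band interpolation, so this is an assumption-laden illustration, not Unity's algorithm.

```python
import math

def blend_weights(anims, x, y):
    """Weight each expression animation by inverse distance from the
    blend parameter (x, y). `anims` maps animation name -> (X, Y)
    coordinate as entered in the Blend Tree's Motion list."""
    inv = {}
    for name, (ax, ay) in anims.items():
        d = math.hypot(x - ax, y - ay)
        if d == 0.0:
            # exactly on a sample point: that animation plays alone
            return {n: (1.0 if n == name else 0.0) for n in anims}
        inv[name] = 1.0 / d
    total = sum(inv.values())
    return {n: w / total for n, w in inv.items()}
```

At the midpoint (0, 0.5) of the example above, the sad and laughing animations each receive weight 0.5.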
Five) importing decorative elements onto the 3D head model to embellish the face.
Six) writing a script that controls the displacement of the 3D head model so that it follows the facial movements with corresponding motion, with the following specific steps:
a) determining the approximate angle of the facial action in each frame of each expression animation, including the up-down rotation angle θk of the face about the X axis and the left-right rotation angle φk of the face about the Y axis; the horizontal direction of the face is parallel to the X axis, and the vertical direction of the face is parallel to the Y axis;
b) writing a script in which two containers are defined, used respectively to store the per-frame facial action angles θk and φk;
c) having the script control the rotation of the 3D head model so that its rotation angle stays consistent with the facial action angles θk and φk of each frame of the expression animation.
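The two containers of step b) and the per-frame rotation of step c) can be sketched as follows. This is a plain Python stand-in for the Unity script described above, with hypothetical names:

```python
class HeadRotationDriver:
    """Two containers store the per-frame pitch angle theta_k (about the
    X axis, up-down) and yaw angle phi_k (about the Y axis, left-right)
    measured from the expression video; the driver then reports the
    rotation the head model should take on each frame."""
    def __init__(self):
        self.theta = []  # container 1: up-down rotation per frame
        self.phi = []    # container 2: left-right rotation per frame

    def record(self, theta_k, phi_k):
        """Append the measured angles for the next frame."""
        self.theta.append(theta_k)
        self.phi.append(phi_k)

    def pose_for_frame(self, k):
        """(theta_k, phi_k) the head model must match on frame k."""
        return (self.theta[k], self.phi[k])
```

In the Unity script, `pose_for_frame` would correspond to setting the head transform's rotation each time the frame animation advances.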
In a specific implementation, voice control of the 3D model's facial expression can also be realized by writing a voice-control script for the virtual character system. The specific implementation is as follows:
1) define a voice-input plug-in, AudioRecorder, in the Unity 3D control script, and use it to judge whether voice input from an external microphone is reaching the virtual character system;
2) when voice input reaches the virtual character system, recognize the input voice through the Baidu speech recognition interface, converting the external input voice into a text value;
3) pass the text value obtained in step 2) to the Baidu Understanding and Interaction Technology platform (UNIT); UNIT's preset natural-language-understanding function processes the incoming text value and returns the corresponding reply content, still in text form;
4) determine the emotional state from the returned text content and issue a control instruction to the animation controller, which executes the corresponding expression action. For example, if the returned text contains the string "laugh", the voice-control script issues a control command to the animation controller, the animation controller plays the prepared "laugh" expression animation, and the virtual character's face accordingly makes a laughing expression.
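The "laugh" example above amounts to a small mapping from UNIT's reply text to an animation-controller command. A hedged sketch, with the substring-matching rule assumed for illustration (the patent only states that the returned text determines the emotional state):

```python
def expression_command(reply_text, known_expressions):
    """Map a text reply to an animation command by keyword matching.
    Returns the first known expression name found in the reply, or
    None when no keyword matches (leave the current expression playing)."""
    for name in known_expressions:
        if name in reply_text:
            return name
    return None
```

The returned name would then be forwarded to the animation controller, which plays the matching expression animation.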
Also in a specific implementation, a speaking expression of the face can be produced by a control script of the virtual character system. The specific implementation is to define Unity 3D's built-in voice player plug-in, AudioSource, in the control script. After external input voice passes through steps 1), 2) and 3) above, the virtual character system returns a reply in text form; after synthesis by the Baidu speech synthesis interface, the reply is converted from text into speech and played under AudioSource control. The speaking expression animation of the virtual character is bound to the state of AudioSource: while the voice is playing, i.e. while AudioSource is in the IsPlaying state, the animation controller simultaneously plays the speaking expression animation, and the virtual character's face produces the changing expression of speech.
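The AudioSource binding reduces to a small state rule: the speaking animation is active exactly while IsPlaying is true. A minimal sketch with hypothetical names (not Unity's API):

```python
class TalkingExpressionBinding:
    """Mirror the AudioSource/expression binding: the talking animation
    plays while is_playing is true and stops when speech finishes."""
    def __init__(self):
        self.animation = "idle"

    def update(self, is_playing):
        """Poll the audio player's playing flag once per tick and
        set the active expression animation accordingly."""
        self.animation = "talking" if is_playing else "idle"
        return self.animation
```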
The virtual facial modeling method based on real human face images of this embodiment uses facial expression images of a real person to texture the face of an established virtual 3D head model, so that the face of the resulting virtual character head model closely resembles the real person's face; and the modeling process needs none of the complex spatial design required by existing 3D modeling software, so it takes less time and costs less.
Finally, it is stated that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution of the invention may be modified or equivalently replaced without departing from its purpose and scope, and all such modifications shall be covered by the claims of the present invention.
Claims (1)
1. A virtual facial modeling method based on real human face images, characterized by comprising the following steps:
One) making a head model, which in turn includes the following steps:
a) collecting a full-face photograph of a real person, and converting it into a 3D head model using Insight 3D;
b) in Insight 3D, separating the facial portion of the 3D head model from the rest of the model, and smoothing the separated facial portion into an ellipsoid;
Two) producing the facial-expression texture sets, with the following specific steps:
1) shooting videos of the real person performing each facial expression required for the virtual character;
2) using video editing software, intercepting each expression video frame by frame to obtain expression pictures;
3) in Insight 3D, performing a UV-unwrap operation on the ellipsoid separated in step b) to convert the three-dimensional face model into a plane image;
4) importing the facial UV unwrap obtained in step 3) into Photoshop, fitting all the expression pictures of one expression video obtained in step 2) onto the facial UV unwrap one by one, and then exporting the textures, yielding one expression texture set;
5) repeating step 4) to obtain the texture sets of all facial expressions required for the virtual character;
Three) in Unity 3D, turning each of the captured expression texture sets into a corresponding expression animation with Unity 3D's built-in frame animator;
Four) making the expression animation controller, with the following specific steps:
a) importing the 3D head model, with its facial portion separated from the rest, obtained in step one) into Unity 3D, selecting the separated ellipsoidal face model, and selecting Create to add an Animator Controller;
b) in the animation controller, selecting the sub-option From New Blend Tree under Create State to create a state tree, changing the Blend Type of the state tree to 2D Freeform Directional, and setting its Parameters to X and Y;
c) in the Motion list of the Blend Tree, adding as many Motion fields as there are expression animations, and importing the expression animations one by one; here X and Y are the coordinate position of each expression animation;
Five) importing decorative elements onto the 3D head model to embellish the face;
Six) writing a script that controls the displacement of the 3D head model so that it follows the facial movements with corresponding motion, with the following specific steps:
a) determining the approximate angle of the facial action in each frame of each expression animation, including the up-down rotation angle θk of the face about the X axis and the left-right rotation angle φk of the face about the Y axis;
b) writing a script in which two containers are defined, used respectively to store the per-frame facial action angles θk and φk;
c) having the script control the rotation of the 3D head model so that its rotation angle stays consistent with the facial action angles θk and φk of each frame of the expression animation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910469527.1A CN110189404B (en) | 2019-05-31 | 2019-05-31 | Virtual face modeling method based on real face image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110189404A true CN110189404A (en) | 2019-08-30 |
CN110189404B CN110189404B (en) | 2023-04-07 |
Family
ID=67719502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910469527.1A Active CN110189404B (en) | 2019-05-31 | 2019-05-31 | Virtual face modeling method based on real face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110189404B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111124386A (en) * | 2019-12-23 | 2020-05-08 | 上海米哈游天命科技有限公司 | Unity-based animation event processing method, device, equipment and storage medium |
CN114723860A (en) * | 2022-06-08 | 2022-07-08 | 深圳智华科技发展有限公司 | Method, device and equipment for generating virtual image and storage medium |
CN115908655A (en) * | 2022-11-10 | 2023-04-04 | 北京鲜衣怒马文化传媒有限公司 | Virtual character facial expression processing method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1920886A (en) * | 2006-09-14 | 2007-02-28 | 浙江大学 | Video flow based three-dimensional dynamic human face expression model construction method |
CN101739712A (en) * | 2010-01-25 | 2010-06-16 | 四川大学 | Video-based 3D human face expression cartoon driving method |
US20100259538A1 (en) * | 2009-04-09 | 2010-10-14 | Park Bong-Cheol | Apparatus and method for generating facial animation |
WO2017141223A1 (en) * | 2016-02-20 | 2017-08-24 | Vats Nitin | Generating a video using a video and user image or video |
CN108564642A (en) * | 2018-03-16 | 2018-09-21 | 中国科学院自动化研究所 | Unmarked performance based on UE engines captures system |
CN108734757A (en) * | 2017-04-14 | 2018-11-02 | 北京佳士乐动漫科技有限公司 | A kind of method that sound captures realization 3 D human face animation with expression |
CN109410298A (en) * | 2018-11-02 | 2019-03-01 | 北京恒信彩虹科技有限公司 | A kind of production method and expression shape change method of dummy model |
Non-Patent Citations (3)
Title |
---|
He Qinzheng; Wang Yunqiao: "Research on a Kinect-based facial expression capture and animation simulation system" *
Ye Yanfang; Huang Xiyue; Shen Zhixi: "A face detection method based on skin color and template matching" *
Qian Kun: "A 3D expression animation generation system based on video analysis" *
Also Published As
Publication number | Publication date |
---|---|
CN110189404B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110189404A (en) | Virtual facial modeling method based on real human face image | |
US9734613B2 (en) | Apparatus and method for generating facial composite image, recording medium for performing the method | |
Seol et al. | Artist friendly facial animation retargeting | |
CN105205846B (en) | Ink animation production method | |
JP2009534774A (en) | Face animation created from acting | |
CN103606190A (en) | Method for automatically converting single face front photo into three-dimensional (3D) face model | |
Park et al. | Example‐based motion cloning | |
Han et al. | Exemplar-based 3d portrait stylization | |
Orvalho et al. | Transferring the rig and animations from a character to different face models | |
CN104102487A (en) | Visual edit method and visual editor for 3D (three-dimensional) game role Avatar | |
Cetinaslan et al. | Sketch-Based Controllers for Blendshape Facial Animation. | |
Li et al. | 3D paper‐cut modeling and animation | |
Tejera et al. | Animation control of surface motion capture | |
Zhao et al. | Artistic Style Analysis of Root Carving Visual Image Based on Texture Synthesis | |
CN112435319A (en) | Two-dimensional animation generating system based on computer processing | |
CN100446039C (en) | Layering view point correlated model based computer assisted two-dimensional cartoon drawing method by hand | |
Liu et al. | Research on the computer case design of 3D human animation visual experience | |
Jiang et al. | Animation scene generation based on deep learning of CAD data | |
Jin et al. | A Semi-automatic Oriental Ink Painting Framework for Robotic Drawing from 3D Models | |
Kang et al. | A Style Transfer Network of Local Geometry for 3D Mesh Stylization | |
Wang et al. | Artificial intelligence application of virtual reality technology in digital media art creation | |
CN117576280B (en) | Intelligent terminal cloud integrated generation method and system based on 3D digital person | |
Miranda | Intuitive real-time facial interaction and animation | |
Lin et al. | Automatic Placement/Arrangement of Underneath Bone Structures for 3D Facial Expressions and Animations. | |
CN117745890A (en) | Artificial intelligence dynamic shadow-play generation method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||