CN100527170C - Complex expression emulation system and implementation method - Google Patents

Complex expression emulation system and implementation method

Info

Publication number
CN100527170C
CN100527170C CNB2006101530320A CN200610153032A
Authority
CN
China
Prior art keywords
expression
face
combination
module
basis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006101530320A
Other languages
Chinese (zh)
Other versions
CN101149840A (en)
Inventor
杨斌
贾培发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CNB2006101530320A priority Critical patent/CN100527170C/en
Publication of CN101149840A publication Critical patent/CN101149840A/en
Application granted granted Critical
Publication of CN100527170C publication Critical patent/CN100527170C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The complex facial expression simulation system comprises a face acquisition module, a control-object generation module, a basis expression database module, a combination expression database module, an expression synthesis module and an expression generation module. Following the topological and anatomical structure of the human face, it extracts the objects to be controlled (the facial radial muscles, the orbicularis oris, the eyelids, the eyeballs and the jaw) from a three-dimensional face model, then drives points on the model according to control parameters so that the points move and produce various basis expressions; it can also combine multiple basis expressions to produce a complex synthetic facial expression.

Description

Complex expression emulation system and implementation method
Technical field
The present invention relates to a computer simulation method and system, in particular to a method by which a computer automatically generates complex simulated facial expressions, and to a complex expression emulation system established according to this method.
Background technology
Facial expression is an important channel for human non-verbal communication and an important aspect of human emotional interaction. If a computer interacts with people by expressing itself through facial expressions, using a software or hardware system, the interaction appears more humanized and better meets people's aesthetic expectations. A system that can be controlled simply yet generate a wide range of complex expressions therefore has an exceptional application prospect: it can serve many research fields, such as affective computing and human-computer interaction (including computer animation and intelligent robots), and many applications, such as education, medicine, entertainment and communication.
The most established facial expression simulation methods at present are the Facial Action Coding System (FACS) and the combination of the MPEG-4 facial definition parameters (FDP) and facial animation parameters (FAP).
In 1978, Ekman and Friesen studied six basic expressions (namely happiness, sadness, surprise, fear, anger and disgust), systematically built an image library containing thousands of different facial expressions, and developed the Facial Action Coding System (FACS) to describe human facial expressions. Based on the anatomical characteristics of the human face, they divided the face into 46 action units (AUs) that are independent yet interrelated, and analyzed the motion characteristics of these units, the facial region each unit controls and the related expressions, illustrating them with a large number of photographs. FACS is an enumeration of all the action units that produce facial movement.
In later research, because humans occupy a key position in multimedia, MPEG-4 defined an international standard for three-dimensional facial animation. It is a complete formal description of the face, comprising facial definition parameters (FDP) used to define the face model and a set of facial animation parameters (FAP) used to define facial actions. The FDP defines the face model with the positions of 84 feature points, which include not only externally visible facial feature points but also feature points of organs inside the oral cavity such as the tongue and teeth. The FDP parameters include feature point coordinates, texture coordinates, mesh scale, facial texture, animation definition tables and other facial characteristic parameters. Corresponding to the static FDP parameters are the dynamic FAP parameters, divided into 10 groups, which describe 68 basic movements and 6 basic expressions of the human face. The FAP set is a complete set of basic facial movements; each FAP describes the motion of one facial region in a certain direction (for example, FAP 19 describes the up-and-down movement of the left upper eyelid), and combining all the FAPs can represent complex facial expressions.
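As a concrete illustration of the FAP mechanism, the following minimal Python sketch drives a feature point from a single FAP value. FAP 19 and its meaning (up-and-down movement of the left upper eyelid) come from the text above; the FAPU value, point names and axis mapping are assumptions for illustration, not the MPEG-4 tables.

    # Minimal sketch of FAP-driven animation; mappings here are illustrative.
    fapu = {"IRISD": 0.3}  # assumed facial-animation parameter unit

    def apply_fap(feature_points, fap_id, value):
        """Displace the feature point driven by one FAP (assumed mapping)."""
        if fap_id == 19:  # FAP 19: up-and-down movement of the left upper eyelid
            feature_points["left_upper_eyelid"][1] -= value * fapu["IRISD"]
        return feature_points

    points = {"left_upper_eyelid": [31.0, 42.0, 10.0]}
    apply_fap(points, 19, 0.6)  # combining all FAPs yields a complex expression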
Both methods have mature applications in generating simulated expressions, but each has its own weaknesses.
The main weakness of FACS is that the human face is a soft whole rather than a rigid body; the action units are localized spatial templates, and localized templates cannot be combined to form all facial expressions, so it is difficult to express subtle expressions within the FACS framework.
The main weakness of the MPEG-4 method is that, because the FAP definitions come from computer animation applications and define deformations of the facial surface, the method is well suited to generating simulated expressions in the computer animation field but hard to extend to other fields, such as humanoid robots.
Summary of the invention
In view of the problems described above, the present invention provides a method by which a computer automatically generates complex simulated facial expressions, and a complex expression emulation system established according to this method, to overcome the deficiencies of the facial expression simulation methods discussed above.
One aspect of the present invention provides a complex expression emulation system comprising: a face acquisition module for generating a three-dimensional face model; a control-object generation module, connected to the face acquisition module, for analyzing the three-dimensional face model and generating expression objects and control objects; a basis expression database module for storing basis expression parameters; a combination expression database module for storing combination expressions; an expression synthesis module, connected to the basis expression database module and the combination expression database module, for generating the control parameters of a combination expression according to its combination mode; and an expression generation module, connected to the control-object generation module and the expression synthesis module, responsible for controlling each control object according to the input control parameters to generate complex simulated facial expressions.
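A structural sketch of the six modules and their connections may make the data flow clearer; all class, method and variable names below are assumptions for illustration rather than the patent's own API.

    class FaceAcquisitionModule:
        """Generates the three-dimensional face model (scanner or modeling tool)."""
        def acquire(self):
            return {"vertices": [], "eyelids": [], "eyeballs": [], "jaw": []}

    class ControlObjectGenerator:
        """Analyzes the model and yields expression objects and control objects."""
        def generate(self, model):
            return {"radial_muscles": [], "orbicularis_oris": None,
                    "eyelids": [], "eyeballs": [], "jaw": None}

    basis_db = {}  # basis expression database: name -> control parameters
    combo_db = {}  # combination expression database: name -> (basis names, rule)

    class ExpressionSynthesizer:
        """Combines basis parameters into combination-expression parameters."""
        def synthesize(self, combo_name):
            names, rule = combo_db[combo_name]
            params = {}
            for n in names:
                params.update(basis_db[n])
            return rule(params) if rule else params

    class ExpressionGenerator:
        """Drives each control object to transform points on the face model."""
        def render(self, model, control_objects, params):
            for name, value in params.items():
                pass  # placeholder: apply `value` to the matching control object
            return model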
The basis expression database module of the present invention stores the names and control parameters of the various basis expressions; the control parameters are obtained by recording the control parameters of a three-dimensional face model displaying the basis expression.
The combination expression database module of the present invention stores the name of each complex expression together with the names and combination rules of the basis expressions that compose it.
Another aspect of the present invention provides a method by which a computer automatically generates complex simulated facial expressions, comprising the steps of: acquiring a three-dimensional model of a face; generating the relevant expression objects; generating control objects and constraint conditions according to the topological and anatomical structure of the face; controlling the different expression control objects with externally input parameters to transform points on the three-dimensional face model, generating various basis simulated expressions; storing the names and corresponding control parameters of the basis expressions in the basis expression database; combining multiple basis expressions to form combination expressions and storing them in the combination expression database; applying combination rules to perform combination calculations on the basis expressions contained in a combination expression, forming the control parameters of the combination expression; and, according to the control parameters of the combination expression, controlling the parameters that transform the points of each object on the three-dimensional face model, generating the various combined complex simulated expressions.
The facial expression objects of the present invention comprise a facial expression object, a mouth expression object, an eye expression object and a jaw expression object.
The facial expression control objects of the present invention comprise the facial radial muscles, the orbicularis oris, the eyelids, the eyeballs and the jaw.
The automatic facial expression simulation system of the present invention and its implementation method start from the physiological structure of the human face and the physiological mechanism of expression generation, model facial expressions with an object-oriented multi-dimensional-space modeling method, and realize simulated expressions by controlling the model. Compared with classic methods, the main advantages of the present invention are as follows. By simulating the process of expression generation and dividing the face into expression objects and expression control objects in an object-oriented way, all natural expressions can be described completely with the controlled attributes of the expression control objects, saving storage space and improving computational efficiency. The expression control objects are associated with the three-dimensional face model through spatial constraints; simply setting the attributes of the expression control objects and applying their control methods realizes the generation of simulated expressions, simplifying the operation of the three-dimensional face model. The constraint conditions of each object, together with the spatial constraints, determine the overall constraints of the parameterized expression model, so that the points on the surface of the virtual face move in linkage and no distortion appears when a virtual expression is generated from the expression parameters. By combining basis expressions, expressions of arbitrary complexity can be generated, in particular solving the problem of expressing facial expression and mouth shape in a unified way. Because the control follows bionic principles and is designed entirely from the standpoint of object-oriented computer control, the control method can be applied not only in computer software but also in hardware control, for example a robot head actuated by artificial muscles. The expression parameters formed by this method are universal and can be used on different face models.
Description of drawings
Fig. 1 is a schematic diagram of the structure of the automatic facial expression simulation system of the present invention;
Fig. 2 is a flowchart of the automatic generation of simulated facial expressions according to the present invention;
Fig. 3 is a schematic diagram of the distribution of the facial control objects of the present invention;
Fig. 4 is a schematic diagram of the distribution of the mouth control objects of the present invention;
Fig. 5 is a schematic diagram of the distribution of the eye control objects of the present invention;
Fig. 6 is a schematic diagram of the distribution of the jaw control objects of the present invention;
Fig. 7 is a schematic diagram of a three-dimensional face model generated with 3DS MAX;
Fig. 8 is a schematic diagram of the action points of the various parts on the three-dimensional face;
Fig. 9 is a schematic diagram of the control range of the left zygomaticus major;
Fig. 10 shows the effects of controlling the left zygomaticus major, the left eyelid and the jaw separately;
Fig. 11 shows the effects of some basis expressions;
Fig. 12 shows the effects of some combination expressions.
In Fig. 1: 1, face acquisition module; 2, control-object generation module; 3, basis expression database module; 4, combination expression database module; 5, expression synthesis module; 6, expression generation module.
In Fig. 3: 301, left and right frontalis; 302, left and right outer frontalis; 303, left and right procerus; 304, left and right corrugator supercilii; 305, left and right levator labii superioris alaeque nasi; 306, left and right levator labii superioris; 307, left and right zygomaticus minor; 308, left and right zygomaticus major; 309, left and right buccinator; 310, left and right depressor anguli oris; 311, left and right depressor labii inferioris.
In Fig. 8: 801, points controlled by each control object; 802, points controlled only by the jaw and eyelids.
In Fig. 10: 1001, jaw half open; 1002, jaw fully open; 1003, left zygomaticus major contracted 0.5; 1004, left zygomaticus major contracted 1.0; 1005, left eyelid half closed; 1006, left eyelid fully closed.
In Fig. 11: 1101, sadness; 1102, pronouncing ' '; 1103, happiness.
In Fig. 12: 1201, saying ' ' sadly; 1202, saying ' ' happily.
Table 1 lists the control parameters of some basis expressions;
Table 2 lists the control parameters of some combination expressions.
Embodiment
The purpose, features and advantages of the present invention may be better understood from the following detailed description of embodiments of the automatic facial expression simulation system and method, taken in conjunction with the accompanying drawings.
The complex facial expression emulation system of the present invention is described with reference to Fig. 1. As shown in Fig. 1, the automatic facial expression simulation system is composed of: a face acquisition module 1 for generating a three-dimensional face model; a control-object generation module 2, connected to the face acquisition module 1, for analyzing the three-dimensional face model and generating the control objects; a basis expression database module 3 for storing basis expression parameters; a combination expression database module 4 for storing combination expressions; an expression synthesis module 5, connected to the basis expression database module 3 and the combination expression database module 4, for generating combination expression parameters from the combination expressions; and an expression generation module 6, connected to the control-object generation module 2 and the expression synthesis module 5, for controlling each control object according to the input control parameters to generate complex simulated facial expressions. The face acquisition module 1 may be a three-dimensional scanner that builds the face model by scanning a real or model head, or modeling software such as 3DS MAX or MAYA with which a designer builds the face model by hand. The control-object generation module 2 analyzes the three-dimensional face model generated by the face acquisition module 1, locates the expression objects such as the face, mouth, eyes and jaw, and locates the control objects such as the facial radial muscles, the orbicularis oris, the eyelids, the pupils and the jaw according to anatomical structure. The basis expression database module 3 mainly stores the names and control parameters of the various basis expressions. The combination expression database module 4 mainly stores the name of each combination expression together with the names and combination parameters of the basis expressions composing it. The expression synthesis module 5 combines the basis expression control parameters stored in the basis expression database module 3 according to the combination parameters in the combination expression database module 4, generating combination expression control parameters. The expression generation module 6 uses the combination expression control parameters generated by the expression synthesis module 5 to control each control object to transform the points on the three-dimensional face model, thereby generating the corresponding simulated expression.
The method by which a computer automatically generates simulated facial expressions according to the present invention is described with reference to Fig. 2. When the simulated expression of a virtual face is needed: first, at step S1, the face acquisition module 1 acquires the three-dimensional face model and passes it to the control-object generation module 2. At step S2, the control-object generation module 2 locates the face, mouth, eye and jaw objects according to the topological structure of the three-dimensional face model. At step S3, the control-object generation module 2 locates the control objects, such as the facial radial muscles, the orbicularis oris, the eyelids, the pupils and the jaw, according to anatomical principles, generates the constraint relations between each point on the three-dimensional face model and the control objects, and passes them to the expression generation module 6. At step S4, the expression generation module 6 controls the different expression control objects with externally input parameters to generate various basis expressions. At step S5, the control parameters of the different basis expressions are saved into the basis expression database module 3. At step S6, basis expressions are selected and combined, and the result is saved into the combination expression database module 4. At step S7, when a specific simulated expression is to be generated, the expression synthesis module 5 retrieves from the combination expression database module 4 the basis expressions that the combination expression contains, retrieves the corresponding control parameters of these basis expressions from the basis expression database module 3, and calculates the control parameters of the combination expression according to the combination rules. At step S8, the expression generation module 6 transforms the positions of the points on the three-dimensional face model according to the combination expression control parameters calculated in step S7, generating the specific combined simulated expression.
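Putting steps S1-S8 together, a hedged sketch of the overall flow, reusing the placeholder classes from the sketch above (all names remain assumptions):

    def generate_combination_expression(combo_name):
        acquisition = FaceAcquisitionModule()
        ctrl_gen = ControlObjectGenerator()
        synthesizer = ExpressionSynthesizer()
        generator = ExpressionGenerator()

        model = acquisition.acquire()                 # S1: acquire the 3-D face model
        objs = ctrl_gen.generate(model)               # S2-S3: locate objects and constraints
        # S4-S6 (authoring basis expressions and their combinations) happen beforehand
        params = synthesizer.synthesize(combo_name)   # S7: combination calculation
        return generator.render(model, objs, params)  # S8: transform the model points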
The facial control objects of the present invention are described with reference to Fig. 3. According to anatomy, a person's expression is produced by the joint action of the muscles of the face, mouth, eyes and lower jaw. Human mimetic muscles fall into two classes: linear radial muscles and ring-shaped orbicular muscles. For different people the number of muscles, their distribution and their attachments are the same. Anatomy generally holds that facial expression is mainly controlled by 22 symmetrically placed radial muscles (as shown in Fig. 3), comprising the left and right frontalis (301), the left and right outer frontalis (302), the left and right procerus (303), the left and right corrugator supercilii (304), the left and right levator labii superioris alaeque nasi (305), the left and right levator labii superioris (306), the left and right zygomaticus minor (307), the left and right zygomaticus major (308), the left and right buccinator (309), the left and right depressor anguli oris (310) and the left and right depressor labii inferioris (311). The main control parameter of a radial muscle is its contraction coefficient. Suppose the radial muscle contracts with intensity ΔM along its direction vector, moving a point P on the head model to a point P' (the direction vector, influence weight and displacement formulas appear only as equation images C200610153032D00081-00084); the recoverable relation is

    ΔM = status × totalLength
Every point in the facial expression region of the three-dimensional face model is influenced by one or more radial muscle objects, and the influences of multiple radial muscle objects can be superposed in the calculation.
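Because the patent's per-point displacement formulas survive here only as equation images, the sketch below substitutes a common Waters-style linear-muscle attenuation as a stand-in; only the relation ΔM = status × totalLength and the additive superposition are taken from the text.

    import numpy as np

    def radial_muscle_displacement(p, head, tail, status):
        """Displacement of model point p under one radial muscle contraction."""
        p, head, tail = map(np.asarray, (p, head, tail))
        total_length = np.linalg.norm(tail - head)
        delta_m = status * total_length                    # from the patent
        d = np.linalg.norm(p - head)
        if d == 0.0 or d > total_length:                   # outside the influence range
            return np.zeros(3)
        falloff = np.cos(d / total_length * np.pi / 2.0)   # assumed attenuation
        return (head - p) / d * delta_m * falloff          # pull toward the attachment

    def total_displacement(p, muscles):
        """Influences of several radial muscles superpose additively, as stated."""
        return sum((radial_muscle_displacement(p, h, t, s) for h, t, s in muscles),
                   np.zeros(3))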
The mouth, eye and jaw control objects of the present invention are described with reference to Figs. 4, 5 and 6. The ellipse inside the annular ring shown in Fig. 4 is the orbicularis oris object contained in the mouth expression object. Its main control parameter is the contraction coefficient fl of the orbicularis oris. For a point O_on on the orbicularis oris, the horizontal displacement x and vertical displacement y are calculated from

    x² / (fl·a)² + y² / (fl·b)² = 1,    fl ∈ (0, 1],    fl = 1 − ΔO_el / a.

For a point O_in inside the ellipse corresponding to O_on,

    ΔO_in.x = Δx · cos((1 − d) · π/2),    ΔO_in.y = Δy · cos((1 − d) · π/2)    (d < 1);

and for a point O_out outside the ellipse corresponding to O_on,

    ΔO_out.x = Δx · cos((d − 1)/(R − 1) · π/2),    ΔO_out.y = Δy · cos((d − 1)/(R − 1) · π/2)    (1 < d ≤ R).
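The orbicularis oris formulas above translate directly into code. In this minimal sketch, d is read as a normalized elliptic distance (d = 1 on the lip ring itself, d = R at the outer boundary of the influence region); that reading is an assumption consistent with the stated cases d < 1 and 1 < d ≤ R.

    import math

    def ring_point_displacement(x, y, a, b, fl):
        """Displacement of a ring point when the ellipse contracts to fl*a, fl*b."""
        theta = math.atan2(y / b, x / a)  # parametric angle on the rest ellipse
        return fl * a * math.cos(theta) - x, fl * b * math.sin(theta) - y

    def propagate(dx, dy, d, R):
        """Attenuate a ring point's displacement to inner and outer neighbours."""
        if d < 1:                                      # point inside the ring
            w = math.cos((1 - d) * math.pi / 2)
        elif d <= R:                                   # point outside the ring
            w = math.cos((d - 1) / (R - 1) * math.pi / 2)
        else:
            w = 0.0                                    # beyond the influence region
        return dx * w, dy * w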
The control of the eyes and jaw is relatively simple. As shown in Fig. 5, the eye control parameters comprise the closing degree of the left and right eyelids, the vertical movement intensity of the left and right pupils and the horizontal movement intensity of the left and right pupils. The jaw control parameters comprise the jaw opening/closing intensity and the jaw side-to-side movement intensity (as shown in Fig. 6).
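The eye and jaw controls thus reduce to a handful of scalar intensities; a small sketch with assumed field names:

    from dataclasses import dataclass

    @dataclass
    class EyeParams:
        lid_closure_left: float = 0.0    # 0 = open, 1 = fully closed (assumed scale)
        lid_closure_right: float = 0.0
        pupil_horizontal: float = 0.0    # horizontal movement intensity
        pupil_vertical: float = 0.0      # vertical movement intensity

    @dataclass
    class JawParams:
        opening: float = 0.0             # opening/closing intensity
        lateral: float = 0.0             # side-to-side movement intensity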
A concrete example is given below of generating on a virtual face the basis expressions "happiness", "sadness" and "pronouncing ' '" (control parameters as listed in Table 1).
Table 1. Control parameters of some basis expressions

Happy: Left_Zygomatic_Major 1.10; Right_Zygomatic_Major 1.10; Left_Frontalis_Inner 0.80; Right_Frontalis_Inner 0.80; Left_Frontalis_Major 0.20; Right_Frontalis_Major 0.20; Left_Frontalis_Outer 0.10; Right_Frontalis_Outer 0.10

Sad: Left_Angular_Depressor 0.70; Right_Angular_Depressor 0.70; Left_Frontalis_Inner 1.90; Right_Frontalis_Inner 1.90; Left_Labi_Nasi 0.70; Right_Labi_Nasi 0.70; Left_Inner_Labi_Nasi 0.20; Right_Inner_Labi_Nasi 0.20

Pronouncing ' ': Mouth 0.4; Jaw 7.0
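Table 1 maps directly onto a basis expression database entry. The dictionary form and key spellings below are assumptions; the parameter names and values are transcribed from the table (the pronounced syllable's character is elided in this text):

    basis_db = {
        "happy": {
            "Left_Zygomatic_Major": 1.10, "Right_Zygomatic_Major": 1.10,
            "Left_Frontalis_Inner": 0.80, "Right_Frontalis_Inner": 0.80,
            "Left_Frontalis_Major": 0.20, "Right_Frontalis_Major": 0.20,
            "Left_Frontalis_Outer": 0.10, "Right_Frontalis_Outer": 0.10,
        },
        "sad": {
            "Left_Angular_Depressor": 0.70, "Right_Angular_Depressor": 0.70,
            "Left_Frontalis_Inner": 1.90, "Right_Frontalis_Inner": 1.90,
            "Left_Labi_Nasi": 0.70, "Right_Labi_Nasi": 0.70,
            "Left_Inner_Labi_Nasi": 0.20, "Right_Inner_Labi_Nasi": 0.20,
        },
        "pronounce": {"Mouth": 0.4, "Jaw": 7.0},
    }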
The concrete steps for building the combination expressions "saying ' ' happily" and "saying ' ' sadly" (control parameters as listed in Table 2) are as follows.
Table 2. Control parameters of some combination expressions

Saying ' ' happily: Left_Zygomatic_Major 0.00; Right_Zygomatic_Major 0.00; Left_Frontalis_Inner 0.80; Right_Frontalis_Inner 0.80; Left_Frontalis_Major 0.20; Right_Frontalis_Major 0.20; Left_Frontalis_Outer 0.10; Right_Frontalis_Outer 0.10; Mouth 0.4; Jaw 7.0

Saying ' ' sadly: Left_Angular_Depressor 0.00; Right_Angular_Depressor 0.00; Left_Frontalis_Inner 1.90; Right_Frontalis_Inner 1.90; Left_Labi_Nasi 0.70; Right_Labi_Nasi 0.70; Left_Inner_Labi_Nasi 0.20; Right_Inner_Labi_Nasi 0.20; Mouth 0.4; Jaw 7.0
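Comparing Tables 1 and 2 suggests one plausible reading of the combination rule: merge the parameter sets of the basis expressions, zeroing the mouth-region muscles that would conflict with the mouth shape (Zygomatic_Major and Angular_Depressor drop to 0.00 in the combinations). The merge below is that inference, not a rule stated in this text:

    def combine(basis_db, names, zeroed=()):
        """Merge basis-expression parameters into combination parameters."""
        params = {}
        for name in names:
            params.update(basis_db[name])
        for key in zeroed:
            if key in params:
                params[key] = 0.0
        return params

    saying_happily = combine(basis_db, ["happy", "pronounce"],
                             zeroed=["Left_Zygomatic_Major",
                                     "Right_Zygomatic_Major"])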
Step S1: generate the face model with the modeling software 3DS MAX, as shown in Fig. 7.
Step S2: locate the control objects of the facial expression, mouth expression, eye expression and jaw expression, and identify on the three-dimensional face model built in step S1 the points acted on by the face and mouth expression objects, the eyelid and eyeball points acted on by the eye expression object, and the jaw points acted on by the jaw expression object (as shown in Fig. 8). The action points of the face and mouth expression objects are all points on the model except the eyelids and eyes; the action points of the jaw expression object are the red point set of the jaw region; the eyelid points acted on by the eye expression object are the red points of the eye region; and the eyes are two independent spheres.
Step S3: build the radial muscle objects in the facial expression object, the orbicularis oris object in the mouth expression object, the eye expression object and the jaw expression object, and associate the points within the scope of action of these objects with the points on the three-dimensional face model, so that when these expression objects are controlled, the associated points on the model move in linkage and produce the corresponding deformation. The facial expression object and the mouth expression object calculate their corresponding action points in real time from their control formulas. Taking the left zygomaticus major of the facial expression as an example, its control range is the blue cone shown in Fig. 9. The control points of the eye expression object are the eyelid and eyeball points located in step S2; note that the eyelid and eyeball points are controlled only by the eye expression object. The control points of the jaw expression object are the jaw points located in step S2.
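Step S3 amounts to binding each model vertex to the control objects whose region of influence contains it, so that driving an object later moves all of its bound points in linkage. A minimal sketch; the influences test (for example, containment in the muscle's cone of Fig. 9) is a placeholder:

    def bind_vertices(mesh_points, control_objects):
        """Map vertex index -> control objects allowed to move that vertex."""
        bindings = {i: [] for i in range(len(mesh_points))}
        for obj in control_objects:
            for i, p in enumerate(mesh_points):
                if obj.influences(p):  # placeholder geometric containment test
                    bindings[i].append(obj)
        return bindings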
Step S4: call the expression generation module and control each expression control object separately with input parameters. Fig. 10 shows the effects of separately controlling the left zygomaticus major, the left eyelid and the jaw.
Step S5: repeat step S4 to generate different basis expressions, obtain the control parameters of the various basis expressions, and save them into the basis expression database. The control parameters of the basis expressions "happiness", "sadness" and "pronouncing ' '" are listed in Table 1; the corresponding effects are shown in Fig. 11.
Step S6: combine the basis expressions "happiness" and "pronouncing ' '" to form the combination expression "saying ' ' happily"; combine the basis expressions "sadness" and "pronouncing ' '" to form the combination expression "saying ' ' sadly".
Step S7: apply the combination rules to calculate the control parameters of "saying ' ' happily" and "saying ' ' sadly"; the resulting control parameters are listed in Table 2.
Step S8: call the expression generation module, pass in the control parameters of the combination expressions, and generate the effects of "saying ' ' happily" and "saying ' ' sadly" (as shown in Fig. 12).
The above is only a particular application example of the present invention and should not be regarded as limiting the present invention. Based on the concepts disclosed herein, those skilled in the art can easily design other similar embodiments. The claims of the present invention should be regarded as covering similar designs that do not depart from the spirit of the present invention.

Claims (4)

1. A complex expression emulation system, characterized in that the complex expression emulation system comprises:
a face acquisition module for generating a three-dimensional face model;
a control-object generation module, connected to the face acquisition module, for analyzing the three-dimensional face model and generating expression objects and control objects;
a basis expression database module for storing basis expression parameters;
a combination expression database module for storing combination expressions;
an expression synthesis module, connected to the basis expression database module and the combination expression database module, for generating the control parameters of a combination expression according to the combination mode of the combination expression; and
an expression generation module, connected to the control-object generation module and the expression synthesis module, responsible for controlling each control object according to the input control parameters to generate complex simulated facial expressions.
2. The complex expression emulation system according to claim 1, characterized in that the basis expression database module stores the names and corresponding control parameters of the various basis expressions.
3. The complex expression emulation system according to claim 1, characterized in that the combination expression database module stores the name of each complex expression together with the names and combination rules of the basis expressions that compose it.
4. A method for a computer to automatically generate complex simulated facial expressions, characterized in that the method comprises the steps of:
acquiring a three-dimensional model of a face;
generating a facial expression object, a mouth expression object, an eye expression object and a jaw expression object;
generating control objects, comprising the facial radial muscles, the orbicularis oris, the eyelids, the eyeballs and the jaw, according to the topological and anatomical structure of the face, and generating the constraint conditions between each point on the three-dimensional face model and the control objects;
controlling the different expression control objects with externally input parameters to transform points on the three-dimensional face model, generating various basis expressions;
storing the names and corresponding control parameters of the basis expressions in the basis expression database;
combining multiple basis expressions to form combination expressions and storing them in the combination expression database;
applying combination rules to perform combination calculations on the basis expressions contained in a combination expression, forming the control parameters of the combination expression; and
according to the control parameters of the combination expression, controlling the parameters that transform the points of each control object on the three-dimensional face model, generating the various combined complex simulated expressions.
CNB2006101530320A 2006-09-20 2006-09-20 Complex expression emulation system and implementation method Expired - Fee Related CN100527170C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101530320A CN100527170C (en) 2006-09-20 2006-09-20 Complex expression emulation system and implementation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101530320A CN100527170C (en) 2006-09-20 2006-09-20 Complex expression emulation system and implementation method

Publications (2)

Publication Number Publication Date
CN101149840A CN101149840A (en) 2008-03-26
CN100527170C true CN100527170C (en) 2009-08-12

Family

ID=39250351

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101530320A Expired - Fee Related CN100527170C (en) 2006-09-20 2006-09-20 Complex expression emulation system and implementation method

Country Status (1)

Country Link
CN (1) CN100527170C (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103617B (en) * 2009-12-22 2013-02-27 华为终端有限公司 Method and device for acquiring expression meanings
CN102129706A (en) * 2011-03-10 2011-07-20 西北工业大学 Virtual human eye emotion expression simulation method
CN102184562B (en) * 2011-05-10 2015-02-04 深圳大学 Method and system for automatically constructing three-dimensional face animation model
CN107257403A (en) 2012-04-09 2017-10-17 英特尔公司 Use the communication of interaction incarnation
CN103198519A (en) * 2013-03-15 2013-07-10 苏州跨界软件科技有限公司 Virtual character photographic system and virtual character photographic method
CN103473807B (en) * 2013-09-26 2018-02-13 王治魁 A kind of 3D model transformation systems and method
CN103886632A (en) * 2014-01-06 2014-06-25 宇龙计算机通信科技(深圳)有限公司 Method for generating user expression head portrait and communication terminal
CN104217454B (en) * 2014-08-21 2017-11-03 中国科学院计算技术研究所 A kind of human face animation generation method of video drive
CN104463109A (en) * 2014-11-24 2015-03-25 苏州福丰科技有限公司 Three-dimensional face recognition method based on toys
EP3241187A4 (en) 2014-12-23 2018-11-21 Intel Corporation Sketch selection for rendering 3d model avatar
EP3410399A1 (en) 2014-12-23 2018-12-05 Intel Corporation Facial gesture driven animation of non-facial features
WO2016101131A1 (en) 2014-12-23 2016-06-30 Intel Corporation Augmented facial animation
CN104599309A (en) * 2015-01-09 2015-05-06 北京科艺有容科技有限责任公司 Expression generation method for three-dimensional cartoon character based on element expression
CN104715500A (en) * 2015-03-26 2015-06-17 金陵科技学院 3D animation production development system based on three-dimensional animation design
CN104767980B (en) * 2015-04-30 2018-05-04 深圳市东方拓宇科技有限公司 A kind of real-time emotion demenstration method, system, device and intelligent terminal
WO2017101094A1 (en) 2015-12-18 2017-06-22 Intel Corporation Avatar animation system
CN106952325B (en) * 2017-03-27 2020-07-21 厦门黑镜科技有限公司 Method and apparatus for manipulating three-dimensional animated characters
CN108573527B (en) * 2018-04-18 2020-02-18 腾讯科技(深圳)有限公司 Expression picture generation method and equipment and storage medium thereof
CN109101953A (en) * 2018-09-07 2018-12-28 大连东锐软件有限公司 The facial expressions and acts generation method of subregion element based on human facial expressions
CN109285208A (en) * 2018-09-29 2019-01-29 吉林动画学院 Virtual role expression cartooning algorithm based on expression dynamic template library
CN111383308B (en) 2018-12-29 2023-06-23 华为技术有限公司 Method for generating animation expression and electronic equipment
CN110021064A (en) * 2019-03-07 2019-07-16 李辉 A kind of aestheticism face system and method
CN110163957A (en) * 2019-04-26 2019-08-23 李辉 A kind of expression generation system based on aestheticism face program
CN110141857A (en) * 2019-04-26 2019-08-20 腾讯科技(深圳)有限公司 Facial display methods, device, equipment and the storage medium of virtual role
CN113763518A (en) * 2021-09-09 2021-12-07 北京顺天立安科技有限公司 Multi-mode infinite expression synthesis method and device based on virtual digital human

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998039735A1 (en) * 1997-03-06 1998-09-11 Drdc Limited Method of correcting face image, makeup simulation method, makeup method, makeup supporting device and foundation transfer film
CN1552041A (en) * 2001-12-14 2004-12-01 日本电气株式会社 Face meta-data creation and face similarity calculation
CN1433240A (en) * 2002-01-17 2003-07-30 富士通株式会社 Electronic equipment and program
WO2005073909A1 (en) * 2004-01-30 2005-08-11 Digital Fashion Ltd. Makeup simulation program, makeup simulation device, and makeup simulation method
JP2006023921A (en) * 2004-07-07 2006-01-26 Kao Corp Makeup simulation device and method

Also Published As

Publication number Publication date
CN101149840A (en) 2008-03-26

Similar Documents

Publication Publication Date Title
CN100527170C (en) Complex expression emulation system and implementation method
CN104008564B (en) A kind of human face expression cloning process
CN104541306B (en) Neurobehavioral animation system
Terzopoulos et al. Techniques for realistic facial modeling and animation
US7068277B2 (en) System and method for animating a digital facial model
US20040095344A1 (en) Emotion-based 3-d computer graphics emotion model forming system
CN101739709A (en) Control method of three-dimensional facial animation
CN101533523B (en) Control method for simulating human eye movement
CN104599309A (en) Expression generation method for three-dimensional cartoon character based on element expression
Bui Creating emotions and facial expressions for embodied agents
CN110310351A (en) A kind of 3 D human body skeleton cartoon automatic generation method based on sketch
CN110443872B (en) Expression synthesis method with dynamic texture details
CN110007754A (en) The real-time reconstruction method and device of hand and object interactive process
CN101477703B (en) Human body animation process directly driven by movement capturing data based on semantic model
CN102750549A (en) Automatic tongue contour extraction method based on nuclear magnetic resonance images
KR20110075372A (en) Generating method for exaggerated 3d facial expressions with personal styles
Sera et al. Physics-based muscle model for mouth shape control
Li et al. A mass-spring tongue model with efficient collision detection and response during speech
Moussa et al. MPEG-4 FAP animation applied to humanoid robot head
CN110163957A (en) A kind of expression generation system based on aestheticism face program
Fratarcangeli Computational models for animating 3d virtual faces
McDonald et al. A novel approach to managing lower face complexity in signing avatars
Ray et al. Text me the data: Generating Ground Pressure Sequence from Textual Descriptions for HAR
CN104658025A (en) Human face expression synthesis method based on characteristic point
CN106504308A (en) Face three-dimensional animation generation method based on 4 standards of MPEG

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090812

Termination date: 20110920