CN104599309A - Expression generation method for three-dimensional cartoon character based on element expression - Google Patents

Expression generation method for three-dimensional cartoon character based on element expression

Info

Publication number
CN104599309A
CN104599309A (application CN201510013022.6A)
Authority
CN
China
Prior art keywords
expression
element
degree
controller
animation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201510013022.6A
Other languages
Chinese (zh)
Inventor
李然 (Li Ran)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing spring film technology Co., Ltd.
Original Assignee
Beijing Section Skill Has Appearance Science And Technology Ltd Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Section Skill Has Appearance Science And Technology Ltd Co filed Critical Beijing Section Skill Has Appearance Science And Technology Ltd Co
Priority to CN201510013022.6A
Publication of CN104599309A
Current legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an expression generation method for a three-dimensional cartoon character based on element expressions. The method comprises: step 1, determining the element expressions needed to express moods according to the characteristics of three-dimensional animation production and modern emotion theory; step 2, building an expression model for each element expression at its maximum degree, and separately building an expressionless model; step 3, decomposing the complex expression to be performed into one or more element expressions, determining their degree values, and mixing the element expression models with the degree values as weights to obtain an expression model of the complex expression; step 4, setting keyframe animation on the element expression degrees to obtain the expression animation. The method can generate character expression animation quickly and consistently, and the resulting animation can be reused across different characters.

Description

Expression generation method for a three-dimensional animation character based on element expressions
Technical field
The invention belongs to the field of three-dimensional animation, and in particular to a method for generating expression animation for a three-dimensional animation character, provided so that character expression animation can be generated quickly, driven by element expressions.
Background art
Three-dimensional animation is an emerging technology that has developed in recent years alongside advances in computer hardware; it can quickly and conveniently produce three-dimensional animation, or basic film and television shots, that meet a director's requirements.
The general workflow of three-dimensional animation production is: first, a virtual world is built in a computer using three-dimensional software (such as 3ds Max, Maya or Houdini); then, three-dimensional models such as scenes and three-dimensional cartoon characters are added to this virtual world; finally, the animation curves of the models, the motion trajectory of the virtual camera and other animation parameters are set, and the result is rendered to obtain the animation.
Because three-dimensional animation can accurately simulate real scenes and imposes almost no restrictions on creative work, it is now widely used in fields such as entertainment, education and the military.
In entertainment applications, a three-dimensional animation character with the desired appearance, together with a skeleton rigging system, usually needs to be produced from the character design supplied by a designer, so that the character possesses rich and subtle expression animation and can show moods such as happiness, sadness or anger, thereby advancing the story.
Current methods for generating expression animation mainly comprise the blend-shape deformer method, the bone-rigging animation method and the motion capture method.
The drawback of the blend-shape deformer method is that a target model must be made for every required expression or transition state, which involves a large amount of work and lacks fine-grained control of detail.
The bone-rigging animation method directly controls the deformation of the facial polygons and needs no large set of target models, but its shortcoming is that the rigger must finely adjust the influence range and weights of every joint on the surrounding vertices, and the animator must set numerous parameters and animation curves, before the expression animation can be completed.
In the motion capture method, hundreds of tracking points are attached to the face of a live performer and the facial geometry is accurately recorded at every moment; its drawbacks are the huge volume of data and the fact that the captured expression animation can only be used on facial models that closely resemble the performer.
Each of these three methods has strengths and weaknesses; at present none offers a complete mass-market solution.
Summary of the invention
In view of this, the invention provides a method for generating expression animation for a three-dimensional animation character based on element expressions. Instead of directly driving facial model deformation through a blend-shape deformer or bone-rigging animation and combining the deformations into the required expression, the method starts from the character's mood and combines basic element expressions. It thereby overcomes the high production cost and technical difficulty of existing expression animation methods, makes expression animations produced by different animators nearly consistent, and provides expression animations with broadly consistent emotional meaning across different characters.
To solve the above technical problem, the specific method of the invention is as follows:
An expression animation generation method for a three-dimensional animation character based on element expressions comprises the following steps:
Step 1: determine the element expressions needed to express moods according to the characteristics of three-dimensional animation production and modern emotion theory. An element expression is an indivisible unit expression obtained by splitting a complex mood-expressing expression; each element expression corresponds to one mood.
Step 2: for each element expression, build the expression model at maximum degree, and separately build one expressionless model.
Step 3: according to the complex expression to be performed, decompose it into the one or more element expressions of which it is composed, determine their degree values, and mix the expressionless model with the model of each element expression using the degree values as weights, obtaining the expression model of the complex expression.
Step 4: according to the needs of the expression animation, repeat Step 3 at each keyframe time to obtain the attribute values of the complex expression, and of its constituent element expressions, at each keyframe time; at the other frames of the animation, interpolate the keyframe data to obtain a continuous animation.
The element expressions comprise 7 kinds: happiness, surprise, anger, sadness, fear, tiredness and contempt.
Preferably, controllers are arranged according to the distribution of the facial expression muscles, each controller having one or more attribute values. In Step 2, when building the expression models for each element expression at maximum degree and for the expressionless face, the attribute values of the controllers at the positions of the facial muscles affected by each expression are set; different element expressions affect different controllers.
Preferably, the element-expression-based method drives the controllers arranged according to the facial muscle distribution, and the controllers in turn control a blend-shape deformer or deformation bones to produce the required expression model; the method is therefore applicable both to the blend-shape deformer method and to the bone-rigging method.
Preferably, the positions and attribute values of the controllers are set specifically according to the facial shape characteristics of each character.
Preferably, in Step 3, mixing the expression models of the element expressions using the degree values as weights to obtain the expression model of the complex expression is specifically:
For each attribute value of each controller, compute the mixed value P' of the attribute under the complex expression, thereby obtaining the expression model of the complex expression. For any controller P, the mixed value P' of an attribute q under the complex expression is computed as follows:
Suppose the complex expression is composed of N element expressions; the value of attribute q of controller P in the expressionless state is p_0; the value of attribute q of controller P when element expression i is at maximum degree is p_i; and the degree value of element expression i in the complex expression is set to w_i.
First, compute the value of attribute q of controller P when element expression i has degree w_i:
P_i = w_i × p_0 + (1 − w_i) × p_i
Then, compute the mixed value P' of attribute q of controller P under the complex expression:
P' = (Σ_{i=1..N} P_i × w_i) / (Σ_{i=1..N} w_i).
Preferably, the degree values w_i used in Step 3 are set through a unified control interface in the three-dimensional animation production tool. The interface consists of several square controls, one per element expression, each containing one control point; the degree value of an element expression is set by moving the control point inside the corresponding control.
A two-dimensional coordinate system is set up with the midpoint of the control's bottom edge as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis. The Y axis represents the degree value of the element expression, and the X axis represents how far the expression is biased toward the left or right side of the face. If the control point of an element expression is set at position (x, y), then y is the degree value of the element expression and x is the degree of bias toward the left or right side of the face. The expression degree used by the controllers on the left and right sides of the face, i.e. w_i, is determined from the x and y values of the control point.
Preferably, the side length of the square control is 1, and:
1) if x = 0, the expression degree of both sides of the face is y, so all controllers on both sides use w_i = y;
2) if −0.5 ≤ x < 0, the expression degree of the left side of the face is y and that of the right side is 2 × (0.5 + x) × y, so controllers on the left side use w_i = y and controllers on the right side use w_i = 2 × (0.5 + x) × y;
3) if 0 < x ≤ 0.5, the expression degree of the right side of the face is y and that of the left side is 2 × (0.5 − x) × y, so controllers on the right side use w_i = y and controllers on the left side use w_i = 2 × (0.5 − x) × y.
Beneficial effects:
1. According to the characteristics of three-dimensional animation production and modern emotion theory, the invention identifies 7 element expressions that carry emotional meaning and cannot be subdivided; by decomposing a complex expression into a combination of element expressions of various degrees, more than 90% of the expression forms used in animation production can be described.
2. The invention does not use the existing blend-shape deformer method or bone-rigging animation method directly to make keyframe animation, which makes expression animations produced by different animators nearly consistent; the animation can also be reused across different characters, providing expression animation with broadly consistent emotional meaning.
Brief description of the drawings
Fig. 1 is a flowchart of the expression animation generation method for a three-dimensional animation character based on element expressions according to the invention.
Detailed description of the embodiments
Embodiments of the invention are described below with reference to the accompanying drawings.
The invention provides an expression animation generation method for a three-dimensional animation character based on element expressions which, as shown in Fig. 1, comprises the following steps:
Step 1: determine the element expressions needed to express moods, according to the characteristics of three-dimensional animation production and modern emotion theory.
An expression in the sense of the invention is a facial action that carries emotional meaning; facial actions without such meaning, such as funny faces, are excluded. An element expression is first of all an expression, but specifically one that cannot be produced by mixing other expressions; that is, an element expression is an indivisible unit obtained by splitting a complex mood-expressing expression. Each element expression corresponds to one mood.
For example, splitting the expression "laughing with tears" yields the two element expressions "happiness" and "sadness". By analogy, the element expressions contained in other complex expressions can be determined, and by mixing the element expressions in different proportions, every expression that carries emotional meaning can be produced.
With reference to modern emotion theory, and combining the production practice of character expression animation, the invention selects 7 element expressions: happiness, surprise, anger, sadness, fear, tiredness and contempt.
An element expression itself has degrees. For the element expression "happiness", as the degree increases from nothing to its maximum, the face transitions from expressionless, to quietly pleased, to happy, to laughing wildly. The degree value of each element expression is limited to the range [0, 1]: the expressionless state has degree 0 and the maximum degree is 1. This establishes a 7-dimensional coordinate space that can describe more than 90% of the expression forms used in animation production.
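As an illustration of this 7-dimensional degree space, the following minimal Python sketch (our own, not from the patent) represents a complex expression as a vector of degree values over the seven element expressions; the 0.5 values mirror the "laughing with tears" example worked through in Step 3 below.

```python
# The seven element expressions span a 7-dimensional degree space;
# a complex expression is a point in [0, 1]^7.
ELEMENT_EXPRESSIONS = ("happiness", "surprise", "anger", "sadness",
                       "fear", "tiredness", "contempt")

# Hypothetical example: "laughing with tears" decomposes into
# happiness and sadness, each at degree 0.5 (see Step 3 below).
laughing_with_tears = {name: 0.0 for name in ELEMENT_EXPRESSIONS}
laughing_with_tears["happiness"] = 0.5
laughing_with_tears["sadness"] = 0.5
```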
Step 2: for each element expression, build the expression model at degree value 1, i.e. the model when the element expression is at maximum degree; additionally build one expressionless model, i.e. the model at degree value 0.
First, with reference to the distribution of facial expression muscles such as the depressor anguli oris, orbicularis oris, nasalis, frontalis and orbicularis oculi, one or more controllers are arranged on the character's face. A controller has attribute values such as translation, scale and rotation; these attribute values control the blend-shape deformer or the bones, and thereby drive the deformation of the face. Since facial features are mostly symmetric, for a pair of expression muscles at symmetric positions on the left and right sides, the two corresponding controllers are placed symmetrically about the facial midline and share the same attribute values. The following therefore discusses one side of the face only; the other side is handled identically, and controllers lying on the midline are set only once.
The attribute values of the controllers are set for the expressionless state, and, for each element expression at degree value 1, the attribute values it affects are set. Different element expressions affect different controllers, as determined by the character's facial features. For example, suppose that in the expressionless state the "scale" attribute of controller A on the orbicularis oris is 1 and the "scale" attribute of controller B on the corrugator supercilii is 1. When the degree value of the element expression "happiness" is 1, the character laughs wildly and the mouth opens noticeably wide, so the "scale" attribute of A becomes 2; the "scale" attribute of B is associated with the element expression "sadness" and is unrelated to "happiness", so it remains 1.
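To make this setup concrete, here is a minimal Python sketch of per-controller attribute tables recording the expressionless value and the value at maximum degree of each element expression. The controller names, the attribute name "scale" and the numbers follow the example above; the data layout itself is our own assumption.

```python
# For each controller attribute: its value on the expressionless face and
# at maximum degree (1.0) of every element expression that affects it.
controllers = {
    "A": {  # controller on the orbicularis oris (mouth)
        "scale": {"neutral": 1.0,     # expressionless face
                  "happiness": 2.0},  # wild laughter: mouth opens wide
    },
    "B": {  # controller on the corrugator supercilii (brow)
        "scale": {"neutral": 1.0},    # happiness leaves B untouched;
                                      # only sadness would add an entry here
    },
}

def value_at_max(controller: str, attribute: str, expression: str) -> float:
    """Attribute value at maximum degree of an element expression; an
    expression that does not affect the controller keeps the neutral value."""
    table = controllers[controller][attribute]
    return table.get(expression, table["neutral"])
```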
The attribute values and positions of the controllers differ from character to character. This is because facial shape features differ (consider, say, an alien versus a human), so the placement and the attribute values of the controllers at degree value 1 of each element expression also differ and must be set for each character specifically.
Because the element-expression-based animation method does not directly control the blend-shape deformer or the deformation bones, its keyframe animation can be applied to different characters. Moreover, even when the expression animation of the same character in different scenes is made by different animators, the perceptible differences are smaller than when the blend-shape deformer method or deformation bones are used directly.
Step 3: according to the complex expression to be performed, decompose it into the one or more element expressions of which it is composed, determine their degree values, use the degree values as mixing weights, and mix the attribute values of the corresponding controllers, thereby obtaining the expression model of the complex expression.
This step comprises the following sub-steps:
S31: split the complex expression into element expressions and set the degree of each element expression.
Suppose the mood to be performed in the animation is "laughing with tears". Splitting this complex expression yields the two element expressions "happiness" and "sadness"; since this is neither extreme joy nor extreme sorrow, the degree values of both "happiness" and "sadness" are set to 0.5.
The degree values are set through a unified control interface in the three-dimensional animation production tool. The interface consists of 7 controls, one per element expression, each containing one control point. If the left and right sides of the face need not be distinguished, a simple slider control suffices; the invention additionally supports flexible control of the two sides of the face and therefore uses a square control consisting of a square frame and one control point. Let the side length of the square be 1; with the midpoint of its bottom edge as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis, the coordinates of the four corners of the square are (−0.5, 0), (0.5, 0), (−0.5, 1) and (0.5, 1), and the initial position of the control point in each control is (0, 0).
The animator sets the degree value of each element expression, and its bias toward the left or right side of the face, by moving the control point inside the corresponding square control: the Y axis represents the degree value of the element expression and the X axis represents its bias toward the left or right side of the face. If the animator places the control point for "happiness" at (x, y), then y is the degree of happiness and x is its bias toward the left or right side. Specifically, when the control point lies on the vertical axis, both sides of the face have the same expression degree; when it lies to the left of the Y axis, the expression degree fully affects the left side of the face, while its effect on the right side decreases linearly from the vertical axis outward; symmetrically, when it lies to the right of the Y axis, the expression degree fully affects the right side, while its effect on the left side decreases linearly. Taking "happiness" as an example, the expression degree is computed as follows (a code sketch follows the three cases below):
1) if x = 0, the degree of happiness of both sides of the face is y;
2) if −0.5 ≤ x < 0, the degree of happiness of the left side is y and that of the right side is 2 × (0.5 + x) × y; at x = −0.5 the right side of the face is expressionless;
3) if 0 < x ≤ 0.5, the degree of happiness of the right side is y and that of the left side is 2 × (0.5 − x) × y; at x = 0.5 the left side of the face is expressionless.
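A minimal Python sketch of this mapping, following the three cases above (the function name side_weights is ours):

```python
def side_weights(x: float, y: float) -> tuple[float, float]:
    """Map a control point (x, y), -0.5 <= x <= 0.5 and 0 <= y <= 1, to the
    expression degrees (left, right) used by the controllers on the left
    and right sides of the face."""
    if not (-0.5 <= x <= 0.5 and 0.0 <= y <= 1.0):
        raise ValueError("control point lies outside the square control")
    if x == 0.0:                        # case 1: both sides get degree y
        return y, y
    if x < 0.0:                         # case 2: right side fades linearly
        return y, 2.0 * (0.5 + x) * y   #         (reaching 0 at x = -0.5)
    return 2.0 * (0.5 - x) * y, y       # case 3: left side fades linearly

# Example: side_weights(-0.5, 0.8) == (0.8, 0.0),
# i.e. the right side of the face stays expressionless.
```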
S32: use the degree values of the element expressions "happiness" and "sadness" as mixing weights and mix the attribute values of the corresponding controllers.
All controllers on the left side of the face use the expression degree y of the left side as their mixing weight w, and all controllers on the right side use the expression degree y of the right side.
For the left side of the face, suppose that in the expressionless state the "scale" attribute of controller A on the orbicularis oris is a0 and the "scale" attribute of controller B on the corrugator supercilii is b0; when the degree value of "happiness" is 1, the "scale" attribute of A is a1; when the degree value of "sadness" is 1, the "scale" attribute of A is a2 and the "scale" attribute of B is b1. The element expression "happiness" has no effect on the "scale" attribute of controller B, which therefore keeps the value b0.
Suppose the animator sets the degree value of happiness to w1 and the degree value of sadness to w2 through the controls, and suppose the selected control points lie at x = 0, so the left side of the face uses w1 and w2 directly. The above data are summarized in Table 1:

             | Expressionless | Happiness (degree 1) | Sadness (degree 1)
Controller A | a0             | a1                   | a2
Controller B | b0             | b0                   | b1

Degree of happiness: w1; degree of sadness: w2

Table 1
According to Table 1, the attribute values of the controllers are mixed as follows:
1) Compute the mixed value A' of the "scale" attribute of controller A under the two element expressions. First compute the attribute value A1 = w1 × a0 + (1 − w1) × a1 when the degree of happiness is w1, and the attribute value A2 = w2 × a0 + (1 − w2) × a2 when the degree of sadness is w2; then compute A' = (A1 × w1 + A2 × w2) / (w1 + w2).
2) Compute the mixed value B' of the "scale" attribute of controller B under the two element expressions. When the degree of happiness is w1, the attribute value is B1 = w1 × b0 + (1 − w1) × b0 = b0; when the degree of sadness is w2, the attribute value is B2 = w2 × b0 + (1 − w2) × b1. Since "happiness" has no effect on the "scale" attribute of controller B, B' = w2 × b0 + (1 − w2) × b1, which is equivalent to not mixing in "happiness" at all.
Three or more element expressions can be mixed in the same way.
The above example generalizes as follows: for any controller P, the mixed value P' of an attribute q under a complex expression is computed as:
Suppose the complex expression is composed of N element expressions; the value of attribute q of controller P in the expressionless state is p_0; the value of attribute q of controller P when element expression i is at maximum degree is p_i; and the degree value of element expression i in the complex expression is w_i. If the controls do not distinguish the left and right sides of the face, w_i is simply the degree value y of element expression i in the complex expression; if they do, w_i is computed from the position of the control point according to the three cases of step S31.
First, compute the value of attribute q of controller P when element expression i has degree w_i:
P_i = w_i × p_0 + (1 − w_i) × p_i
Then, compute the mixed value P' of attribute q of controller P under the complex expression:
P' = (Σ_{i=1..N} P_i × w_i) / (Σ_{i=1..N} w_i)
Applying these formulas to every attribute value of every controller yields the mixed value of each attribute under the complex expression, and hence the expression model of the complex expression.
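A minimal Python sketch of the two formulas (ours, not the patent's code). It follows the formulas exactly as printed and, as in the worked example for controller B, leaves element expressions that do not affect a controller attribute out of the weighted average; the numbers in the usage example are hypothetical.

```python
def mix_attribute(p0: float, p_max: dict[str, float],
                  degrees: dict[str, float]) -> float:
    """Mixed value P' of one controller attribute under a complex expression.

    p0      -- value of the attribute on the expressionless face
    p_max   -- value at maximum degree, per element expression affecting it
    degrees -- degree value w_i of each element expression in the mix
    """
    num = den = 0.0
    for expr, w in degrees.items():
        if expr not in p_max:            # expression does not affect this
            continue                     # attribute: leave it out of the mix
        P_i = w * p0 + (1.0 - w) * p_max[expr]   # P_i = w_i*p0 + (1-w_i)*p_i
        num += P_i * w
        den += w
    return num / den if den else p0      # nothing mixed in: stay neutral

# Hypothetical usage mirroring Table 1 (a0=1.0, a1=2.0, a2=0.8, b0=1.0, b1=0.6):
A_mixed = mix_attribute(1.0, {"happiness": 2.0, "sadness": 0.8},
                        {"happiness": 0.5, "sadness": 0.5})
B_mixed = mix_attribute(1.0, {"sadness": 0.6},
                        {"happiness": 0.5, "sadness": 0.5})
```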
Step 4: according to the needs of the expression animation, repeat Step 3 at each keyframe time to obtain the attribute values of the complex expression, and of its constituent element expressions, at each keyframe time; at the other frames of the animation, interpolate the keyframe data to obtain a continuous animation.
A keyframe is a frame selected by the animator from the frame sequence required for the character animation; the animator sets the values of the animation attributes at the keyframe times. Keyframe information comprises a time value and attribute values; the attribute values of the frames between keyframes are obtained by the three-dimensional animation software by interpolating the attribute values of the adjacent keyframes. For the element expressions of the invention, the keyframed attributes are the control-point positions, on the unified control interface of Step 3, of the control corresponding to each element expression.
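The interpolation itself is left to the three-dimensional animation software. Purely as an illustration (our assumption of simple linear interpolation; real packages typically use spline animation curves), interpolating one keyframed value could look like this:

```python
def lerp(t: float, t0: float, v0: float, t1: float, v1: float) -> float:
    """Linear interpolation of a keyframed attribute (for example one
    coordinate of a control point) at time t, with t0 <= t <= t1."""
    if t1 == t0:
        return v0
    a = (t - t0) / (t1 - t0)
    return (1.0 - a) * v0 + a * v1
```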
The above are only preferred embodiments of the invention and are not intended to limit its scope of protection. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (8)

1. An expression animation generation method for a three-dimensional animation character based on element expressions, characterized by comprising the following steps:
Step 1: determining the element expressions needed to express moods according to the characteristics of three-dimensional animation production and modern emotion theory, wherein an element expression is an indivisible unit expression obtained by splitting a complex mood-expressing expression, and each element expression corresponds to one mood;
Step 2: building, for each element expression, the expression model at maximum degree, and separately building one expressionless model;
Step 3: decomposing the complex expression to be performed into the one or more element expressions of which it is composed, determining their degree values, and mixing the expressionless model and the model of each element expression using the degree values as weights, to obtain the expression model of the complex expression;
Step 4: repeating Step 3 at each keyframe time according to the needs of the expression animation, to obtain the attribute values of the complex expression, and of its constituent element expressions, at each keyframe time; and interpolating the keyframe animation data at the other frames of the animation to obtain a continuous animation.
2. The method of claim 1, characterized in that the element expressions comprise 7 kinds: happiness, surprise, anger, sadness, fear, tiredness and contempt.
3. The method of claim 1, characterized in that controllers are arranged according to the distribution of the facial expression muscles, each controller having one or more attribute values; in Step 2, when building the expression models for each element expression at maximum degree and for the expressionless face, the attribute values of the controllers at the positions of the facial muscles affected by each expression are set; different element expressions affect different controllers.
4. The method of claim 3, characterized in that the controllers arranged according to the facial muscle distribution are driven, and the controllers in turn control a blend-shape deformer or deformation bones to produce the required expression model.
5. The method of claim 3, characterized in that the positions and attribute values of the controllers are set specifically according to the facial shape characteristics of each character.
6. The method of claim 3, characterized in that, in Step 3, mixing the expression models of the element expressions using the degree values as weights to obtain the expression model of the complex expression is specifically:
for each attribute value of each controller, computing the mixed value P' of the attribute under the complex expression, thereby obtaining the expression model of the complex expression; wherein, for any controller P, the mixed value P' of an attribute q under the complex expression is computed as follows:
suppose the complex expression is composed of N element expressions, the value of attribute q of controller P in the expressionless state is p_0, the value of attribute q of controller P when element expression i is at maximum degree is p_i, and the degree value of element expression i in the complex expression is w_i;
first, compute the value of attribute q of controller P when element expression i has degree w_i:
P_i = w_i × p_0 + (1 − w_i) × p_i
then, compute the mixed value P' of attribute q of controller P under the complex expression:
P' = (Σ_{i=1..N} P_i × w_i) / (Σ_{i=1..N} w_i).
7. The method of claim 3, characterized in that the degree values w_i used in Step 3 are set through a unified control interface in the three-dimensional animation production tool; the interface consists of several square controls, each representing one element expression and containing one control point; the degree value of each element expression is set by moving the control point inside the corresponding control;
a two-dimensional coordinate system is established with the midpoint of the control's bottom edge as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis; the Y axis represents the degree value of the element expression and the X axis represents its bias toward the left or right side of the face; if the control point of an element expression is set at (x, y), then y is the degree value of the element expression and x is the degree of bias toward the left or right side of the face; the expression degree used by the controllers on the left and right sides of the face, i.e. w_i, is determined from the x and y values of the control point.
8. The method of claim 7, characterized in that the side length of the square control is 1, and:
1) if x = 0, the expression degree of both sides of the face is y, so all controllers on both sides use w_i = y;
2) if −0.5 ≤ x < 0, the expression degree of the left side of the face is y and that of the right side is 2 × (0.5 + x) × y, so controllers on the left side use w_i = y and controllers on the right side use w_i = 2 × (0.5 + x) × y;
3) if 0 < x ≤ 0.5, the expression degree of the right side of the face is y and that of the left side is 2 × (0.5 − x) × y, so controllers on the right side use w_i = y and controllers on the left side use w_i = 2 × (0.5 − x) × y.
CN201510013022.6A 2015-01-09 2015-01-09 Expression generation method for three-dimensional cartoon character based on element expression Withdrawn CN104599309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510013022.6A CN104599309A (en) 2015-01-09 2015-01-09 Expression generation method for three-dimensional cartoon character based on element expression


Publications (1)

Publication Number Publication Date
CN104599309A (en)

Family

ID=53125059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510013022.6A Withdrawn CN104599309A (en) 2015-01-09 2015-01-09 Expression generation method for three-dimensional cartoon character based on element expression

Country Status (1)

Country Link
CN (1) CN104599309A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101149840A (en) * 2006-09-20 2008-03-26 清华大学 Complex expression emulation system and implementation method
CN101271593A (en) * 2008-04-03 2008-09-24 石家庄市桥西区深度动画工作室 Auxiliary production system of 3Dmax cartoon
US20140240324A1 (en) * 2008-12-04 2014-08-28 Intific, Inc. Training system and methods for dynamically injecting expression information into an animated facial mesh
CN101739709A (en) * 2009-12-24 2010-06-16 四川大学 Control method of three-dimensional facial animation
CN103198508A (en) * 2013-04-07 2013-07-10 河北工业大学 Human face expression animation generation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘箴 (Liu Zhen): "Synthesis of emotion vectors and expression vectors of virtual humans", 《系统仿真学报》 (Journal of System Simulation) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096366A (en) * 2015-07-23 2015-11-25 文化传信科技(澳门)有限公司 3D virtual service publishing platform system
CN107180446A (en) * 2016-03-10 2017-09-19 腾讯科技(深圳)有限公司 The expression animation generation method and device of character face's model
US11087519B2 (en) 2017-05-12 2021-08-10 Tencent Technology (Shenzhen) Company Limited Facial animation implementation method, computer device, and storage medium
CN108876879A (en) * 2017-05-12 2018-11-23 腾讯科技(深圳)有限公司 Method, apparatus, computer equipment and the storage medium that human face animation is realized
CN108876879B (en) * 2017-05-12 2022-06-14 腾讯科技(深圳)有限公司 Method and device for realizing human face animation, computer equipment and storage medium
CN108022277A (en) * 2017-12-02 2018-05-11 天津浩宝丰科技有限公司 A kind of cartoon character design methods
CN108304072A (en) * 2018-02-09 2018-07-20 北京北行科技有限公司 A kind of VR virtual worlds role's expression implanted device and method for implantation
CN110874869A (en) * 2018-08-31 2020-03-10 百度在线网络技术(北京)有限公司 Method and device for generating virtual animation expression
CN110874869B (en) * 2018-08-31 2020-11-13 百度在线网络技术(北京)有限公司 Method and device for generating virtual animation expression
WO2020233253A1 (en) * 2019-05-17 2020-11-26 网易(杭州)网络有限公司 Expression realization method and device for virtual character, and storage medium
US11837020B2 (en) 2019-05-17 2023-12-05 Netease (Hangzhou) Network Co., Ltd. Expression realization method and device for virtual character, and storage medium
CN110163939A (en) * 2019-05-28 2019-08-23 上海米哈游网络科技股份有限公司 Three-dimensional animation role's expression generation method, apparatus, equipment and storage medium
CN113658308A (en) * 2021-08-25 2021-11-16 福建天晴数码有限公司 Method and system for reusing actions of roles in different body types
CN113781611A (en) * 2021-08-25 2021-12-10 北京壳木软件有限责任公司 Animation production method and device, electronic equipment and storage medium
CN113658308B (en) * 2021-08-25 2024-01-05 福建天晴数码有限公司 Method and system for multiplexing actions of roles of different body types
CN114187177A (en) * 2021-11-30 2022-03-15 北京字节跳动网络技术有限公司 Method, device and equipment for generating special effect video and storage medium
CN114187177B (en) * 2021-11-30 2024-06-07 抖音视界有限公司 Method, device, equipment and storage medium for generating special effect video

Similar Documents

Publication Publication Date Title
CN104599309A (en) Expression generation method for three-dimensional cartoon character based on element expression
CN100527170C (en) Complex expression emulation system and implementation method
CN102509333B (en) Action-capture-data-driving-based two-dimensional cartoon expression animation production method
CN104599305B (en) A kind of two three-dimensional animation producing methods combined
Wu et al. Establishment virtual maintenance environment based on VIRTOOLS to effectively enhance the sense of immersion of teaching equipment
CN101739709A (en) Control method of three-dimensional facial animation
CN104574481B (en) A kind of non-linear amending method of three-dimensional character animation
CN110443872B (en) Expression synthesis method with dynamic texture details
CN104835195A (en) Hierarchical skeleton model for virtual body posture control
CN114998488A (en) Virtual human model making method suitable for sign language information propagation
CN112634456B (en) Real-time high-realism drawing method of complex three-dimensional model based on deep learning
CN108908353A (en) Robot expression based on the reverse mechanical model of smoothness constraint imitates method and device
CN104537704A (en) Real-time dynamic generating method for features on bird body model
CN104574475B (en) A kind of fine animation method based on secondary controller
Xu Immersive display design based on deep learning intelligent VR technology
Wang et al. Application of Virtual Reality Technology and 3D Technology in Game Animation Production
Tang et al. Research on facial expression animation based on 2d mesh morphing driven by pseudo muscle model
Wang et al. A physically-based modeling and simulation framework for facial animation
CN102298784A (en) Cloud model-based synthetic method for facial expressions
Malazita et al. Contextualizing 3D printing's and photosculpture's contributions to techno-creative literacies
Li [Retracted] Application of Computational Digital Technology in the Improvement of Art Creation
Obradovic et al. Fine arts subjects at computer graphics studies at the Faculty of technical sciences in Novi Sad
Li et al. Overview of research on virtual intelligent human modeling technology
Huang et al. 3D Communication of Animated Character Image Visualization Based on Digital Media Perspectives
Lan Development and Application of 3D Modeling in Game

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160513

Address after: 100084, No. 8, building No. 1, Zhongguancun East Road, Haidian District, Beijing, CG05-101

Applicant after: Beijing spring film technology Co., Ltd.

Address before: 100083 No. 95 East Zhongguancun Road, Beijing, Haidian District

Applicant before: Beijing section skill has appearance science and technology limited Company

WW01 Invention patent application withdrawn after publication

Application publication date: 20150506
