CN104658022A - Method and device for generating three-dimensional cartoons - Google Patents
- Publication number
- CN104658022A (application CN201310585986.9A)
- Authority
- CN
- China
- Prior art keywords
- key point
- bone
- motion
- model
- jth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method and a device for generating three-dimensional cartoons. The method comprises the following steps: extracting skeleton framework information of a character body; calibrating bone key points in the skeleton framework; placing the bone key points in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model; and determining the motion of the model key points according to the motion of the bone key points. The three-dimensional cartoon is thus driven by the actions of a video character, can be generated in real time, and can be produced more efficiently.
Description
Technical field
The present invention relates to the field of communications, and in particular to a method and device for generating three-dimensional animation.
Background art
The production process of existing three-dimensional animation is relatively complex: a character is generally built by three-dimensional modeling and then driven by pre-authored actions.
Driving three-dimensional animation is complicated. It requires a professional engine and professional producers who spend a great deal of time, and problems such as unnatural motion may still arise.
Video content of all kinds is now plentiful, while corresponding animated content remains comparatively scarce. Turning an existing television production into an animated version essentially means remaking everything, which is time-consuming and laborious.
Summary of the invention
Embodiments of the present invention provide a method and device for generating three-dimensional animation, in which the three-dimensional animation is driven by the actions of a video character, so that animation can be generated in real time and production efficiency is improved.
According to one aspect of the present invention, a method for generating three-dimensional animation is provided, comprising:
extracting skeleton framework information of a character body;
calibrating bone key points in the skeleton framework;
placing the bone key points in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model;
determining the motion of the model key points according to the motion of the bone key points, thereby driving the motion of the three-dimensional model.
Preferably, the step of determining the motion of the model key points according to the motion of the bone key points comprises:

the direction of motion $P_{id}$ of model key point $P_i$ is

$$P_{id} = \sum_{j=1}^{n} W_{ji} K_{jd}$$

and the motion amplitude $P_{if}$ of model key point $P_i$ is

$$P_{if} = \sum_{j=1}^{n} W_{ji} K_{jf}$$

where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
Preferably, the step of determining the motion of the model key points according to the motion of the bone key points comprises:

the direction of motion $P_{id}$ of model key point $P_i$ is

$$P_{id} = W_{ii} K_{id} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jd}$$

and the motion amplitude $P_{if}$ of model key point $P_i$ is

$$P_{if} = W_{ii} K_{if} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jf}$$

where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $W_{ii}$ is the weight of the $i$-th bone key point $K_i$ relative to itself, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
Preferably, $W_{ji}$ is inversely proportional to the distance from the $j$-th bone key point $K_j$ to the $i$-th bone key point $K_i$, where $K_i$ corresponds to $P_i$.
Preferably, after the step of extracting the skeleton framework information of the character body, the method further comprises:
calculating the degree of separation of each bone in the skeleton framework;
judging whether any bone has a degree of separation greater than a predetermined threshold;
if such a bone exists, removing it;
and then performing the step of calibrating the bone key points in the skeleton framework information.
According to another aspect of the present invention, a device for generating three-dimensional animation is provided, comprising a skeleton framework extraction unit, a key point calibration unit, a matching unit, and a model-driving unit, wherein:
the skeleton framework extraction unit is configured to extract skeleton framework information of a character body;
the key point calibration unit is configured to calibrate bone key points in the skeleton framework;
the matching unit is configured to place the bone key points in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model;
and the model-driving unit is configured to determine the motion of the model key points according to the motion of the bone key points, thereby driving the motion of the three-dimensional model.
Preferably, the model-driving unit specifically uses the formulas

$$P_{id} = \sum_{j=1}^{n} W_{ji} K_{jd}, \qquad P_{if} = \sum_{j=1}^{n} W_{ji} K_{jf}$$

to determine the direction of motion $P_{id}$ and the motion amplitude $P_{if}$ of model key point $P_i$, where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
Preferably, the model-driving unit specifically uses the formulas

$$P_{id} = W_{ii} K_{id} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jd}, \qquad P_{if} = W_{ii} K_{if} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jf}$$

to determine the direction of motion $P_{id}$ and the motion amplitude $P_{if}$ of model key point $P_i$, where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $W_{ii}$ is the weight of the $i$-th bone key point $K_i$ relative to itself, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
Preferably, $W_{ji}$ is inversely proportional to the distance from the $j$-th bone key point $K_j$ to the $i$-th bone key point $K_i$, where $K_i$ corresponds to $P_i$.
Preferably, the above device further comprises a pseudo-branch removal unit, wherein:
the pseudo-branch removal unit is configured to, after the skeleton framework extraction unit extracts the skeleton framework information of the character body, calculate the degree of separation of each bone in the skeleton framework, judge whether any bone has a degree of separation greater than a predetermined threshold, remove any such bone, and then instruct the key point calibration unit to perform the operation of calibrating the bone key points in the skeleton framework information.
The present invention extracts the skeleton framework information of a character body, calibrates the bone key points in the skeleton framework, places the bone key points in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model, and determines the motion of the model key points according to the motion of the bone key points, thereby driving the motion of the three-dimensional model. The three-dimensional animation is therefore driven by the actions of a video character, so animation can be generated in real time and production efficiency is improved.
The description of the present invention is provided for purposes of illustration and is not intended to be exhaustive or to limit the invention to the disclosed form. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described to better explain the principles and practical application of the invention, and to enable those of ordinary skill in the art to understand the invention and design various embodiments, with various modifications, suited to particular uses.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an embodiment of the method for generating three-dimensional animation of the present invention.
Fig. 2 is a schematic diagram of bone key points of the present invention.
Fig. 3 is a schematic diagram of an embodiment of the device for generating three-dimensional animation of the present invention.
Fig. 4 is a schematic diagram of another embodiment of the device for generating three-dimensional animation of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The following description of at least one exemplary embodiment is in fact merely illustrative and in no way limits the present invention or its application or use. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Unless specifically stated otherwise, the relative arrangement of components, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the invention.
It should also be understood that, for convenience of description, the sizes of the various parts shown in the drawings are not drawn to actual scale.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate they should be regarded as part of the granted specification.
In all the examples shown and discussed here, any specific value should be interpreted as merely exemplary rather than limiting; other examples of the exemplary embodiments may therefore have different values.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be discussed further in subsequent drawings.
Fig. 1 is a schematic diagram of an embodiment of the method for generating three-dimensional animation of the present invention. As shown in Fig. 1, the steps of this embodiment are as follows:
Step 101: extract the skeleton framework information of the character body.
The character body is analyzed and subjected to multiple iterations of erosion. The number of iterations is judged from the result of the final iteration and differs slightly between characters: a character with a simple surface structure needs correspondingly few iterations, while one with a complex surface structure needs more iterations to preserve accuracy. Each iteration merges the surface data to some degree, until only the skeleton framework remains.
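The iterative-erosion idea can be sketched in a few lines. This is a minimal illustration rather than the patent's implementation: the structuring element and the stopping rule below are assumptions, and plain repeated erosion yields only a crude core of the shape, not a true topological skeleton.

```python
import numpy as np

def erode(mask):
    """One erosion step with a cross-shaped (4-connectivity) structuring element."""
    padded = np.pad(mask, 1)
    out = mask.copy()
    # a pixel survives only if all four axis-neighbours are foreground
    out &= padded[:-2, 1:-1] & padded[2:, 1:-1] & padded[1:-1, :-2] & padded[1:-1, 2:]
    return out

def skeleton_by_erosion(mask, max_iters=50):
    """Repeatedly erode; stop just before the shape vanishes and keep the
    last non-empty result as a crude core/skeleton approximation."""
    current = mask.astype(bool)
    for _ in range(max_iters):
        nxt = erode(current)
        if not nxt.any():
            break
        current = nxt
    return current
```

For a filled square this converges to the small central core, illustrating how repeated erosion "merges the surface data" toward a skeletal remnant.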
For a two-dimensional video character, skeleton extraction resembles the removal of boundary points: an opening operation filters out burrs smaller than the structuring element, smoothing the outer boundary points, while a closing operation fills notches or holes smaller than the structuring element, smoothing the inner boundary points.
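A toy version of these morphological operations, assuming a full 3x3 structuring element (the patent does not fix the element), might look like:

```python
import numpy as np

def _neighborhoods(mask):
    """Stack the nine 3x3-neighbourhood views of a boolean image (padded with False)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def erode3x3(mask):
    # a pixel survives only if its whole 3x3 neighbourhood is foreground
    return _neighborhoods(mask).all(axis=0)

def dilate3x3(mask):
    # a pixel becomes foreground if anything in its 3x3 neighbourhood is
    return _neighborhoods(mask).any(axis=0)

def opening(mask):
    """Erosion then dilation: removes burrs smaller than the element."""
    return dilate3x3(erode3x3(mask))

def closing(mask):
    """Dilation then erosion: fills notches/holes smaller than the element."""
    return erode3x3(dilate3x3(mask))
```

Opening a square with a one-pixel spur returns the clean square; closing a square with a one-pixel hole fills the hole, matching the boundary-smoothing behaviour described above.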
Preferably, because the skeleton is connected, the degree of separation of each bone can be calculated after the iterations. For any point, if its degree of separation is less than a particular value no processing is needed; otherwise the point is removed as a pseudo-branch.
In one embodiment, after the step of extracting the skeleton framework information of the character body, the degree of separation of each bone in the skeleton framework is further calculated, and it is judged whether any bone has a degree of separation greater than a predetermined threshold. If such a bone exists, it is removed.
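The text does not define how the degree of separation is computed. One plausible stand-in, sketched below, treats the skeleton as a graph and measures each leaf branch by its length to the nearest junction, flagging branches that fail a threshold test as pseudo-branches:

```python
from collections import defaultdict

def leaf_branch_lengths(edges):
    """For a skeleton given as an edge list, return {leaf node: number of
    edges walked from that leaf to the first junction (degree >= 3 node)}."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    lengths = {}
    for node in adj:
        if len(adj[node]) != 1:        # only start from leaves
            continue
        prev, cur, steps = None, node, 0
        while len(adj[cur]) <= 2:      # follow the chain until a junction
            nxt = [n for n in adj[cur] if n != prev]
            if not nxt:                # isolated chain: no junction at all
                break
            prev, cur, steps = cur, nxt[0], steps + 1
        lengths[node] = steps
    return lengths

def pseudo_branch_leaves(edges, threshold):
    """Leaves of branches that fail the test and would be pruned; 'too short
    a branch' is used as the rejection criterion here, an assumption since
    the patent's metric is unspecified."""
    return {leaf for leaf, n in leaf_branch_lengths(edges).items() if n < threshold}
```

On a trunk 0-1-2-3-4 with a one-edge spur hanging off node 2, only the spur is flagged for removal.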
Step 102: calibrate the bone key points in the skeleton framework.
For example, key points are identified according to skeleton features. Key points of the human body structure are preset, such as the left and right knee joints, shoulder joints, and elbow joints. Roughly how many key points need to be calibrated can be bounded in advance, and the degree of refinement of joint calibration can be decided; some key points can also be matched manually, such as the index finger among the fingers. A corresponding schematic diagram is shown in Fig. 2.
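As an illustration only, preset joints and a nearest-point assignment could look like the following; the joint names and normalized coordinates are hypothetical, not taken from the patent:

```python
import math

# a few hypothetical preset joints in normalized image coordinates;
# a real system would cover the full body and allow manual refinement
PRESET_JOINTS = {
    "l_knee": (0.35, 0.75),
    "r_knee": (0.65, 0.75),
    "l_elbow": (0.25, 0.45),
    "r_elbow": (0.75, 0.45),
}

def calibrate_keypoints(skeleton_points):
    """Assign each preset joint the nearest extracted skeleton point."""
    def nearest(target):
        return min(skeleton_points, key=lambda p: math.dist(p, target))
    return {name: nearest(pos) for name, pos in PRESET_JOINTS.items()}
```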
Step 103: place the bone key points in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model.
That is, the bone key points are placed in one-to-one correspondence with a three-dimensional model built in advance (an existing three-dimensional model can be made with professional software such as Maya or 3ds Max), so that the skeleton model is associated with the three-dimensional model through the key points. The extracted skeleton and the three-dimensional model both carry a number of key points, and in the correspondence these must agree: for every key point on the skeleton, the three-dimensional model should have a key point at the corresponding position. The skeleton can then be mapped onto the three-dimensional model, which prepares for the subsequent driving of the model.
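The one-to-one requirement amounts to pairing two equally sized, consistently ordered key-point lists. A minimal sketch (the names are hypothetical):

```python
def match_keypoints(bone_keypoints, model_keypoints):
    """Pair the i-th bone key point with the i-th model key point.
    Both lists must describe the same joints in the same order, and the
    counts must agree, as the text requires."""
    if len(bone_keypoints) != len(model_keypoints):
        raise ValueError("skeleton and model must define the same number of key points")
    return dict(zip(bone_keypoints, model_keypoints))
```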
Step 104: determine the motion of the model key points according to the motion of the bone key points, thereby driving the motion of the three-dimensional model.
When driving, the points of the model surface are first assigned weights with respect to the bones: the closer a surface point is to a key point, the larger that key point's weight, while more distant key points have less influence. For example, a foot key point affects head motion far less than the neck does. The relative motion of the bones is then used to move the model: the motion of the skeleton model is defined mainly on the key points, and the motion of the key points drives the relative motion of all surface points, achieving the model-driving effect.
With the method for generating three-dimensional animation provided by the above embodiment of the present invention, the skeleton framework information of a character body is extracted, the bone key points in the skeleton framework are calibrated, the bone key points are placed in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model, and the motion of the model key points is determined according to the motion of the bone key points, thereby driving the motion of the three-dimensional model. The three-dimensional animation is therefore driven by the actions of a video character, so animation can be generated in real time and production efficiency is improved.
Preferably, the step of determining the motion of the model key points according to the motion of the bone key points comprises:

the direction of motion $P_{id}$ of model key point $P_i$ is

$$P_{id} = \sum_{j=1}^{n} W_{ji} K_{jd}$$

and the motion amplitude $P_{if}$ of model key point $P_i$ is

$$P_{if} = \sum_{j=1}^{n} W_{ji} K_{jf}$$

where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
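Assuming the weighted-sum form above, the drive computation for all model key points vectorizes directly; this is an illustrative sketch, with the weight matrix taken as given:

```python
import numpy as np

def drive_model_keypoints(K_dir, K_amp, W):
    """Weighted drive of model key points from bone key points.
    K_dir: (n, 3) array, row j = motion direction K_jd of bone key point j
    K_amp: (n,)  array, entry j = motion amplitude K_jf of bone key point j
    W:     (n, n) array, W[j, i] = weight of bone key point j on model key point i
    Returns P_dir (n, 3) and P_amp (n,) per P_id = sum_j W_ji * K_jd and
    P_if = sum_j W_ji * K_jf."""
    K_dir = np.asarray(K_dir, dtype=float)
    K_amp = np.asarray(K_amp, dtype=float)
    W = np.asarray(W, dtype=float)
    return W.T @ K_dir, W.T @ K_amp
```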
In another embodiment, the weight of a bone key point relative to itself can be further emphasized. Preferably, the step of determining the motion of the model key points according to the motion of the bone key points then comprises:

the direction of motion $P_{id}$ of model key point $P_i$ is

$$P_{id} = W_{ii} K_{id} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jd}$$

and the motion amplitude $P_{if}$ of model key point $P_i$ is

$$P_{if} = W_{ii} K_{if} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jf}$$

where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $W_{ii}$ is the weight of the $i$-th bone key point $K_i$ relative to itself, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
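One way to realize the self-weight emphasis is to boost the diagonal of the weight matrix before the same weighted sum; the boost factor below is an assumption, since the text only says the self weight can be increased:

```python
import numpy as np

def drive_with_self_emphasis(K_dir, K_amp, W, boost=2.0):
    """Like the plain weighted drive, but each bone key point's weight on its
    own model key point (W[i, i]) is scaled by `boost` first."""
    W = np.asarray(W, dtype=float).copy()
    W[np.diag_indices_from(W)] *= boost
    K_dir = np.asarray(K_dir, dtype=float)
    K_amp = np.asarray(K_amp, dtype=float)
    return W.T @ K_dir, W.T @ K_amp
```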
Preferably, $W_{ji}$ is inversely proportional to the distance from the $j$-th bone key point $K_j$ to the $i$-th bone key point $K_i$, where $K_i$ corresponds to $P_i$. That is, the neck key point influences the head key point much more than the sole-of-the-foot key point does.
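Inverse-distance weights consistent with this preference can be built as below; the self-weight value and the per-column normalization are illustrative choices not fixed by the text:

```python
import numpy as np

def inverse_distance_weights(points, self_weight=1.0):
    """W[j, i] = 1 / ||K_j - K_i|| for j != i; a key point's weight on itself
    is set to `self_weight`, and each column is normalized so the weights
    acting on one model key point sum to 1."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    safe = np.where(d > 0, d, 1.0)          # avoid division by zero on the diagonal
    W = np.where(d > 0, 1.0 / safe, 0.0)
    np.fill_diagonal(W, self_weight)
    return W / W.sum(axis=0, keepdims=True)
```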
Fig. 3 is a schematic diagram of an embodiment of the device for generating three-dimensional animation of the present invention. As shown in Fig. 3, the device comprises a skeleton framework extraction unit 301, a key point calibration unit 302, a matching unit 303, and a model-driving unit 304, wherein:
the skeleton framework extraction unit 301 is configured to extract the skeleton framework information of a character body;
the key point calibration unit 302 is configured to calibrate the bone key points in the skeleton framework;
the matching unit 303 is configured to place the bone key points in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model;
and the model-driving unit 304 is configured to determine the motion of the model key points according to the motion of the bone key points, thereby driving the motion of the three-dimensional model.
With the device for generating three-dimensional animation provided by the above embodiment of the present invention, the skeleton framework information of a character body is extracted, the bone key points in the skeleton framework are calibrated, the bone key points are placed in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model, and the motion of the model key points is determined according to the motion of the bone key points, thereby driving the motion of the three-dimensional model. The three-dimensional animation is therefore driven by the actions of a video character, so animation can be generated in real time and production efficiency is improved.
Preferably, the model-driving unit specifically uses the formulas

$$P_{id} = \sum_{j=1}^{n} W_{ji} K_{jd}, \qquad P_{if} = \sum_{j=1}^{n} W_{ji} K_{jf}$$

to determine the direction of motion $P_{id}$ and the motion amplitude $P_{if}$ of model key point $P_i$, where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
In another embodiment, the model-driving unit specifically uses the formulas

$$P_{id} = W_{ii} K_{id} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jd}, \qquad P_{if} = W_{ii} K_{if} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jf}$$

to determine the direction of motion $P_{id}$ and the motion amplitude $P_{if}$ of model key point $P_i$, where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $W_{ii}$ is the weight of the $i$-th bone key point $K_i$ relative to itself, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
Preferably, $W_{ji}$ is inversely proportional to the distance from the $j$-th bone key point $K_j$ to the $i$-th bone key point $K_i$, where $K_i$ corresponds to $P_i$.
Fig. 4 is a schematic diagram of another embodiment of the device for generating three-dimensional animation of the present invention. Compared with the embodiment shown in Fig. 3, the device in the embodiment shown in Fig. 4 further comprises a pseudo-branch removal unit 401, wherein:
the pseudo-branch removal unit 401 is configured to, after the skeleton framework extraction unit 301 extracts the skeleton framework information of the character body, calculate the degree of separation of each bone in the skeleton framework, judge whether any bone has a degree of separation greater than a predetermined threshold, remove any such bone, and then instruct the key point calibration unit 302 to perform the operation of calibrating the bone key points in the skeleton framework information.
In the present invention, three-dimensional animation is driven by the actions of a video character, so animation can be generated in real time and production efficiency is improved.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
Claims (10)
1. A method for generating three-dimensional animation, characterized by comprising:
extracting skeleton framework information of a character body;
calibrating bone key points in the skeleton framework;
placing the bone key points in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model;
determining the motion of the model key points according to the motion of the bone key points, thereby driving the motion of the three-dimensional model.
2. The method according to claim 1, characterized in that the step of determining the motion of the model key points according to the motion of the bone key points comprises:

the direction of motion $P_{id}$ of model key point $P_i$ is

$$P_{id} = \sum_{j=1}^{n} W_{ji} K_{jd}$$

and the motion amplitude $P_{if}$ of model key point $P_i$ is

$$P_{if} = \sum_{j=1}^{n} W_{ji} K_{jf}$$

where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
3. The method according to claim 1, characterized in that the step of determining the motion of the model key points according to the motion of the bone key points comprises:

the direction of motion $P_{id}$ of model key point $P_i$ is

$$P_{id} = W_{ii} K_{id} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jd}$$

and the motion amplitude $P_{if}$ of model key point $P_i$ is

$$P_{if} = W_{ii} K_{if} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jf}$$

where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $W_{ii}$ is the weight of the $i$-th bone key point $K_i$ relative to itself, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
4. The method according to claim 2 or 3, characterized in that $W_{ji}$ is inversely proportional to the distance from the $j$-th bone key point $K_j$ to the $i$-th bone key point $K_i$, where $K_i$ corresponds to $P_i$.
5. The method according to claim 1, characterized in that, after the step of extracting the skeleton framework information of the character body, the method further comprises:
calculating the degree of separation of each bone in the skeleton framework;
judging whether any bone has a degree of separation greater than a predetermined threshold;
if such a bone exists, removing it;
and then performing the step of calibrating the bone key points in the skeleton framework information.
6. A device for generating three-dimensional animation, characterized by comprising a skeleton framework extraction unit, a key point calibration unit, a matching unit, and a model-driving unit, wherein:
the skeleton framework extraction unit is configured to extract skeleton framework information of a character body;
the key point calibration unit is configured to calibrate bone key points in the skeleton framework;
the matching unit is configured to place the bone key points in one-to-one correspondence with the matching model key points of a pre-built three-dimensional model;
and the model-driving unit is configured to determine the motion of the model key points according to the motion of the bone key points, thereby driving the motion of the three-dimensional model.
7. The device according to claim 6, characterized in that the model-driving unit specifically uses the formulas

$$P_{id} = \sum_{j=1}^{n} W_{ji} K_{jd}, \qquad P_{if} = \sum_{j=1}^{n} W_{ji} K_{jf}$$

to determine the direction of motion $P_{id}$ and the motion amplitude $P_{if}$ of model key point $P_i$, where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
8. The device according to claim 6, characterized in that the model-driving unit specifically uses the formulas

$$P_{id} = W_{ii} K_{id} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jd}, \qquad P_{if} = W_{ii} K_{if} + \sum_{j=1,\, j \neq i}^{n} W_{ji} K_{jf}$$

to determine the direction of motion $P_{id}$ and the motion amplitude $P_{if}$ of model key point $P_i$, where $1 \le i \le n$, $n$ is the number of model key points, $W_{ji}$ is the weight of the $j$-th bone key point $K_j$ relative to the $i$-th model key point $P_i$, $W_{ii}$ is the weight of the $i$-th bone key point $K_i$ relative to itself, $K_{jd}$ is the direction of motion of the $j$-th bone key point, and $K_{jf}$ is the motion amplitude of the $j$-th bone key point $K_j$.
9. The device according to claim 7 or 8, characterized in that $W_{ji}$ is inversely proportional to the distance from the $j$-th bone key point $K_j$ to the $i$-th bone key point $K_i$, where $K_i$ corresponds to $P_i$.
10. The device according to claim 6, characterized in that the device further comprises a pseudo-branch removal unit, wherein:
the pseudo-branch removal unit is configured to, after the skeleton framework extraction unit extracts the skeleton framework information of the character body, calculate the degree of separation of each bone in the skeleton framework, judge whether any bone has a degree of separation greater than a predetermined threshold, remove any such bone, and then instruct the key point calibration unit to perform the operation of calibrating the bone key points in the skeleton framework information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310585986.9A CN104658022B (en) | 2013-11-20 | 2013-11-20 | Three-dimensional animation manufacturing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104658022A true CN104658022A (en) | 2015-05-27 |
CN104658022B CN104658022B (en) | 2019-02-26 |
Family
ID=53249098
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310585986.9A Active CN104658022B (en) | 2013-11-20 | 2013-11-20 | Three-dimensional animation manufacturing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104658022B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023287A (en) * | 2016-05-31 | 2016-10-12 | 中国科学院计算技术研究所 | Data driven interactive type three-dimensional animation compositing method and data driven interactive type three-dimensional animation compositing system |
CN106845400A (en) * | 2017-01-19 | 2017-06-13 | 南京开为网络科技有限公司 | A kind of brand show method realized special efficacy and produce based on the tracking of face key point |
CN106951095A (en) * | 2017-04-07 | 2017-07-14 | 胡轩阁 | Virtual reality interactive approach and system based on 3-D scanning technology |
CN107845126A (en) * | 2017-11-21 | 2018-03-27 | 江西服装学院 | A kind of three-dimensional animation manufacturing method and device |
CN109727302A (en) * | 2018-12-28 | 2019-05-07 | 网易(杭州)网络有限公司 | Bone creation method, device, electronic equipment and storage medium |
WO2019178853A1 (en) * | 2018-03-23 | 2019-09-26 | 真玫智能科技(深圳)有限公司 | Runway show realization method and device |
CN110310350A (en) * | 2019-06-24 | 2019-10-08 | 清华大学 | Action prediction generation method and device based on animation |
CN110321008A (en) * | 2019-06-28 | 2019-10-11 | 北京百度网讯科技有限公司 | Exchange method, device, equipment and storage medium based on AR model |
CN112037310A (en) * | 2020-08-27 | 2020-12-04 | 成都先知者科技有限公司 | Game character action recognition generation method based on neural network |
WO2021164653A1 (en) * | 2020-02-18 | 2021-08-26 | 京东方科技集团股份有限公司 | Method and device for generating animated figure, and storage medium |
WO2021169839A1 (en) * | 2020-02-29 | 2021-09-02 | 华为技术有限公司 | Action restoration method and device based on skeleton key points |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1725246A (en) * | 2005-07-14 | 2006-01-25 | 中国科学院计算技术研究所 | A kind of human body posture deforming method based on video content |
CN101197049A (en) * | 2007-12-21 | 2008-06-11 | 西北工业大学 | Full-automatic driving method of three-dimensional motion model based on three-dimensional motion parameter |
CN102509338A (en) * | 2011-09-20 | 2012-06-20 | 北京航空航天大学 | Contour and skeleton diagram-based video scene behavior generation method |
CN102509333A (en) * | 2011-12-07 | 2012-06-20 | 浙江大学 | Action-capture-data-driving-based two-dimensional cartoon expression animation production method |