CN113313792A - Animation video production method and device - Google Patents

Animation video production method and device

Info

Publication number
CN113313792A
CN113313792A
Authority
CN
China
Prior art keywords
action
animation
groups
keywords
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110560177.7A
Other languages
Chinese (zh)
Inventor
黄昌正
周言明
陈曦
黄庆麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Yilian Interation Information Technology Co ltd
Nanjing Harley Intelligent Technology Co ltd
Guangzhou Huanjing Technology Co ltd
Original Assignee
Dongguan Yilian Interation Information Technology Co ltd
Nanjing Harley Intelligent Technology Co ltd
Guangzhou Huanjing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Yilian Interation Information Technology Co ltd, Nanjing Harley Intelligent Technology Co ltd, Guangzhou Huanjing Technology Co ltd filed Critical Dongguan Yilian Interation Information Technology Co ltd
Priority to CN202110560177.7A
Publication of CN113313792A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides a method and a device for producing an animation video. The method comprises the following steps: establishing an action animation model library that comprises a plurality of groups of action animation models; acquiring text content; sequentially extracting a plurality of groups of action keywords, and the attribute keywords corresponding to the action keywords, from the text content; sequentially matching the plurality of groups of action keywords against the action animation model library to determine a plurality of groups of first action animation models from the plurality of groups of action animation models; and sequentially producing the plurality of groups of first action animation models into an animation video according to the attribute keywords.

Description

Animation video production method and device
Technical Field
The invention relates to the technical field of animation video production, and in particular to a method and an apparatus for producing an animation video.
Background
At present, there are two main methods of producing animation. The first subdivides the animation into frames: static pictures are drawn and then concatenated frame by frame to form a coherent animation. This method is mainly used for two-dimensional animation scenes, and its drawbacks are that the drawing consumes a relatively long time and relatively large labor resources.
In the second method, motion capture technology is applied to collect the body motions of a person; the motion data are then transmitted to a computing device, and the body motions are restored by modeling with a professional engine and algorithms.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a method of producing an animated video and a corresponding apparatus for producing an animated video that overcome or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present invention discloses a method for producing an animation video, where the method includes:
establishing an action animation model library, wherein the action animation model library comprises a plurality of groups of action animation models;
acquiring text content;
sequentially extracting a plurality of groups of action keywords and attribute keywords corresponding to the action keywords in the text content;
matching the plurality of groups of action keywords with the action animation model library in sequence, and determining a plurality of groups of first action animation models from the plurality of groups of action animation models;
and sequentially making the plurality of groups of first action animation models into animation videos according to the attribute keywords.
Optionally, the step of building a library of action animation models comprises:
acquiring action data;
making an initial action animation model by adopting the action data;
acquiring an action label corresponding to the initial action animation model;
generating an action animation model by adopting the initial action animation model and the action label;
repeating the steps to obtain a plurality of groups of action animation models;
and establishing an action animation model library by adopting the plurality of groups of action animation models.
Optionally, the step of sequentially extracting a plurality of groups of action keywords and corresponding attribute keywords in the text content includes:
dividing the text content into a plurality of sentences according to punctuation marks;
sequentially recognizing verbs from the sentences as action keywords;
and identifying the words associated with the verbs as attribute keywords corresponding to the action keywords.
Optionally, the step of sequentially creating the plurality of groups of first motion animation models as animation videos according to the attribute keywords comprises:
determining action attributes by adopting the attribute keywords;
editing a first action animation model corresponding to the attribute keywords by adopting the action attributes to obtain a second action animation model;
repeating the steps to edit the plurality of groups of first action animation models into a plurality of groups of second action animation models;
and adopting the plurality of groups of second action animation models to produce animation videos.
The embodiment of the invention also discloses a device for making the animation video, which comprises:
the animation model library establishing module is used for establishing an action animation model library, and the action animation model library comprises a plurality of groups of action animation models;
the text content acquisition module is used for acquiring text content;
the action keyword and attribute keyword extraction module is used for sequentially extracting a plurality of groups of action keywords in the text content and attribute keywords corresponding to the action keywords;
the first action animation model determining module is used for matching the plurality of groups of action keywords with the action animation model library in sequence and determining a plurality of groups of first action animation models from the plurality of groups of action animation models;
and the animation video production module is used for producing the multiple groups of first action animation models into animation videos according to the attribute keywords in sequence.
Optionally, the animation model library building module includes:
the action data acquisition submodule is used for acquiring action data;
the initial action animation model making submodule is used for making an initial action animation model by adopting the action data;
the action tag obtaining submodule is used for obtaining an action tag corresponding to the initial action animation model;
the action animation model generation submodule is used for generating an action animation model by adopting the initial action animation model and the action label;
the repeated manufacturing submodule is used for repeating the steps to obtain a plurality of groups of action animation models;
and the action animation model base establishing submodule is used for establishing an action animation model base by adopting the plurality of groups of action animation models.
Optionally, the action keyword and attribute keyword extracting module includes:
the text content segmentation submodule is used for segmenting the text content into a plurality of sentences according to punctuations;
a verb identification submodule, which is used for sequentially identifying verbs from the sentences as action keywords;
and the verb associated word identification submodule is used for identifying the word associated with the verb as the attribute keyword corresponding to the action keyword.
Optionally, the animation video production module comprises:
the action attribute determining submodule is used for determining the action attribute by adopting the attribute key words;
the second action animation model generation module is used for editing the first action animation model corresponding to the attribute keyword by adopting the action attribute to obtain a second action animation model;
the repeated editing submodule is used for editing the multiple groups of first action animation models into multiple groups of second action animation models by repeating the steps;
and the animation video production submodule is used for producing animation videos by adopting the plurality of groups of second action animation models.
The embodiment of the invention has the following advantages: an action animation model library comprising a plurality of groups of action animation models is established; text content is acquired; a plurality of groups of action keywords, and the attribute keywords corresponding to them, are sequentially extracted from the text content; the plurality of groups of action keywords are sequentially matched against the action animation model library to determine a plurality of groups of first action animation models; and the plurality of groups of first action animation models are sequentially produced into an animation video according to the attribute keywords.
Drawings
Fig. 1 is a flowchart illustrating a first step of a method for creating an animation video according to an embodiment of the present invention.
FIG. 2 is a block diagram of a first embodiment of an apparatus for creating an animation video according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart illustrating steps of a first embodiment of a method for producing an animation video according to the present invention is shown, which may specifically include the following steps:
Step 101, establishing an action animation model library, wherein the action animation model library comprises a plurality of groups of action animation models;
In the embodiment of the invention, a motion animation model library is first established. The library comprises a plurality of groups of motion animation models, where a motion animation model is an animation model of an animated character performing a certain motion; the motion may be running, jumping, sitting, standing, and the like.
The step of establishing the action animation model library comprises the following steps:
Substep 1011, acquiring motion data;
Specifically, motion data of human body motions are captured using various motion capture devices, which may be data gloves, motion capture suits, cameras, and the like. For example, a tester makes a jumping motion, and the motion data of the jump are acquired.
Substep 1012, using the motion data to make an initial motion animation model;
after the motion data is acquired, the motion data is mapped into an animation model to produce an initial motion animation model, for example, after the motion data of a jump is acquired, the motion data of the jump is mapped into the animation model to produce the motion animation model of the jump.
A substep 1013, obtaining an action label corresponding to the initial action animation model;
the action tag is a description of the action animation model, for example, the action tag of the jumped action animation model may be a "jump," which may be input by a technician.
A substep 1014 generating an action animation model by using the initial action animation model and the action label;
substep 1015, repeating the above steps to obtain a plurality of groups of action animation models;
in an embodiment of the present invention, multiple sets of motion animation models may be obtained, for example, a jumping motion animation model, a running motion animation model, a sitting motion animation model, and the like may be obtained.
Substep 1016, establishing a motion animation model library by using the plurality of groups of motion animation models.
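The library-building substeps above can be sketched in Python. The class and function names here (ActionAnimationModel, build_model_library) and the dictionary keyed by action label are illustrative assumptions for this sketch, not part of the disclosure:

```python
# Hypothetical sketch of substeps 1011-1016: each captured motion is
# wrapped with a technician-supplied action label and stored in a
# library keyed by that label.

class ActionAnimationModel:
    def __init__(self, label, motion_data):
        self.label = label              # action label, e.g. "jump" (substep 1013)
        self.motion_data = motion_data  # captured motion frames (substeps 1011-1012)

def build_model_library(captures):
    """captures: iterable of (label, motion_data) pairs, one per capture session."""
    # Substeps 1015-1016: repeat for every capture and index the models by label.
    return {label: ActionAnimationModel(label, data) for label, data in captures}

library = build_model_library([
    ("jump", ["jump_frames"]),
    ("run", ["run_frames"]),
    ("sit", ["sit_frames"]),
])
```

A later matching step can then look a keyword up directly against the library's labels.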
Step 102, acquiring text content;
the text content may be a script text, which may be written by any person, and is not further limited by the embodiment of the present invention.
Step 103, sequentially extracting a plurality of groups of action keywords, and the attribute keywords corresponding to the action keywords, from the text content;
The action keywords are keywords indicating that a character in the script performs an action, such as jumping, running, and walking. The attribute keywords represent attributes of how the action is performed, and may be a speed attribute, a direction attribute, a height attribute, and the like.
Specifically, the step of sequentially extracting a plurality of groups of action keywords and corresponding attribute keywords in the text content includes:
Substep 1031, segmenting the text content into a plurality of sentences according to punctuation marks;
Substep 1032, sequentially recognizing verbs from the sentences as action keywords;
For example, if a sentence is "Xiaoming runs for two minutes in the southeast direction at a speed of two meters per second", the verb "run" can be recognized from the sentence as an action keyword.
Substep 1033, identifying the words associated with the verb as the attribute keywords corresponding to the action keyword.
For example, for the sentence "Xiaoming runs for two minutes in the southeast direction at a speed of two meters per second", the verb is "run", and the words associated with it are recognized as "southeast direction", "two meters per second", and "two minutes": "southeast direction" is the direction attribute keyword, "two meters per second" is the speed attribute keyword, and "two minutes" is the time attribute keyword corresponding to the "run" action keyword.
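A minimal rule-based Python sketch of substeps 1031-1033 follows. The verb list, the `rstrip("s")` stemming, and the attribute regexes are illustrative stand-ins for whatever word segmentation and association analysis a real implementation would use:

```python
import re

# Assumed vocabulary of action verbs; a real system would use a lexicon or parser.
ACTION_VERBS = {"run", "runs", "jump", "jumps", "walk", "walks", "sit", "sits"}

def split_sentences(text):
    # Substep 1031: segment the text content on punctuation marks.
    return [s.strip() for s in re.split(r"[.!?;]", text) if s.strip()]

def extract_keywords(sentence):
    # Substep 1032: the first known verb becomes the action keyword;
    # rstrip("s") is a deliberately naive stand-in for lemmatization.
    action = next((w.lower().rstrip("s") for w in sentence.split()
                   if w.lower() in ACTION_VERBS), None)
    # Substep 1033: words associated with the verb become attribute keywords.
    attrs = {}
    for name, pattern in (("direction", r"in the (\w+) direction"),
                          ("speed", r"speed of ([\w ]+ per \w+)"),
                          ("duration", r"(\w+ (?:minutes?|seconds?|hours?))")):
        match = re.search(pattern, sentence)
        if match:
            attrs[name] = match.group(1)
    return action, attrs

sentences = split_sentences(
    "Xiaoming runs for two minutes in the southeast direction "
    "at a speed of two meters per second. Then Xiaoming jumps.")
action, attrs = extract_keywords(sentences[0])
```

On the example sentence this yields the action keyword "run" with direction, speed, and duration attribute keywords, matching the worked example above.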
Step 104, matching the plurality of groups of action keywords with the action animation model library in sequence, and determining a plurality of groups of first action animation models from the plurality of groups of action animation models;
Specifically, an action keyword may be matched against the action tags of the action animation models: an action tag that is the same as or similar to the action keyword is searched for, so that the plurality of groups of first action animation models are determined from the plurality of groups of action animation models.
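The matching step can be sketched as follows. Using `difflib.get_close_matches` as the "similar tag" test is an assumption made for illustration, since the disclosure does not specify a similarity measure; the flat tag-to-model dictionary is likewise illustrative:

```python
from difflib import get_close_matches

def select_first_models(action_keywords, library):
    """Step 104 sketch. library: dict mapping action tag -> animation model."""
    selected = []
    for keyword in action_keywords:
        if keyword in library:             # identical tag found
            selected.append(library[keyword])
            continue
        # Otherwise accept the most similar tag, if any clears the cutoff.
        close = get_close_matches(keyword, library, n=1, cutoff=0.6)
        selected.append(library[close[0]] if close else None)
    return selected

library = {"run": "run_model", "jump": "jump_model", "sit": "sit_model"}
first_models = select_first_models(["runs", "jump"], library)
```

Here "runs" has no identical tag, so the similar tag "run" is accepted, while "jump" matches directly.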
Step 105, sequentially making the plurality of groups of first action animation models into animation videos according to the attribute keywords.
In an embodiment of the present invention, the step of sequentially creating the plurality of sets of first motion animation models as animation videos according to the attribute keywords includes:
Substep 1051, determining action attributes by using the attribute keywords;
Substep 1052, editing the first motion animation model corresponding to the attribute keywords by using the action attributes to obtain a second motion animation model;
For example, if the first motion animation model is a running motion animation model and the corresponding action attributes are "southeast direction", "two meters per second", and "two minutes", the running model is edited to obtain a second motion animation model that runs for two minutes in the southeast direction at a speed of two meters per second.
Substep 1053, repeating the above steps to edit the plurality of groups of first motion animation models into a plurality of groups of second motion animation models;
In the embodiment of the present invention, the above editing steps are repeated for every first motion animation model; for example, the running motion animation model is edited into a second model that runs for two minutes in the southeast direction at a speed of two meters per second, and a jumping motion animation model is edited into a second model that jumps two meters upward.
Substep 1054, producing animation videos by using the plurality of groups of second motion animation models.
Specifically, the plurality of groups of second motion animation models are produced into an animation video in the original script order; for example, the resulting animation video may show an animated character running for two minutes at a speed of two meters per second in the southeast direction and then jumping two meters upward.
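Substeps 1051-1054 can be sketched as below. Representing a model as a plain dictionary that absorbs attribute-derived playback parameters is an illustrative assumption; the disclosure does not specify the model format:

```python
def edit_model(first_model, attributes):
    # Substeps 1051-1052: a copy of the first model carrying the
    # attribute-derived parameters serves as the "second" model.
    second = dict(first_model)
    second.update(attributes)
    return second

def produce_video(first_models, attributes_per_model):
    # Substeps 1053-1054: repeat the edit for every first model and
    # concatenate the second models in the original script order.
    return [edit_model(m, a)
            for m, a in zip(first_models, attributes_per_model)]

clips = produce_video(
    [{"action": "run"}, {"action": "jump"}],
    [{"direction": "southeast", "speed": "two meters per second",
      "duration": "two minutes"},
     {"height": "two meters"}],
)
```

The resulting clip list mirrors the worked example: a run edited with direction, speed, and duration, followed by a two-meter jump.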
In the embodiment of the invention, a motion animation model library comprising a plurality of groups of motion animation models is established; text content is acquired; a plurality of groups of action keywords, and the attribute keywords corresponding to them, are sequentially extracted from the text content; the plurality of groups of action keywords are sequentially matched against the motion animation model library to determine a plurality of groups of first motion animation models; and the plurality of groups of first motion animation models are sequentially produced into an animation video according to the attribute keywords.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 2, a block diagram of a first embodiment of an apparatus for creating an animation video according to the present invention is shown, which may specifically include the following modules:
an animation model library establishing module 201, configured to establish an action animation model library, where the action animation model library includes multiple groups of action animation models;
a text content obtaining module 202, configured to obtain text content;
an action keyword and attribute keyword extraction module 203, configured to sequentially extract multiple groups of action keywords in the text content and attribute keywords corresponding to the action keywords;
a first action animation model determining module 204, configured to match the multiple sets of action keywords with the action animation model library in sequence, and determine multiple sets of first action animation models from the multiple sets of action animation models;
and the animation video making module 205 is configured to make the plurality of groups of first motion animation models into animation videos according to the attribute keywords in sequence.
In an embodiment of the present invention, the animation model library creating module includes:
the action data acquisition submodule is used for acquiring action data;
the initial action animation model making submodule is used for making an initial action animation model by adopting the action data;
the action tag obtaining submodule is used for obtaining an action tag corresponding to the initial action animation model;
the action animation model generation submodule is used for generating an action animation model by adopting the initial action animation model and the action label;
the repeated manufacturing submodule is used for repeating the steps to obtain a plurality of groups of action animation models;
and the action animation model base establishing submodule is used for establishing an action animation model base by adopting the plurality of groups of action animation models.
In the embodiment of the present invention, the action keyword and attribute keyword extracting module includes:
the text content segmentation submodule is used for segmenting the text content into a plurality of sentences according to punctuations;
a verb identification submodule, which is used for sequentially identifying verbs from the sentences as action keywords;
and the verb associated word identification submodule is used for identifying the word associated with the verb as the attribute keyword corresponding to the action keyword.
In an embodiment of the present invention, the animation video production module includes:
the action attribute determining submodule is used for determining the action attribute by adopting the attribute key words;
the second action animation model generation module is used for editing the first action animation model corresponding to the attribute keyword by adopting the action attribute to obtain a second action animation model;
the repeated editing submodule is used for editing the multiple groups of first action animation models into multiple groups of second action animation models by repeating the steps;
and the animation video production submodule is used for producing animation videos by adopting the plurality of groups of second action animation models.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
An embodiment of the present invention further provides an apparatus, including:
the animation video production method comprises a processor, a memory and a computer program which is stored on the memory and can run on the processor, wherein when the computer program is executed by the processor, each process of the animation video production method embodiment is realized, the same technical effect can be achieved, and in order to avoid repetition, the details are not repeated.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements each process of the above-mentioned embodiment of the method for creating an animation video, and can achieve the same technical effect, and is not described here again to avoid repetition.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method for producing the animation video and the device for producing the animation video provided by the invention are described in detail, a specific example is applied in the text to explain the principle and the implementation mode of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (9)

1. A method of producing an animated video, the method comprising:
establishing an action animation model library, wherein the action animation model library comprises a plurality of groups of action animation models;
acquiring text content;
sequentially extracting a plurality of groups of action keywords and attribute keywords corresponding to the action keywords in the text content;
matching the plurality of groups of action keywords with the action animation model library in sequence, and determining a plurality of groups of first action animation models from the plurality of groups of action animation models;
and sequentially making the plurality of groups of first action animation models into animation videos according to the attribute keywords.
2. The method of claim 1, wherein the step of building a library of action animation models comprises:
acquiring action data;
making an initial action animation model by adopting the action data;
acquiring an action label corresponding to the initial action animation model;
generating an action animation model by adopting the initial action animation model and the action label;
repeating the steps to obtain a plurality of groups of action animation models;
and establishing an action animation model library by adopting the plurality of groups of action animation models.
3. The method according to claim 1, wherein the step of sequentially extracting the plurality of sets of action keywords and the corresponding attribute keywords from the text content comprises:
dividing the text content into a plurality of sentences according to punctuation marks;
sequentially recognizing verbs from the sentences as action keywords;
and identifying the words associated with the verbs as attribute keywords corresponding to the action keywords.
4. The method according to claim 1, wherein the step of sequentially animating the sets of first motion animation models as animated videos according to the attribute keywords comprises:
determining action attributes by adopting the attribute keywords;
editing a first action animation model corresponding to the attribute keywords by adopting the action attributes to obtain a second action animation model;
repeating the steps to edit the plurality of groups of first action animation models into a plurality of groups of second action animation models;
and adopting the plurality of groups of second action animation models to produce animation videos.
5. An apparatus for producing an animated video, the apparatus comprising:
the animation model library establishing module is used for establishing an action animation model library, and the action animation model library comprises a plurality of groups of action animation models;
the text content acquisition module is used for acquiring text content;
the action keyword and attribute keyword extraction module is used for sequentially extracting a plurality of groups of action keywords in the text content and attribute keywords corresponding to the action keywords;
the first action animation model determining module is used for matching the plurality of groups of action keywords with the action animation model library in sequence and determining a plurality of groups of first action animation models from the plurality of groups of action animation models;
and the animation video production module is used for producing the multiple groups of first action animation models into animation videos according to the attribute keywords in sequence.
6. The apparatus of claim 5, wherein the animation model library creation module comprises:
the action data acquisition submodule is used for acquiring action data;
the initial action animation model making submodule is used for making an initial action animation model by adopting the action data;
the action tag obtaining submodule is used for obtaining an action tag corresponding to the initial action animation model;
the action animation model generation submodule is used for generating an action animation model by adopting the initial action animation model and the action label;
the repetition submodule is used for repeating the above steps to obtain a plurality of groups of action animation models;
and the action animation model base establishing submodule is used for establishing an action animation model base by adopting the plurality of groups of action animation models.
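The library-building flow of claim 6 can be sketched as a label-keyed store: each initial model is produced from motion data, tagged with an action label, and added to the library so that action keywords can later be matched against the labels. The dict-of-dicts representation and the placeholder motion data are assumptions for illustration.

```python
def build_model_library(clips):
    """Build an action animation model library from (motion_data, label)
    pairs, per claim 6: initial model + action label = library entry."""
    library = {}
    for motion_data, label in clips:
        model = {"data": motion_data, "label": label}  # initial model with its tag
        library[label] = model
    return library

def match_keyword(library, keyword):
    # The matching step of claims 1/5: look the action keyword up
    # against the action labels; None means no first model was found.
    return library.get(keyword)
```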
7. The apparatus of claim 5, wherein the action keyword and attribute keyword extraction module comprises:
the text content segmentation submodule is used for segmenting the text content into a plurality of sentences according to punctuations;
a verb identification submodule, which is used for sequentially identifying verbs from the sentences as action keywords;
and the verb associated word identification submodule is used for identifying the word associated with the verb as the attribute keyword corresponding to the action keyword.
8. The apparatus of claim 5, wherein the animated video production module comprises:
the action attribute determining submodule is used for determining action attributes from the attribute keywords;
the second action animation model generation submodule is used for editing the first action animation models corresponding to the attribute keywords according to the action attributes to obtain second action animation models;
the repeated editing submodule is used for repeating the above steps to edit the plurality of groups of first action animation models into a plurality of groups of second action animation models;
and the animation video production submodule is used for producing an animation video from the plurality of groups of second action animation models.
9. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the animation video production method according to any one of claims 1 to 4.
CN202110560177.7A 2021-05-21 2021-05-21 Animation video production method and device Pending CN113313792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110560177.7A CN113313792A (en) 2021-05-21 2021-05-21 Animation video production method and device


Publications (1)

Publication Number Publication Date
CN113313792A true CN113313792A (en) 2021-08-27

Family

ID=77374102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110560177.7A Pending CN113313792A (en) 2021-05-21 2021-05-21 Animation video production method and device

Country Status (1)

Country Link
CN (1) CN113313792A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090147009A1 (en) * 2005-09-21 2009-06-11 Matsushita Electric Industrial Co., Ltd. Video creating device and video creating method
CN106504304A (en) * 2016-09-14 2017-03-15 厦门幻世网络科技有限公司 A kind of method and device of animation compound
CN108986191A (en) * 2018-07-03 2018-12-11 百度在线网络技术(北京)有限公司 Generation method, device and the terminal device of figure action
CN109493402A (en) * 2018-11-09 2019-03-19 网易(杭州)网络有限公司 A kind of production method and device of plot animation
CN112004163A (en) * 2020-08-31 2020-11-27 北京市商汤科技开发有限公司 Video generation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110020437B (en) Emotion analysis and visualization method combining video and barrage
CN111582241B (en) Video subtitle recognition method, device, equipment and storage medium
KR20190116199A (en) Video data processing method, device and readable storage medium
CN109618236B (en) Video comment processing method and device
CN111275784B (en) Method and device for generating image
Wang et al. 3D human motion editing and synthesis: A survey
CN111524593B (en) Medical question-answering method and system based on context language model and knowledge embedding
CN110750996B (en) Method and device for generating multimedia information and readable storage medium
CN115497448A (en) Method and device for synthesizing voice animation, electronic equipment and storage medium
CN116188250A (en) Image processing method, device, electronic equipment and storage medium
CN113572981B (en) Video dubbing method and device, electronic equipment and storage medium
CN111862061A (en) Method, system, device and medium for evaluating aesthetic quality of picture
CN113313792A (en) Animation video production method and device
KR20240013613A (en) Method for generating AI human 3D motion only with video and its recording medium
CN115909390B (en) Method, device, computer equipment and storage medium for identifying low-custom content
CN115100581B (en) Video reconstruction model training method and device based on text assistance
CN112309181A (en) Dance teaching auxiliary method and device
Das et al. Storytube-generating 2d animation for a short story
CN112989114B (en) Video information generation method and device applied to video screening
CN109582296B (en) Program representation method based on stack enhanced LSTM
KR20240013612A (en) Apparatus for generating artificial intelligence-based three-dimensional motion matching sound and text and its operation method
KR20240013610A (en) Method and device for providing image-based ai human and motion generation sevice
KR20240013611A (en) Apparatus and method for generating a full 3D motion by reconstructing an omitted body part of an image
CN114170557A (en) Method and device for enhancing lip language data set, computer equipment and storage medium
CN114091662A (en) Text image generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination