CN118537464A - Animation generation method, device, electronic equipment and computer readable storage medium - Google Patents
Info
- Publication number
- CN118537464A (application number CN202310185255.9A)
- Authority
- CN
- China
- Prior art keywords
- animation
- information
- text information
- text
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/211—Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiments of the present application disclose an animation generation method, an animation generation device, an electronic device, and a computer readable storage medium. In the embodiments, input text information is acquired; semantic recognition is performed on the text information to display a recognition result, where the recognition result includes key information recognized based on natural language semantics and the key information includes a plurality of pieces of animation element feature information; and, in response to an animation generation operation for the text information, a target animation corresponding to the text information is generated, the target animation being generated according to animation materials matched with the animation element feature information. The embodiments of the application can simplify the operation of generating an animation and improve the efficiency of creating an animation.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to an animation generation method, an animation generation device, an electronic device, and a computer readable storage medium.
Background
With the development of science and technology, more and more users enjoy watching animations, and the creation and editing of animations have become a popular research topic in the animation field.
Currently, the process of creating an animation is typically as follows: the user manually produces animation resources such as the script, the characters, and the backgrounds corresponding to the script, and then generates the animation from these resources. This process is cumbersome, the creation efficiency is low, and the cost is high.
Disclosure of Invention
The embodiments of the application provide an animation generation method, an animation generation device, an electronic device, and a computer readable storage medium, which can solve the technical problems of a complex animation generation process and low efficiency.
The embodiment of the application provides an animation generation method, which comprises the following steps:
acquiring input text information;
Carrying out semantic recognition on the text information to display a recognition result, wherein the recognition result comprises key information recognized based on natural language semantics, and the key information comprises a plurality of animation element characteristic information;
And generating a target animation corresponding to the text information in response to an animation generation operation for the text information, wherein the target animation is generated according to an animation material matched with the animation element characteristic information.
Accordingly, an embodiment of the present application provides an animation generating apparatus, including:
The acquisition module is used for acquiring the input text information;
The display module is used for carrying out semantic recognition on the text information so as to display a recognition result, wherein the recognition result comprises key information recognized based on natural language semantics, and the key information comprises a plurality of animation element characteristic information;
And the generation module is used for responding to the animation generation operation of the text information and generating a target animation corresponding to the text information, wherein the target animation is generated according to the animation material matched with the animation element characteristic information.
Optionally, the generating module is specifically configured to perform:
responding to the animation generation operation aiming at the text information, and screening a plurality of animation element characteristic information from the key information;
And determining the animation materials matched with the feature information of each animation element, and generating a target animation corresponding to the text information according to the animation materials matched with the feature information of each animation element.
Optionally, the generating module is specifically configured to perform:
Determining the type of the animation material corresponding to the text information according to the key information;
acquiring an animation material set corresponding to the animation material type;
And screening the animation materials matched with the feature information of each animation element from the animation material set.
Optionally, the generating module is specifically configured to perform:
Screening a plurality of first candidate animation materials matched with the feature information of each animation element from the animation material set;
determining emotion attribute information corresponding to the text information according to the key information;
And screening the animation materials matched with the feature information of each animation element from the first candidate animation materials according to the emotion attribute information.
Optionally, the generating module is specifically configured to perform:
acquiring attribute information of an input object of the text information;
and screening the animation materials matched with the characteristic information of each animation element and the attribute information from the animation material set.
Optionally, the generating module is specifically configured to perform:
screening an initial animation material subset matched with the attribute information from the animation material set;
and screening out the animation materials matched with the characteristic information of each animation element from the initial animation material subset.
Optionally, the generating module is specifically configured to perform:
determining a target object matched with the input object according to the attribute information;
And taking the corresponding subset of the target objects in the animation material set as an initial animation material subset matched with the attribute information.
Optionally, the generating module is specifically configured to perform:
Acquiring a history matching record;
and using the history animation material corresponding to the history feature information matched with each animation element feature information in the history matching record as the animation material matched with each animation element feature information.
Optionally, the generating module is specifically configured to perform:
determining time axis information of the animation materials matched with the feature information of each animation element;
And displaying the animation materials matched with the feature information of each animation element on the time axis information to obtain the target animation corresponding to the text information.
Optionally, the display module is specifically configured to perform:
carrying out semantic recognition on the text information to obtain key information corresponding to the text information, wherein the key information comprises reference information and a plurality of animation element characteristic information;
determining a reference object corresponding to the reference information;
and determining a recognition result according to the reference information, the reference object and the animation element characteristic information, and displaying the recognition result.
Optionally, the display module is specifically configured to perform:
Carrying out semantic recognition on the text information to obtain key information corresponding to the text information, wherein the key information comprises fuzzy description information and a plurality of animation element characteristic information;
predicting the fuzzy description information to obtain prediction information corresponding to the fuzzy description information;
And determining a recognition result according to the fuzzy description information, the prediction information and the animation element characteristic information, and displaying the recognition result.
Optionally, the display module is specifically configured to perform:
determining scene attribute information corresponding to the text information according to the key information;
and predicting prediction information corresponding to the fuzzy description information according to the scene attribute information and the fuzzy description information.
Optionally, the display module is further configured to perform:
performing storyboard shot detection on the target animation to obtain each storyboard shot corresponding to the target animation;
And displaying the target animation and the segment animation element feature information corresponding to each storyboard shot.
Optionally, the animation generating device further includes:
a modification module for performing:
Responding to an editing operation on the segment animation element feature information corresponding to a storyboard shot, editing the storyboard shot to obtain an edited storyboard shot;
and generating the edited animation corresponding to the text information according to the edited storyboard shot.
In addition, the embodiment of the application also provides electronic equipment, which comprises a processor and a memory, wherein the memory stores a computer program, and the processor is used for running the computer program in the memory to realize the animation generation method provided by the embodiment of the application.
In addition, the embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program is suitable for being loaded by a processor to execute any one of the animation generation methods provided by the embodiment of the application.
In addition, the embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program realizes any animation generation method provided by the embodiment of the application when being executed by a processor.
In the embodiments of the application, input text information is acquired, and semantic recognition is performed on the text information to display a recognition result, where the recognition result includes key information recognized based on natural language semantics and the key information includes a plurality of pieces of animation element feature information. In response to an animation generation operation for the text information, a target animation corresponding to the text information is generated according to the animation materials matched with the animation element feature information. A user can therefore obtain the target animation corresponding to the text information simply by inputting the text information, without performing any other operation, which simplifies the operation of generating an animation, improves the efficiency of creating an animation, and reduces the cost.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a scene of an animation generation process provided by an embodiment of the present application;
FIG. 2 is a flow chart of an animation generation method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a target animation provided by an embodiment of the present application;
FIG. 4 is a flowchart of another animation generation method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an animation generation method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an animation generating device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides an animation generation method, an animation generation device, electronic equipment and a computer readable storage medium. The animation generation device may be integrated in an electronic device, which may be a server or a terminal.
The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN) services, big data, and artificial intelligence platforms.
Moreover, a plurality of servers may be organized into a blockchain, with each server being a node on the blockchain.
The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
For example, as shown in fig. 1, the terminal acquires input text information and performs semantic recognition on the text information to display a recognition result, the recognition result includes key information recognized based on natural language semantics, the key information includes a plurality of animation element feature information, and the plurality of animation element feature information is transmitted to the server in response to an animation generation operation for the text information. The server determines the animation materials matched with the feature information of each animation element and returns the animation materials matched with the feature information of each animation element to the terminal. And the terminal generates a target animation corresponding to the text information according to the animation materials matched with the feature information of each animation element.
In addition, "plurality" in the embodiments of the present application means two or more. "first" and "second" and the like in the embodiments of the present application are used for distinguishing descriptions and are not to be construed as implying relative importance.
Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science that integrates linguistics, computer science, and mathematics. Research in this field involves natural language, i.e., the language people use every day, so it is closely related to research in linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graph techniques, and the like.
The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
In this embodiment, the animation generation method of the present application will be described from the viewpoint of the animation generation device, and for convenience, the animation generation device will be described in detail below as integrated in a terminal, that is, the terminal will be used as an execution subject.
Referring to fig. 2, fig. 2 is a flowchart of an animation generation method according to an embodiment of the application. The animation generation method may include:
S201, acquiring input text information.
Text information refers to information that describes the content of an animation and is composed of characters, which is the basis for generating a target animation. The text information may include at least one sentence.
Another terminal can send the input text information to the terminal, so that the terminal acquires the input text information. Alternatively, the terminal can display a text input interface and acquire the input text information in response to an editing operation on the text input interface; in this case, the terminal may acquire the input text information through an input device in response to the editing operation.
The text input interface may be in a client, where the existence form of the client may be set according to the actual situation, for example, the client may be an application program, an applet or a web page, and embodiments of the present application are not limited herein.
Input devices refer to tools for information interaction between a user and an electronic device. The input device may include at least one of a keyboard, a mouse, a camera, a scanner, and a voice capture device.
When the input device is a camera, the terminal can respond to the editing operation of the text input interface to identify and extract the image acquired by the camera, so that the input text information is acquired. When the input device is a voice acquisition device, the terminal can respond to the editing operation of the text input interface to recognize and convert the voice acquired by the voice acquisition device, so as to obtain text information corresponding to the acquired voice, wherein the text information corresponding to the acquired voice is the input text information.
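For illustration only, and not forming part of the claimed embodiments, the different input devices above can be pictured as a simple dispatch that converts raw input into text information. In the sketch below, `ocr_image` and `transcribe_speech` are hypothetical placeholders for an OCR engine and a speech recognizer; the embodiment does not prescribe concrete services.

```python
# Illustrative sketch only: routing different input devices to text information.
# ocr_image() and transcribe_speech() are hypothetical placeholders for an OCR
# engine and a speech recognizer; they are not part of this embodiment.

def ocr_image(image_bytes: bytes) -> str:
    raise NotImplementedError("plug in a concrete OCR service here")

def transcribe_speech(audio_bytes: bytes) -> str:
    raise NotImplementedError("plug in a concrete speech recognizer here")

def acquire_text_information(device: str, payload) -> str:
    if device == "keyboard":   # text typed directly on the text input interface
        return payload
    if device == "camera":     # recognize and extract text from a captured image
        return ocr_image(payload)
    if device == "voice":      # convert captured speech into text information
        return transcribe_speech(payload)
    raise ValueError(f"unsupported input device: {device}")
```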
S202, carrying out semantic recognition on the text information to display a recognition result, wherein the recognition result comprises key information recognized based on natural language semantics, and the key information comprises a plurality of animation element characteristic information.
Semantic recognition is one of the important components of natural language processing (NLP) technology: the meaning of each word in the text information is recognized, and the content expressed by the text information is determined according to the meaning of each word.
The key information refers to information in the text information, which is effective in determining the content expressed by the text information, and may exist in the form of words or sentences. The embodiments of the present application are not limited herein.
For example, if the text information is "a tortoise and a rabbit in a forest", the key information of the text information refers to "forest", "a tortoise" and "rabbit".
Animation element feature information refers to information corresponding to an essential element of the target animation. For example, the animation element feature information may be one of character feature information, action feature information, expression feature information, background feature information, prop feature information, place feature information, and dialogue feature information of an animation.
In some embodiments, semantic recognition is performed on text information to reveal recognition results, including:
performing sentence breaking processing on the text information to obtain each sentence corresponding to the text information;
Carrying out semantic recognition on each sentence to obtain key information corresponding to the text information;
and determining the identification result according to the key information, and displaying the identification result.
The sentence breaking process refers to a process of separating text information so as to facilitate understanding. For example, the text information is "a tortoise and a rabbit in a forest are racing" and the sentence "a tortoise and a rabbit in a forest are racing" can be obtained after the text information is subjected to sentence breaking.
After obtaining the key information, the terminal can display the key information as the recognition result. Alternatively, the terminal can display the text information together with the key information as the recognition result. As a further alternative, the terminal can determine the user's expressed intention corresponding to the text information according to the key information, and then display the key information and that intention as the recognition result.
For example, if the text information is "a tortoise and a rabbit in a forest are racing", the key information corresponding to the text information may be "forest", "a", "tortoise", "a rabbit", "they" and "racing", and the user expression intention of the text information is "the tortoise and the rabbit are racing", and then the user expression intention corresponding to the key information and the text information is displayed as the recognition result.
After the terminal acquires the text information, if the text information has been manually punctuated by the input object, the terminal may first check the sentence breaks. If the text information is broken into sentences correctly, the terminal can perform sentence breaking directly according to the existing sentence breaks; if the text information contains incorrect sentence breaks, the terminal can re-break the incorrectly punctuated sentences in the correct way, thereby obtaining each sentence corresponding to the text information.
If the text information has not been manually punctuated by the input object, the terminal can directly perform sentence breaking on the text information.
A correct sentence-breaking mode is one that makes the text information easy to understand correctly after sentence breaking, and an incorrect sentence-breaking mode is one that hinders the understanding of the text information after sentence breaking.
For example, the text information is "a tortoise and a rabbit are in a forest, they are in race", the text information is manually broken sentence by a user, and the manual broken sentence processing mode of the text information by the user is a correct broken sentence mode, then the broken sentence processing is directly carried out on the text information according to the broken sentence mode of the text information, so that the sentence "the tortoise and the rabbit are in the forest" and the sentence "they are in race" are obtained.
For another example, if the user's manual sentence breaking of the text information is incorrect, the terminal re-performs sentence breaking on the incorrectly punctuated sentences according to the correct sentence-breaking mode, thereby obtaining each sentence corresponding to the text information.
The terminal can store a correct sentence-breaking mode in advance, so that the terminal can perform sentence-breaking processing on the text information according to the correct sentence-breaking mode.
Alternatively, for a sentence whose sentence breaking is incorrect, the terminal may display the incorrectly punctuated sentence in a first highlighting mode (for example, highlighting) together with the correct sentence-breaking mode for that sentence. Then, in response to a confirmation operation by the input object of the text information, the terminal re-performs sentence breaking on the incorrectly punctuated sentence according to the correct sentence-breaking mode; or, in response to a cancellation operation by the input object, the terminal keeps the original sentence breaking of that sentence, that is, the incorrect sentence breaking is not modified.
After obtaining each sentence of the text information, the terminal can perform semantic recognition on each sentence to obtain the key information corresponding to the text information. The terminal may perform semantic recognition on the sentences individually, or may perform semantic recognition on the sentences according to a preset sentence order, where the preset sentence order may be a predefined order or the order in which the sentences appear in the text information.
Alternatively, the process of semantic recognition for each sentence may be: the sentence is subjected to word segmentation processing to obtain each word corresponding to the sentence, then the part of speech of each word of the sentence is analyzed to obtain the part of speech of each word (the part of speech of the word can be verb, noun and the like), and then key information is screened out from each word according to the part of speech of each word.
Or, the process of performing semantic recognition on each sentence may be as follows: the terminal first identifies the sentence components of each sentence to obtain, for example, a subject-predicate-object sentence, a time feature sentence, or a place feature sentence, and then recognizes the meanings of these sentences to obtain the key information corresponding to the text information.
For example, for the text information "Long ago, the tortoise and the rabbit were racing", the corresponding sentences are "long ago" and "the tortoise and the rabbit were racing". Sentence component analysis is performed on the two sentences: "long ago" is determined to be a time feature sentence and "the tortoise and the rabbit were racing" a subject-predicate-object sentence. Meaning recognition is then performed on the time feature sentence to obtain the key information "time feature information", and on the subject-predicate-object sentence to obtain the key information "character feature information (the tortoise and the rabbit)" and "action feature information (racing)".
In the embodiment of the application, the text information is subjected to sentence breaking processing to obtain each sentence corresponding to the text information, then each sentence is subjected to semantic recognition to obtain the key information corresponding to the text information, the key information is used as a recognition result, and the recognition result is displayed, so that the text information is not required to be manually subjected to sentence breaking processing by an input object, or when the text information is subjected to sentence breaking by the input object, the sentence breaking processing can be carried out again on the sentence with the sentence breaking error.
In other embodiments, semantic recognition of the text information to reveal recognition results includes:
Carrying out semantic recognition on the text information to obtain key information corresponding to the text information, wherein the key information comprises reference information and a plurality of animation element characteristic information;
determining a reference object corresponding to the reference information;
and determining a recognition result according to the reference information, the reference object and the animation element feature information, and displaying the recognition result.
Reference information is usually present in the text information; for example, the reference information may be "this", "that", "those", "it", "her" or "he". The terminal needs to know explicitly which reference object the reference information refers to when generating the target animation. Therefore, in the embodiment of the application, after the key information in the text information is obtained, if the key information contains reference information, the reference object corresponding to the reference information is determined first, and then the reference information, the reference object and the animation element feature information are taken as the recognition result and displayed.
For example, the text information is "rabbit and tortoise are good friends, they go to school together every day", "they are reference information, reference object of reference information is" rabbit and tortoise ", at this time, the recognition result is" rabbit and tortoise are good friends, they (rabbit and tortoise) go to school together every day.
After the recognition result is displayed, if the reference object determined by the terminal for the reference information is wrong, the input object can modify it. In this case, the terminal can, in response to a modification operation on the reference object, change the reference object corresponding to the reference information to the object specified by the modification operation. This improves the accuracy of the reference object, so that the resulting target animation is more accurate.
The reference object determined by the terminal being wrong means that the reference object the terminal determined for the reference information differs from the object to which the input object intends the reference information to refer. For example, the text information is "The rabbit and the tortoise go to school together; it is very happy", where the reference information is "it" and the input object intends "it" to refer to the rabbit, but the terminal determines the reference object corresponding to "it" to be the tortoise. In this case, the reference object determined by the terminal is wrong.
The reference object corresponding to the reference information may be the feature information of the animation element or may not be the feature information of the animation element.
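For illustration only, reference resolution can be pictured as the simple heuristic below, which guesses that a pronoun refers to the characters mentioned so far. The character vocabulary and the heuristic are assumptions made for this sketch; a production system would typically use a trained coreference model.

```python
# Illustrative heuristic for resolving reference information ("they", "it", ...)
# to previously mentioned characters. The character vocabulary and the heuristic
# are assumptions for this sketch only.

PRONOUNS = {"they", "it", "he", "she"}
CHARACTERS = {"rabbit", "tortoise"}   # illustrative character vocabulary

def resolve_references(sentences: list[list[str]]) -> dict[str, list[str]]:
    """sentences: tokenized sentences; returns pronoun -> guessed reference objects."""
    mentioned: list[str] = []
    resolution: dict[str, list[str]] = {}
    for tokens in sentences:
        for tok in tokens:
            if tok in PRONOUNS:
                resolution[tok] = list(mentioned)   # guess: all characters seen so far
            elif tok in CHARACTERS:
                mentioned.append(tok)
    return resolution

print(resolve_references([["rabbit", "and", "tortoise", "are", "good", "friends"],
                          ["they", "go", "to", "school", "together"]]))
# -> {'they': ['rabbit', 'tortoise']}
```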
In other embodiments, semantic recognition of the text information to reveal recognition results includes:
Carrying out semantic recognition on the text information to obtain key information corresponding to the text information, wherein the key information comprises fuzzy description information and a plurality of animation element characteristic information;
predicting the fuzzy description information to obtain prediction information corresponding to the fuzzy description information;
And determining a recognition result according to the fuzzy description information, the prediction information and the animation element characteristic information, and displaying the recognition result.
Fuzzy description information refers to information that cannot be explicitly determined. When fuzzy description information exists in the text information, the text information is difficult to understand, which may cause errors in the subsequent generation of the target animation. Therefore, in the embodiment of the application, in order to improve the accuracy of the target animation, the terminal can predict the fuzzy description information to obtain prediction information corresponding to the fuzzy description information, and then take the fuzzy description information, the prediction information and the animation element feature information as the recognition result and display the recognition result.
For example, the text information includes "story occurs in the northern song year" and "story occurs in the northern song year" which is an explicit time feature information, and then the subsequent time in the text can be set according to the time feature information. For another example, the text information includes "ancient time", where "ancient time" and "village" are key information, and the key information "ancient time" is time feature information that cannot be confirmed, that is, "ancient time" is fuzzy description information, and the terminal may determine the prediction information corresponding to "ancient time" as "virtual time", where the recognition result is "ancient time (virtual time) and" village ".
Optionally, after presenting the prediction information, the terminal may respond to an information modification operation on the prediction information, and modify the prediction information corresponding to the fuzzy description information into information corresponding to the information modification operation, so that the input object may modify the prediction information corresponding to the fuzzy description information. For example, the prediction information "virtual time" in the above case may be modified to be "virtual time-no history background".
In other embodiments, predicting the fuzzy description information to obtain prediction information corresponding to the fuzzy description information includes:
determining scene attribute information corresponding to the text information according to the key information;
and predicting prediction information corresponding to the fuzzy description information according to the scene attribute information and the fuzzy description information.
Scene attribute information corresponding to the text information refers to information indicating the nature of the scene corresponding to the text information. The nature of the scene may be its emotional nature; that is, the scene attribute information may include emotion attribute information. Emotion attribute information refers to information expressing the emotion related to the text information, and may be the tone or atmosphere corresponding to the scene of the text information: the tone refers to the emotion that the input object wants to express through the scene, and the atmosphere refers to the strong overall feeling conveyed by the scene.
Alternatively, the nature of the scene corresponding to the text information may be its style; that is, the scene attribute information may include style attribute information. Style attribute information refers to the picture characteristics of the scene corresponding to the text information, and may include the brightness, virtual-real contrast, color and the like of the scene.
For example, suppose the scene attribute information is emotion attribute information and the text information is "After the test paper was handed back, the score turned out to be bad, and the mood was rather so-so". Here, "so-so" is fuzzy description information. According to the key information "score", "bad" and "mood", the emotion attribute information corresponding to the text information can be determined to be sad, and the prediction information corresponding to "so-so" can therefore be inferred to be "sad".
For another example, suppose the scene attribute information is style attribute information and the text information is "In the forest, the sun is shining brightly, and the tortoise and the rabbit are racing". Here, "in the forest" is fuzzy description information. According to the key information "the sun is shining brightly", the scene corresponding to the text information can be determined to be a vibrant scene, and the prediction information corresponding to "in the forest" can be inferred to be "in a vibrant forest".
It should be noted that, if the scene attribute information corresponding to the text information exists in the key information, the process of determining the scene attribute information corresponding to the text information according to the key information may be: and screening scene attribute information corresponding to the text information from the key information.
For example, the text information is "having a ghost story, and is in a forest", the key information corresponding to the text information is "ghost story" and "forest", and "ghost story" is the scene attribute information corresponding to the text information.
If the scene attribute information corresponding to the text information does not exist in the key information, the process of determining the scene attribute information corresponding to the text information according to the key information may be: and determining the content expressed by the text information according to the key information, and determining scene attribute information corresponding to the text information according to the content expressed by the text information.
For example, the text information is "in forest, sunshine is bright", the tortoise and the rabbit are racing ", the key information corresponding to the text information is" forest "," sunshine is bright "," tortoise "," rabbit "and" racing "," forest "is fuzzy description information, the content expressed by the text information can be determined to be" sunshine is bright when the tortoise and the rabbit race in forest "according to the key information, and the scene attribute information corresponding to the text information is a Bo-Bo scene.
In the embodiment of the application, the scene attribute information corresponding to the text information is determined according to the key information, and then the prediction information corresponding to the fuzzy description information is predicted according to the scene attribute information and the fuzzy description information, so that the accuracy of the prediction information is improved.
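A purely illustrative sketch of this prediction step is shown below. The cue table mapping key information to an emotion attribute is an assumption made for this example, since the embodiment does not fix a particular prediction model.

```python
# Illustrative sketch of predicting fuzzy description information from scene
# attribute information. The cue table is an assumption for this example only.

EMOTION_CUES = {"bad": "sad", "failed": "sad", "sunny": "cheerful", "bright": "cheerful"}

def scene_attribute(key_info: list[str]) -> str | None:
    """Derive emotion attribute information from the key information, if any cue matches."""
    for word in key_info:
        if word in EMOTION_CUES:
            return EMOTION_CUES[word]
    return None

def predict_fuzzy(fuzzy_term: str, key_info: list[str]) -> str:
    attribute = scene_attribute(key_info)
    # fall back to the fuzzy term itself when no scene attribute is recognized
    return f"{fuzzy_term} ({attribute})" if attribute else fuzzy_term

print(predict_fuzzy("so-so", ["score", "bad", "mood"]))
# -> 'so-so (sad)'
```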
S203, generating a target animation corresponding to the text information in response to the animation generation operation on the text information, wherein the target animation is generated according to the animation material matched with the feature information of the animation element.
After the terminal displays the recognition result, if the user modifies the recognition result, the terminal generates, in response to the animation generation operation for the text information, the target animation corresponding to the text information according to the modified recognition result; if the user does not modify the recognition result, the terminal generates the target animation corresponding to the text information according to the recognition result as displayed.
The target animation corresponding to the text information is generated according to the modified recognition result, which may refer to the target animation corresponding to the text information generated according to the feature information of the animation element in the modified recognition result. The generation of the target animation corresponding to the text information according to the recognition result may refer to the generation of the target animation corresponding to the text information according to the feature information of the animation element in the recognition result.
Optionally, generating a target animation corresponding to the text information according to the animation materials matched with the feature information of each animation element, including:
determining time axis information of the animation materials matched with the feature information of each animation element;
And displaying the animation materials matched with the feature information of each animation element on the time axis information to obtain the target animation corresponding to the text information.
After obtaining the animation materials matched with the feature information of each animation element, the terminal can determine the time axis information of the animation materials matched with the feature information of each animation element according to the key information, and then display the animation materials matched with the feature information of each animation element on the time axis information, so as to obtain the target animation corresponding to the text information.
The terminal can display the animation materials matched with the feature information of each animation element on the time axis information through the preview editing engine to obtain the target animation corresponding to the text information. The preview editing engine may be selected according to practical situations, for example, the preview editing engine may be a Unity engine or a UE engine, which is not limited herein.
Alternatively, the process of displaying the animation material matched with the feature information of each animation element on the time axis information by the preview editing engine may be:
The terminal sends the animation materials matched with the feature information of each animation element to a preview editing engine, the loading resources of the animation materials matched with the feature information of each animation element are determined through the preview editing engine, the animation materials matched with the feature information of each animation element are loaded according to the loading resources, then time axis information is generated according to key information, and the loaded animation materials matched with the feature information of each animation element are mounted on the time axis information to obtain target animation corresponding to the text information.
The loading resources may include picture resources, animation resources, voice resources, music resources, and the like. The loading resource may be cached in the terminal, or the terminal may also obtain the loading resource through a network request. After the terminal acquires the loading resource, after determining that the loading resource is not damaged, the path of the loading resource can be transferred to the preview editing engine, so that the preview editing engine can acquire the loading resource according to the path of the loading resource.
After obtaining the target animation, the terminal can preview the target animation through the preview editing engine, modify the target animation, or export the target animation as a target video. When the preview editing engine is a Unity engine, the recording and export capability of the Unity engine can be used to export the target animation directly as the target video, which facilitates subsequent distribution.
It should be noted that different animation materials may take different forms after loading. For example, when the animation material matched with the animation element feature information is dialogue voice information, the dialogue voice information is loaded as an audio resource, and when the animation material matched with the animation element feature information is an image, the image is loaded as a texture.
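Only to make the mounting step concrete, the following sketch organises loaded materials as timeline entries before handing them to a preview editing engine. This is a plain data-structure illustration under assumed field names, not the Unity or UE API.

```python
# Plain data-structure illustration of mounting loaded animation materials on
# time axis information. Field names are assumed for the example; this is not
# the Unity/UE API.

from dataclasses import dataclass

@dataclass
class TimelineEntry:
    start: float        # seconds from the beginning of the target animation
    duration: float
    kind: str           # e.g. "background", "character image", "dialogue voice"
    material_id: str

def build_timeline(matched_materials: list[dict]) -> list[TimelineEntry]:
    timeline: list[TimelineEntry] = []
    cursor = 0.0
    for m in matched_materials:
        timeline.append(TimelineEntry(cursor, m["duration"], m["kind"], m["id"]))
        cursor += m["duration"]   # naive sequential layout; a real layout follows the key information
    return timeline

print(build_timeline([{"duration": 2.0, "kind": "background", "id": "forest"},
                      {"duration": 3.5, "kind": "dialogue voice", "id": "rabbit_line_1"}]))
```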
According to the embodiment of the application, the animation materials matched with the feature information of each animation element are displayed on the time axis information through the preview editing engine to obtain the target animation corresponding to the text information, so that a user can quickly preview the target animation, modify it in real time, and export it as the target video with one click, which improves the efficiency of publishing the target video.
In the embodiment of the application, after acquiring the input text information, the terminal performs semantic recognition on the text information based on natural language processing to display the recognition result corresponding to the text information, and then, in response to the animation generation operation for the text information, automatically generates the target animation corresponding to the text information, the target animation being generated according to the animation materials matched with the animation element feature information. The input object can therefore obtain the target animation corresponding to the text information simply by inputting the text information, without performing any other operation and without requiring professional knowledge, which simplifies the operation of generating an animation, improves the convenience and efficiency of creating an animation, and reduces the cost.
In the related art, the input object needs to select the character pictures corresponding to the characters in the text information before inputting the text information. If a new character whose picture has not been selected appears while the text information is being input, the input object has to leave the text input interface, select a picture for the new character, and then return to the text input interface. As a result, the process of inputting the text information is interrupted, the operation is cumbersome, and the user experience is reduced.
In the embodiment of the application, after the terminal acquires the input text information and displays the recognition result corresponding to the text information (the recognition result including the animation element feature information), the terminal automatically generates, in response to the animation generation operation for the text information, the target animation corresponding to the text information, the target animation being generated according to the animation materials matched with the animation element feature information. The input object does not need to select character pictures corresponding to the characters in the text information before inputting the text information, so no new character with an unselected picture appears during input, the input object's writing process is uninterrupted and not broken by any interactive logic, the operation is simplified, and the user experience is improved. Moreover, the recognition result and the target animation are displayed to the input object, so that the input object perceives a visual result, which further improves the user experience.
Optionally, if the key information includes only animation element feature information, the terminal may directly determine the animation materials matched with the animation element feature information and generate the target animation corresponding to the text information according to those animation materials. If the key information includes other information in addition to the animation element feature information, the terminal may first screen the animation element feature information out of the key information, then determine the animation materials matched with the animation element feature information, and generate the target animation corresponding to the text information according to those animation materials.
That is, in response to the animation generation operation for the text information, the process of generating the target animation corresponding to the text information from the animation element feature information in the recognition result may be:
determining an animation material matched with the feature information of the animation element in response to an animation generation operation for the text information;
and generating a target animation corresponding to the text information according to the animation material matched with the feature information of the animation element.
Or in response to the animation generation operation for the text information, the process of generating the target animation corresponding to the text information according to the feature information of the animation element in the recognition result may also be:
Responding to animation generation operation aiming at text information, and screening out a plurality of animation element characteristic information from key information;
And determining the animation materials matched with the feature information of each animation element, and generating the target animation corresponding to the text information according to the animation materials matched with the feature information of each animation element.
Animation material refers to scattered image material that has not yet been assembled into a finished animation. The animation material matched with the animation element feature information may be animation material containing the animation element feature information. For example, if the animation element feature information is character feature information and the character is a rabbit, the animation material matched with the character feature information may be an image containing a rabbit; if the animation element feature information is prop feature information and the prop is a schoolbag, the animation material matched with the prop feature information may be an image containing a schoolbag.
When the feature information of the animation element is dialogue feature information, the animation material matching the feature information of the animation element may refer to dialogue voice information corresponding to the dialogue feature information.
Alternatively, the dialogue voice information corresponding to the dialogue feature information may be determined from the dialogue feature information through a speech synthesis technology (Text To Speech, TTS). The type of speech synthesis technology may be selected according to the actual situation, for example, an end-to-end neural network model or a conventional speech synthesis technology (which includes a front-end module and a back-end module), and is not limited herein.
When the dialogue voice information corresponding to the dialogue feature information is determined through the conventional speech synthesis technology, the process of determining the dialogue voice information may be:
Performing phoneme processing on the dialogue characteristic information to obtain a phoneme sequence corresponding to the dialogue characteristic information;
Extracting features of the phoneme sequence to obtain spectral features corresponding to the dialogue feature information;
And generating dialogue voice information corresponding to the dialogue characteristic information according to the spectrum characteristics.
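By way of illustration only, the following is a minimal, self-contained sketch of the front-end/back-end flow described above. The phoneme table, the spectral feature computation, and the waveform computation are toy placeholders introduced here for readability and are not part of the disclosed speech synthesis model.

```python
from typing import List

# Toy grapheme-to-phoneme table; a real front-end module would use a pronunciation lexicon.
PHONEME_TABLE = {"hello": ["HH", "AH", "L", "OW"], "there": ["DH", "EH", "R"]}

def text_to_phonemes(dialogue_text: str) -> List[str]:
    """Front end: convert the dialogue feature information into a phoneme sequence."""
    phonemes: List[str] = []
    for word in dialogue_text.lower().split():
        phonemes.extend(PHONEME_TABLE.get(word, list(word)))  # fall back to letters
    return phonemes

def phonemes_to_spectrum(phonemes: List[str]) -> List[List[float]]:
    """Back end, step 1: map the phoneme sequence to spectral features (fake 8-dimensional frames)."""
    return [[float(len(p))] * 8 for p in phonemes]

def spectrum_to_waveform(frames: List[List[float]]) -> List[float]:
    """Back end, step 2: a vocoder would synthesize the dialogue voice information from the spectral features."""
    return [sum(frame) / len(frame) for frame in frames]

dialogue_voice = spectrum_to_waveform(phonemes_to_spectrum(text_to_phonemes("hello there")))
```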
Alternatively, the process of determining the animation material that matches each animation element feature information may be:
acquiring a pre-stored preset animation material set;
Determining the similarity between the animation element feature information and each animation material in the preset animation material set;
and screening, from the preset animation material set, the animation materials whose similarity is equal to or greater than a preset similarity as the animation materials matched with the animation element feature information.
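As a non-limiting sketch, the similarity screening above can be expressed as follows; the cosine similarity measure and the example feature vectors are assumptions made here for illustration, since the embodiment does not fix how the feature information and materials are vectorized.

```python
from math import sqrt
from typing import List, Tuple

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def screen_matching_materials(feature_vec: List[float],
                              preset_material_set: List[Tuple[str, List[float]]],
                              preset_similarity: float = 0.8) -> List[str]:
    """Keep each preset animation material whose similarity to the animation element
    feature information is equal to or greater than the preset similarity."""
    return [name for name, material_vec in preset_material_set
            if cosine_similarity(feature_vec, material_vec) >= preset_similarity]

materials = [("rabbit_image", [0.9, 0.1]), ("schoolbag_image", [0.1, 0.9])]
print(screen_matching_materials([1.0, 0.0], materials))  # -> ['rabbit_image']
```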
However, the preset animation material set includes animation materials of various animation material types, so the number of animation materials it contains is large, and screening the animation materials matched with the animation element feature information out of the preset animation material set is therefore slow.
Thus, in some embodiments, determining the animation material that matches each animation element feature information comprises:
Determining the type of the animation material corresponding to the text information according to the key information;
Acquiring an animation material set corresponding to the animation material type;
And screening the animation materials matched with the characteristic information of each animation element from the animation material set.
The terminal may determine a story type (a story type may be, for example, a fairy tale story, a moral story, a life story, or a science fiction story) corresponding to the text information according to the key information, and then determine an animation material type corresponding to the text information according to the story type.
The preset story type and the preset animation material type can be pre-associated and stored in a preset mapping table, after the story type corresponding to the text information is acquired, the story type is matched with the preset story type in the preset mapping table, and the preset animation material type corresponding to the preset story type matched with the story type is used as the animation material type corresponding to the text information.
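A minimal sketch of such a lookup is given below; the concrete story types and animation material types in the table are hypothetical examples, not values fixed by the embodiment.

```python
# Hypothetical preset mapping table associating preset story types with preset animation material types.
PRESET_MAPPING_TABLE = {
    "fairy tale story": "cartoon material",
    "moral story": "cartoon material",
    "life story": "realistic material",
    "science fiction story": "sci-fi material",
}

def animation_material_type(story_type: str, default: str = "generic material") -> str:
    """Use the preset animation material type whose preset story type matches the given story type."""
    return PRESET_MAPPING_TABLE.get(story_type, default)

print(animation_material_type("fairy tale story"))  # -> 'cartoon material'
```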
The animation material matching each animation element feature information may exist in one animation material set, or the animation material matching each animation element feature information may exist in a different animation material set, and the embodiment of the present application is not limited herein.
Animation materials of various types exist; for example, when the character feature information is rabbit feature information, the matching animation materials may include both images of real rabbits and images of cartoon rabbits. In the embodiment of the application, the animation material type corresponding to the text information is therefore determined according to the key information, the animation material set corresponding to that animation material type is acquired, and the animation materials matched with each piece of animation element feature information are then screened out of that animation material set. In this way, the similarity only needs to be computed between the animation element feature information and the animation materials in this animation material set, whose number is smaller than that of the preset animation material set, which greatly reduces the time for obtaining the matched animation materials and increases the matching speed. Moreover, the animation materials matched with each piece of animation element feature information conform to the context of the text information, so the target animation finally generated from them better meets the input requirements.
It should be noted that there may be one or more animation materials in the animation material set whose similarity to the animation element feature information is equal to or greater than the preset similarity.
When there are a plurality of such animation materials in the animation material set, any one of them may be used as the animation material matched with the animation element feature information.
Alternatively, the animation materials matched with each piece of animation element feature information may be screened, according to the key information or the attribute information of the input object of the text information, out of the animation materials whose similarity is equal to or greater than the preset similarity; that is, the animation materials matched with each piece of animation element feature information may be screened out of the animation material set according to the key information or the attribute information of the input object of the text information. This further improves the matching degree between the target animation and the text information, so that the target animation better meets the requirements of the input object.
When the animation materials matched with the feature information of each animation element are screened out from the animation material set according to the key information, the process of screening out the animation materials matched with the feature information of each animation element from the animation material set can be as follows:
screening a plurality of first candidate animation materials matched with the characteristic information of each animation element from the animation material set;
determining emotion attribute information corresponding to the text information according to the key information;
And screening the animation materials matched with the characteristic information of each animation element from the first candidate animation materials according to the emotion attribute information.
When there are a plurality of animation materials in the animation material set whose similarity is equal to or greater than the preset similarity, those animation materials may be used as the first candidate animation materials. The text information may correspond to different kinds of emotion attribute information (the emotion attribute information may be the atmosphere or mood corresponding to the text information; for example, the atmosphere may be horror or happy). The animation materials matched with each piece of animation element feature information can therefore be screened out of the plurality of first candidate animation materials according to the emotion attribute information of the text information, which further improves the matching degree between the generated target animation and the text information, so that the target animation better meets the requirements of the input object.
For example, the animation element feature information is rabbit character feature information, and the animation material set includes an image containing a horror rabbit and an image containing a lovely rabbit; that is, the first candidate animation materials are the image containing the horror rabbit and the image containing the lovely rabbit. If the atmosphere of the text information is a horror atmosphere and the image containing the lovely rabbit were used as the animation material matched with the rabbit character feature information, the matching degree between the target animation generated from the lovely rabbit image and the text information would be low, and the requirement of the input object would not be met. In the application, the first candidate animation materials, namely the image containing the horror rabbit and the image containing the lovely rabbit, are first screened out of the animation material set, and the image containing the horror rabbit is then determined, according to the emotion attribute information of the text information, as the animation material matched with the rabbit character feature information, which improves the matching degree between the target animation generated from the horror rabbit image and the text information, better meets the requirement of the input object, and improves the user experience.
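A toy sketch of this emotion-based screening follows; the emotion tags on the candidate materials are assumed metadata introduced for illustration.

```python
from typing import Dict, List

def screen_by_emotion(first_candidates: List[Dict[str, str]], emotion: str) -> List[Dict[str, str]]:
    """From the first candidate animation materials, keep those whose emotion tag matches
    the emotion attribute information determined from the key information."""
    return [material for material in first_candidates if material.get("emotion") == emotion]

first_candidates = [{"name": "horror_rabbit_image", "emotion": "horror"},
                    {"name": "lovely_rabbit_image", "emotion": "happy"}]
print(screen_by_emotion(first_candidates, "horror"))  # keeps only the horror rabbit image
```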
When the animation materials matched with the feature information of each animation element are screened out from the animation material set according to the attribute information of the input object, the process of screening out the animation materials matched with the feature information of each animation element from the animation material set can be as follows:
acquiring attribute information of an input object of text information;
And screening the animation materials matched with the characteristic information of each animation element and the attribute information from the animation material set.
The attribute information of the input object may refer to information representing characteristics of the input object. It may include static attribute information, such as at least one of gender information and personality information of the input object, and dynamic attribute information, such as at least one of preference information, geographical position information, and age information of the input object.
The process of screening the animation materials matched with the feature information of each animation element and the attribute information from the animation material set may be:
Screening a plurality of second candidate animation materials matched with the characteristic information of each animation element from the animation material set;
and screening the animation materials matched with the attribute information from the second candidate animation materials.
In this case, there are a plurality of animation materials in the animation material set whose similarity is equal to or greater than the preset similarity, and these animation materials are used as the second candidate animation materials.
For example, when the input object prefers Peter Rabbit and the animation element feature information is rabbit character feature information, the plurality of second candidate animation materials may include images containing Peter Rabbit and images containing Dutch rabbits, and the images containing Peter Rabbit are then screened out of the plurality of second candidate animation materials as the animation materials matched with the rabbit character feature information.
Or the process of screening the animation materials matched with the characteristic information of each animation element and the attribute information from the animation material set can also be as follows:
screening an initial animation material subset matched with the attribute information from the animation material set;
and screening the animation materials matched with the characteristic information of each animation element from the initial animation material subset.
For example, when the input object prefers Peter Rabbit, Maine Coon cats and Ragdoll cats, and the animation element feature information is rabbit character feature information, the initial animation material subset may include images containing Peter Rabbit, images containing Maine Coon cats and images containing Ragdoll cats, and the images containing Peter Rabbit are then screened out of the initial animation material subset as the animation materials matched with the rabbit character feature information.
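For illustration, the two screening orders above can be sketched as follows; the filter callbacks are placeholders for whatever element-feature matching and attribute matching the embodiment uses. Both orders produce the same final set; screening by attribute first mainly reduces how many similarity computations are needed afterwards.

```python
from typing import Callable, Iterable, List, TypeVar

Material = TypeVar("Material")

def element_then_attribute(material_set: Iterable[Material],
                           matches_element: Callable[[Material], bool],
                           matches_attribute: Callable[[Material], bool]) -> List[Material]:
    """Order 1: screen second candidate materials by element feature first, then by attribute information."""
    second_candidates = [m for m in material_set if matches_element(m)]
    return [m for m in second_candidates if matches_attribute(m)]

def attribute_then_element(material_set: Iterable[Material],
                           matches_element: Callable[[Material], bool],
                           matches_attribute: Callable[[Material], bool]) -> List[Material]:
    """Order 2: screen an initial animation material subset by attribute information first, then by element feature."""
    initial_subset = [m for m in material_set if matches_attribute(m)]
    return [m for m in initial_subset if matches_element(m)]
```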
In the embodiment of the application, the attribute information of the input object of the text information is acquired first, and then the animation materials which are matched with the feature information of each animation element and are matched with the attribute information are screened out from the animation material set, so that the matching degree of the target animation generated according to the animation materials matched with the feature information of each animation element and the text information is further improved, and the target animation meets the requirement of the input object.
Optionally, to further increase the speed of obtaining the initial animation material subset, screening the initial animation material subset matched with the attribute information from the animation material set includes:
determining a target object matched with the input object according to the attribute information;
and taking the corresponding subset of the target object in the animation material set as an initial animation material subset matched with the attribute information.
A target object that matches an input object may be understood as an object that has some commonality with the input object. For example, the target object may be an object belonging to the same age group as the input object.
In the embodiment of the application, the subset corresponding to the target object matched with the input object in the animation material set is directly used as the initial animation material subset matched with the attribute information, without matching the attribute information against the animation materials in the animation material set one by one, which improves the speed of obtaining the initial animation material subset.
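A small sketch of this direct lookup is shown below; grouping the subsets by age group is only one assumed way of associating target objects with subsets.

```python
from typing import Dict, List

def initial_subset_for_input_object(subsets_by_target_object: Dict[str, List[str]],
                                    input_age: int) -> List[str]:
    """Directly take the subset pre-grouped for the matching target object (here, the same
    age group as the input object) instead of matching attribute information material by material."""
    age_group = "child" if input_age < 12 else "teen" if input_age < 18 else "adult"
    return subsets_by_target_object.get(age_group, [])

subsets = {"child": ["cartoon_rabbit_image"], "adult": ["realistic_rabbit_image"]}
print(initial_subset_for_input_object(subsets, 8))  # -> ['cartoon_rabbit_image']
```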
In other embodiments, to increase the speed of obtaining the animation material that matches each animation element feature information, the process of determining the animation material that matches each animation element feature information may be:
Acquiring a history matching record;
and taking the historical animation materials corresponding to the historical characteristic information matched with the characteristic information of each animation element in the historical matching record as the animation materials matched with the characteristic information of each animation element.
The history matching record may be a history matching record of the input object, or may be a history matching record of the client, that is, a history matching record of all objects recorded in the client.
In the embodiment of the application, the historical animation materials corresponding to the historical feature information matched with the feature information of each animation element in the historical matching record are directly used as the animation materials matched with the feature information of each animation element, so that the feature information of the animation elements is not required to be matched with the animation materials one by one, and the speed of obtaining the animation materials matched with the feature information of each animation element is improved.
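The reuse of history matching records can be sketched as a simple cache lookup; the dictionary layout and the fallback matcher here are assumptions made for illustration.

```python
from typing import Callable, Dict

def material_from_history(element_feature: str,
                          history_record: Dict[str, str],
                          match_normally: Callable[[str], str]) -> str:
    """Reuse the historical animation material when matching historical feature information exists;
    otherwise fall back to normal matching and record the new result for later reuse."""
    if element_feature in history_record:
        return history_record[element_feature]
    material = match_normally(element_feature)
    history_record[element_feature] = material
    return material

history = {"rabbit": "peter_rabbit_image"}
print(material_from_history("rabbit", history, lambda f: f + "_image"))    # hit: reuses the record
print(material_from_history("tortoise", history, lambda f: f + "_image"))  # miss: matches and records
```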
In other embodiments, to increase the traffic of the target animation, the process of screening the animation materials matched with each piece of animation element feature information out of the animation material set may be:
screening out a third candidate animation material matched with the characteristic information of each animation element from the animation material set;
determining heat information of a third candidate animation material;
And taking the third candidate animation material with the highest heat information as the animation material matched with the characteristic information of each animation element.
The heat information of a third candidate animation material may be obtained by dividing the number of times that third candidate animation material has been used by the total number of times all third candidate animation materials have been used; alternatively, the heat information of a third candidate animation material may be determined according to the traffic of the existing animation corresponding to it, where the higher the traffic of that existing animation, the higher the heat information of the third candidate animation material.
For example, the animation element feature information is rabbit character feature information, and the third candidate animation materials matched with it include an image containing Peter Rabbit, an image containing a rogue rabbit, and an image containing a blue rabbit; if the existing animation corresponding to the image containing Peter Rabbit has the highest traffic, the image containing Peter Rabbit is taken as the animation material matched with the animation element feature information.
In the embodiment of the application, the third candidate animation materials matched with each piece of animation element feature information are screened out of the animation material set, the heat information of the third candidate animation materials is then determined, and finally the third candidate animation material with the highest heat information is used as the animation material matched with the animation element feature information, which increases the traffic of the target animation generated from those animation materials.
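The popularity-based pick can be sketched as below; computing heat as a normalized use count is just the first of the two options described above, and the counts are made-up values.

```python
from typing import Dict, List

def pick_hottest_material(third_candidates: List[str], use_counts: Dict[str, int]) -> str:
    """Take the third candidate with the highest heat information, computed here as each
    candidate's use count divided by the total use count of all third candidates."""
    total = sum(use_counts.get(m, 0) for m in third_candidates) or 1
    return max(third_candidates, key=lambda m: use_counts.get(m, 0) / total)

candidates = ["peter_rabbit_image", "rogue_rabbit_image", "blue_rabbit_image"]
print(pick_hottest_material(candidates, {"peter_rabbit_image": 90, "rogue_rabbit_image": 8}))
```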
In other embodiments, after generating the target animation corresponding to the text information in response to the animation generation operation for the text information, the method further includes:
And displaying the target animation.
To facilitate modification of the target animation, in other embodiments, after generating the target animation corresponding to the text information in response to the animation generation operation for the text information, the method further includes:
Performing sub-mirror detection on the target animation to obtain each sub-mirror corresponding to the target animation;
and displaying the target animation and the feature information of the segment animation element corresponding to each sub-mirror.
A sub-mirror (storyboard shot) is a unit that decomposes the target animation shot by shot and describes each shot. Sub-mirror detection is performed on the target animation to obtain each sub-mirror corresponding to the target animation, and the target animation and the segment animation element feature information corresponding to each sub-mirror are then displayed so that the target animation can be edited.
For example, as shown in fig. 3, the target animation and a plurality of sub-mirrors are displayed, and the segment animation element feature information of sub-mirror 4 includes background feature information, character feature information (Mew and Wang A), dialogue feature information (the English-speaking mouse, Mickey Mouse), expression feature information (shock and blink), and action feature information (hands on hips).
After the terminal displays the target animation and the sub-mirrors, the terminal can respond to the editing operation aiming at the segment animation element characteristic information corresponding to the sub-mirrors, edit the sub-mirrors to obtain edited sub-mirrors, and generate edited animation corresponding to text information according to the edited sub-mirrors, so that the target animation is modified, and the obtained edited animation meets the requirements of an input object.
The editing operation may be selected according to the actual situation, for example, the editing operation may be deletion of the feature information of the segment animation element, replacement of the feature information (e.g., character) of the segment animation element, extension time, or adjustment of the order of the feature information of the segment animation element, which is not limited herein.
Optionally, the terminal may edit the sub-mirrors through a preview editing engine in response to the editing operation for the segment animation element feature information corresponding to the sub-mirrors, obtain the edited sub-mirrors, and generate the edited animation corresponding to the text information according to the edited sub-mirrors. The preview editing engine may be a Unity engine; that is, the Unity engine may serve as both the preview component and the animation editing component.
From the above, in the embodiment of the present application, the input text information is acquired and subjected to semantic recognition so as to display the recognition result, where the recognition result includes key information recognized based on natural language semantics and the key information includes a plurality of pieces of animation element feature information. In response to the animation generation operation for the text information, the target animation corresponding to the text information is generated, the target animation being generated according to the animation materials matched with the animation element feature information. The user therefore obtains the target animation corresponding to the text information simply by inputting the text information, with no other operations required, which simplifies the operation of generating an animation, improves the efficiency of creating animations, and reduces the cost.
The method described in the above embodiments is described in further detail below by way of example.
Referring to fig. 4, fig. 4 is a flowchart illustrating an animation generation method according to an embodiment of the application. The animation generation method flow may include:
S401, the terminal displays a text input interface of the client, and responds to editing operation of the text input interface to acquire input text information.
S402, the terminal performs semantic recognition on the text information based on a natural language processing technology to obtain key information, where the key information includes reference information, fuzzy description information, and a plurality of pieces of animation element feature information.
The terminal can generate a recognition toolkit (NLP SDK) based on the natural language processing technology and embed the recognition toolkit into the client, so that semantic recognition can be performed on the text information through the recognition toolkit in the client to obtain the key information.
Optionally, the terminal may perform sentence-breaking processing on the text information through the recognition toolkit in the client to obtain each sentence corresponding to the text information, and then perform semantic recognition on each sentence to obtain the key information corresponding to the text information. The key information may include, for example, subject-predicate-object feature sentences, time feature sentences, or place feature sentences, and these feature sentences may include animation element feature information, reference information, and fuzzy description information. The animation element feature information may include at least one of character feature information (for example, a subject-predicate-object feature sentence may contain a character), action feature information, expression feature information, background feature information, prop feature information, place feature information, and dialogue feature information, and the fuzzy description information refers to information whose meaning cannot be indicated explicitly.
The terminal can perform semantic recognition on each sentence through different sentence component recognition strategies of the recognition toolkit in the client to obtain the key information; when one sentence component recognition strategy changes, the other sentence component recognition strategies are not affected, which facilitates subsequent maintenance and expansion.
The different sentence component recognition strategies may include a subject recognition strategy, a predicate recognition strategy, a time recognition strategy, a place recognition strategy, and the like.
In the embodiment of the application, key information such as the time, place, and persons in the text information (a large piece of text) is automatically extracted through the natural language processing technology.
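A toy, self-contained sketch of this per-sentence recognition flow is given below; the regular-expression strategies stand in for the recognition toolkit and are illustrative only.

```python
import re
from typing import Dict, List

def recognize_key_information(text: str) -> List[Dict[str, List[str]]]:
    """Break the text into sentences, then run separate sentence component recognition
    strategies (time, place, role, action) on each sentence."""
    sentences = [s.strip() for s in re.split(r"[.;!?]", text) if s.strip()]
    strategies = {
        "time": lambda s: re.findall(r"long ago|finally", s, re.I),
        "place": lambda s: re.findall(r"forest|finish line", s, re.I),
        "role": lambda s: re.findall(r"rabbit|tortoise", s, re.I),
        "action": lambda s: re.findall(r"racing|running|sleep", s, re.I),
    }
    return [{kind: hits for kind, rule in strategies.items() if (hits := rule(sentence))}
            for sentence in sentences]

print(recognize_key_information("Long ago, in a forest there was a tortoise and a rabbit, and they were racing."))
```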
S403, the terminal determines the reference object corresponding to the reference information and extracts keywords from the key information, where the keywords include the animation element feature words corresponding to the animation element feature information.
For example, the text information is: "Long ago, in a forest there was a tortoise and a rabbit, and they were racing. The rabbit runs quickly, and after running for a while it starts to sleep; the tortoise runs slowly, but it does not rest for a moment and keeps running. Finally, the tortoise crosses the finish line first, and the rabbit says regretfully: I will never be proud again."
The text information contains the reference information "they", "it", and "I": the reference object of "they" is the tortoise and the rabbit, the reference object of the first "it" is the rabbit, the reference object of the second "it" is the tortoise, and the reference object of "I" is the rabbit.
The keywords are "long ago", "in a forest", "rabbit", "tortoise", "racing", "running", "sleep", "finish line", and "says", where the animation element feature words may be "in a forest", "rabbit", "tortoise", "racing", "running", "sleep", "finish line", and "says"; that is, the animation element feature words may be keywords representing places, keywords representing roles, keywords representing actions, and the like.
S404, the terminal determines scene attribute information corresponding to the text information according to the keywords, and predicts prediction information corresponding to the fuzzy description information according to the scene attribute information and the fuzzy description information.
For example, the text information is: "Long ago, in a forest there was a tortoise and a rabbit, and they were racing. The rabbit runs quickly, and after running for a while it starts to sleep; the tortoise runs slowly, but it does not rest for a moment and keeps running. Finally, the tortoise crosses the finish line first, and the rabbit says regretfully: I will never be proud again."
Since "long before" does not exist a specific time, the "long before" is a fuzzy description information, and the "long before" is determined as a virtual time according to a keyword, and there is no history background, that is, the prediction information corresponding to the "long before" is the "virtual time and there is no history background".
S405, the terminal displays the reference information, the reference object, the fuzzy description information, the prediction information, and the animation element feature words as the recognition result.
For example, the text information is: "Long ago, in a forest there was a tortoise and a rabbit, and they were racing. The rabbit runs quickly, and after running for a while it starts to sleep; the tortoise runs slowly, but it does not rest for a moment and keeps running. Finally, the tortoise crosses the finish line first, and the rabbit says regretfully: I will never be proud again."
The recognition result may be: "Long ago (virtual time, no historical background), in a forest there was a tortoise and a rabbit, and they (the rabbit and the tortoise) were racing. The rabbit runs quickly, and after running for a while it (the rabbit) starts to sleep; the tortoise runs slowly, but it (the tortoise) does not rest for a moment and keeps running. Finally, the tortoise crosses the finish line first, and the rabbit says regretfully: I (the rabbit) will never be proud again."
Where the gray-marked places represent the identified keywords. Alternatively, different types of keywords may be labeled in different labeling manners, for example, keywords representing time may be labeled in a first labeling manner (the first labeling manner may be gray labeling, for example), keywords representing places may be labeled in a second labeling manner (the second labeling manner may be green labeling, for example), keywords representing roles may be labeled in a third labeling manner (the third labeling manner may be yellow labeling, for example), and keywords representing actions may be labeled in a fourth labeling manner (the fourth labeling manner may be red labeling, for example).
In the embodiment of the application, after the text information is acquired, the terminal automatically performs semantic recognition on the text through the natural language processing technology to obtain the key information, determines the reference object of the reference information in the key information, extracts the keywords from the key information, determines the prediction information of the fuzzy description information in the key information according to the keywords, and then displays the keywords, the reference information, the reference object, the fuzzy description information, and the prediction information as the recognition result. In this way, the text information is recognized automatically and the target animation corresponding to the text information can be generated according to the recognition result without any other operation by the input object, so the input object can directly edit a large piece of text, the editing process does not disturb the input object, and the convenience of generating the target animation is improved.
For example, as shown in fig. 5, after obtaining the text information, the terminal performs sentence-breaking processing on the text information to obtain each sentence corresponding to the text information, and then performs semantic recognition on each sentence to obtain the key information corresponding to the text information, where the key information may include, for example, subject-predicate-object feature sentences, time feature sentences, place feature sentences, or the like. Then, the reference object of the reference information is determined, the keywords are extracted from the key information, the prediction information of the fuzzy description information is determined according to the keywords, and finally the keywords, the reference information, the reference object, the fuzzy description information, and the prediction information are displayed as the recognition result.
S406, the terminal responds to the animation generation operation for the text information, and the animation element characteristic words are sent to the server corresponding to the client through the client.
S407, the server acquires history matching records of all objects of the client, and judges whether history feature words matched with feature words of each animation element exist in the history matching records.
S408, if the history matching record contains the history feature words matched with each animation element feature word, the server takes the history animation materials corresponding to the history feature words matched with each animation element feature word in the history matching record as the animation materials matched with each animation element feature word.
In the embodiment of the application, the keywords can be recorded in the server, so that if similar keywords are used by other users, recommendations of animation materials can be given quickly.
S409, if the history matching record does not have the history characteristic words matched with the characteristic words of each animation element, the server screens out a plurality of animation element characteristic words from the keywords and determines the animation material type corresponding to the text information according to the keywords.
S4010, the server acquires an animation material set corresponding to the animation material type, and screens out the animation materials matched with the feature words of each animation element from the animation material set.
For example, when the animated element feature words are character feature words, the animated material that matches the character feature words may be an image that contains a character; when the feature words of the animation elements are feature words of props, the animation materials matched with the feature words of the props can be images containing the props; when the feature words of the animation element are background feature words, the animation materials matched with the background feature words can be images containing the background; when the feature words of the animation element are expression feature words, the animation materials matched with the expression feature words can be images containing expressions; when the animation element feature words are action feature words, the animation material that matches the action feature words may be an image that contains an action.
In the embodiment of the application, the resource files (the animation material sets) corresponding to the keywords are automatically matched, and the animation material with the highest matching degree is returned.
S4011, the server returns the animation materials matched with the feature words of each animation element to the client.
S4012, the terminal determines time axis information of the animation materials matched with the feature words of each animation element through the client, and displays the animation materials matched with the feature words of each animation element on the time axis information to obtain a target animation corresponding to the text information.
It should be noted that if a dialogue exists in the Text information, the dialogue in the Text information may be converted into a voice in the target animation through a Speech synthesis technology (TTS).
In the embodiment of the application, after the input object edits the text information, the target animation can be automatically generated according to the default resources (the animation materials matched with the feature words of the animation elements), and other operations are not required to be carried out by the input object, so that the process of creating the target animation is continuous and has strong operability.
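A simplified sketch of step S4012 is shown below; the per-material default duration and the timeline record format are assumptions made for illustration, not the time axis format used by the client.

```python
from typing import Dict, List

def assemble_target_animation(matched_materials: List[Dict[str, object]]) -> List[Dict[str, object]]:
    """Lay the animation materials matched with each animation element feature word out
    sequentially on time axis information to form the target animation."""
    timeline: List[Dict[str, object]] = []
    current_time = 0.0
    for material in matched_materials:
        duration = float(material.get("duration", 2.0))
        timeline.append({"start": current_time, "end": current_time + duration,
                         "material": material["name"]})
        current_time += duration
    return timeline

print(assemble_target_animation([{"name": "forest_background"},
                                 {"name": "rabbit_running", "duration": 3.0}]))
```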
S4013, the terminal performs sub-mirror detection on the target animation to obtain each sub-mirror corresponding to the target animation, and displays the target animation and the segment animation element characteristic words corresponding to each sub-mirror.
S4014, the terminal responds to the editing operation of the feature words of the segment animation elements corresponding to the sub-mirrors, edits the sub-mirrors to obtain edited sub-mirrors, and generates edited animation corresponding to the text information according to the edited sub-mirrors.
S4015, the terminal responds to the export operation of the edited animation, and exports the video corresponding to the edited animation.
In the embodiment of the application, after the terminal generates the target animation, the target animation can be adjusted to obtain the edited animation, so that the edited animation meets the requirement of the input object better.
In the embodiment of the application, semantic recognition is performed on the text information through the natural language processing technology to obtain the reference information, the fuzzy description information, and the animation element feature words, and the reference object corresponding to the reference information and the prediction information corresponding to the fuzzy description information are then determined. The reference information, the reference object, the fuzzy description information, the prediction information, and the animation element feature words are then displayed, and the target animation is generated according to the animation materials matched with the animation element feature words. The input object therefore only needs to input the text information for the target animation to be generated automatically, with no other operations required, which simplifies the animation generation process and improves the efficiency and convenience of animation generation.
The specific implementation manner and the corresponding beneficial effects in the embodiment of the present application may refer to the above-mentioned animation generation method embodiment, and the embodiment of the present application will not be described herein.
In order to facilitate better implementation of the animation generation method provided by the embodiment of the application, the embodiment of the application also provides a device based on the animation generation method. Where the meaning of nouns is the same as in the animation generation method described above, specific implementation details may be referred to in the description of the method embodiments.
For example, as shown in fig. 6, the animation generating device may include:
an obtaining module 601, configured to obtain input text information;
The display module 602 is configured to perform semantic recognition on the text information to display a recognition result, where the recognition result includes key information that is recognized based on natural language semantics, and the key information includes feature information of a plurality of animation elements;
the generating module 603 is configured to generate, in response to an animation generating operation for the text information, a target animation corresponding to the text information, where the target animation is an animation generated according to an animation material that matches the feature information of the animation element.
Optionally, the generating module 603 is specifically configured to perform:
Responding to animation generation operation aiming at text information, and screening out a plurality of animation element characteristic information from key information;
And determining the animation materials matched with the feature information of each animation element, and generating the target animation corresponding to the text information according to the animation materials matched with the feature information of each animation element.
Optionally, the generating module 603 is specifically configured to perform:
Determining the type of the animation material corresponding to the text information according to the key information;
Acquiring an animation material set corresponding to the animation material type;
And screening the animation materials matched with the characteristic information of each animation element from the animation material set.
Optionally, the generating module 603 is specifically configured to perform:
screening a plurality of first candidate animation materials matched with the characteristic information of each animation element from the animation material set;
determining emotion attribute information corresponding to the text information according to the key information;
And screening the animation materials matched with the characteristic information of each animation element from the first candidate animation materials according to the emotion attribute information.
Optionally, the generating module 603 is specifically configured to perform:
acquiring attribute information of an input object of text information;
And screening the animation materials matched with the characteristic information of each animation element and the attribute information from the animation material set.
Optionally, the generating module 603 is specifically configured to perform:
screening an initial animation material subset matched with the attribute information from the animation material set;
and screening the animation materials matched with the characteristic information of each animation element from the initial animation material subset.
Optionally, the generating module 603 is specifically configured to perform:
determining a target object matched with the input object according to the attribute information;
and taking the corresponding subset of the target object in the animation material set as an initial animation material subset matched with the attribute information.
Optionally, the generating module 603 is specifically configured to perform:
Acquiring a history matching record;
and taking the historical animation materials corresponding to the historical characteristic information matched with the characteristic information of each animation element in the historical matching record as the animation materials matched with the characteristic information of each animation element.
Optionally, the generating module 603 is specifically configured to perform:
determining time axis information of the animation materials matched with the feature information of each animation element;
And displaying the animation materials matched with the feature information of each animation element on the time axis information to obtain the target animation corresponding to the text information.
Optionally, the presentation module 602 is specifically configured to perform:
Carrying out semantic recognition on the text information to obtain key information corresponding to the text information, wherein the key information comprises reference information and a plurality of animation element characteristic information;
determining a reference object corresponding to the reference information;
and determining a recognition result according to the reference information, the reference object and the animation element characteristic information, and displaying the recognition result.
Optionally, the presentation module 602 is specifically configured to perform:
Carrying out semantic recognition on the text information to obtain key information corresponding to the text information, wherein the key information comprises fuzzy description information and a plurality of animation element characteristic information;
predicting the fuzzy description information to obtain prediction information corresponding to the fuzzy description information;
And determining a recognition result according to the fuzzy description information, the prediction information and the animation element characteristic information, and displaying the recognition result.
Optionally, the presentation module 602 is specifically configured to perform:
determining scene attribute information corresponding to the text information according to the key information;
and predicting prediction information corresponding to the fuzzy description information according to the scene attribute information and the fuzzy description information.
Optionally, the presentation module 602 is further configured to perform:
Performing sub-mirror detection on the target animation to obtain each sub-mirror corresponding to the target animation;
and displaying the target animation and the feature information of the segment animation element corresponding to each sub-mirror.
Optionally, the animation generating device further includes:
a modification module for performing:
responding to the editing operation of the segment animation element characteristic information corresponding to the split mirrors, and editing the split mirrors to obtain edited split mirrors;
and generating the edited animation corresponding to the text information according to the edited sub-mirror.
In the specific implementation, each module may be implemented as an independent entity, or may be combined arbitrarily, and implemented as the same entity or a plurality of entities, and the specific implementation and the corresponding beneficial effects of each module may be referred to the foregoing method embodiments, which are not described herein again.
The embodiment of the application further provides an electronic device, which may be a server or a terminal. Fig. 7 shows a schematic structural diagram of the electronic device according to the embodiment of the application. Specifically:
The electronic device may include a processor 701 of one or more processing cores, a memory 702 of one or more computer-readable storage media, a power supply 703, an input unit 704, and other components. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 7 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Wherein:
The processor 701 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing computer programs and/or modules stored in the memory 702, and invoking data stored in the memory 702. Optionally, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
The memory 702 may be used to store computer programs and modules, and the processor 701 performs various functional applications and data processing by executing the computer programs and modules stored in the memory 702. The memory 702 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, computer programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the electronic device, etc. In addition, the memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 702 may also include a memory controller to provide access to the memory 702 by the processor 701.
The electronic device further comprises a power supply 703 for powering the various components, preferably the power supply 703 is logically connected to the processor 701 by a power management system, whereby the functions of managing charging, discharging, and power consumption are performed by the power management system. The power supply 703 may also include one or more of any component, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, etc.
The electronic device may further comprise an input unit 704, which input unit 704 may be used for receiving input digital or character information and generating keyboard, mouse, joystick, optical or trackball signal inputs in connection with user settings and function control.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 701 in the electronic device loads executable files corresponding to the processes of one or more computer programs into the memory 702 according to the following instructions, and the processor 701 executes the computer programs stored in the memory 702, so as to implement various functions, for example:
acquiring input text information;
Carrying out semantic recognition on the text information to display a recognition result, wherein the recognition result comprises key information recognized based on natural language semantics, and the key information comprises a plurality of animation element characteristic information;
in response to an animation generation operation for text information, a target animation corresponding to the text information is generated, the target animation being an animation generated from an animation material that matches the animation element feature information.
The specific embodiments and the corresponding beneficial effects of the above operations can be referred to the detailed description of the animation generation method, and are not described herein.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the various methods of the above embodiments may be performed by a computer program, or by a computer program controlling related hardware, and the computer program may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored, the computer program being capable of being loaded by a processor to perform the steps of any one of the animation generation methods provided by the embodiment of the present application. For example, the computer program may perform the steps of:
acquiring input text information;
Carrying out semantic recognition on the text information to display a recognition result, wherein the recognition result comprises key information recognized based on natural language semantics, and the key information comprises a plurality of animation element characteristic information;
in response to an animation generation operation for text information, a target animation corresponding to the text information is generated, the target animation being an animation generated from an animation material that matches the animation element feature information.
The specific embodiments and the corresponding beneficial effects of each of the above operations can be found in the foregoing embodiments, and are not described herein again.
The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Since the computer program stored in the computer readable storage medium can execute the steps in any animation generation method provided by the embodiment of the present application, the beneficial effects that any animation generation method provided by the embodiment of the present application can achieve can be achieved, which are detailed in the previous embodiments and are not described herein.
Wherein according to an aspect of the application, a computer program product or a computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the above-described animation generation method.
The foregoing describes in detail an animation generation method, apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present application, and specific examples are used herein to illustrate the principles and implementations of the present application; the above examples are only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in light of the ideas of the present application; in summary, the contents of this description should not be construed as limiting the present application.
Claims (18)
1. An animation generation method, comprising:
acquiring input text information;
Carrying out semantic recognition on the text information to display a recognition result, wherein the recognition result comprises key information recognized based on natural language semantics, and the key information comprises a plurality of animation element characteristic information;
And generating a target animation corresponding to the text information in response to an animation generation operation for the text information, wherein the target animation is generated according to the animation materials matched with the animation element characteristic information.
2. The animation generation method according to claim 1, wherein the generating a target animation corresponding to the text information in response to an animation generation operation for the text information, comprises:
Screening a plurality of animation element feature information from the key information in response to an animation generation operation for the text information;
And determining the animation materials matched with the feature information of each animation element, and generating a target animation corresponding to the text information according to the animation materials matched with the feature information of each animation element.
3. The animation generation method according to claim 2, wherein the determining the animation material that matches each of the animation element feature information comprises:
determining the type of the animation material corresponding to the text information according to the key information;
Acquiring an animation material set corresponding to the animation material type;
And screening the animation materials matched with the feature information of each animation element from the animation material set.
4. The animation generation method of claim 3, wherein the screening the animation material matching each of the animation element feature information from the animation material set comprises:
Screening a plurality of first candidate animation materials matched with the feature information of each animation element from the animation material set;
determining emotion attribute information corresponding to the text information according to the key information;
and screening the animation materials matched with the feature information of each animation element from the first candidate animation materials according to the emotion attribute information.
5. The animation generation method of claim 3, wherein the screening the animation material matching each of the animation element feature information from the animation material set comprises:
acquiring attribute information of an input object of the text information;
And screening the animation materials matched with the characteristic information of each animation element and the attribute information from the animation material set.
6. The animation generation method of claim 5, wherein the screening the animation material that matches each of the animation element feature information and matches the attribute information from the animation material set comprises:
Screening an initial animation material subset matched with the attribute information from the animation material set;
And screening out the animation materials matched with the characteristic information of each animation element from the initial animation material subset.
7. The animation generation method of claim 6, wherein the screening the initial animation material subset matched with the attribute information from the animation material set comprises:
determining a target object matched with the input object according to the attribute information;
and taking a subset of the animation material set corresponding to the target object as the initial animation material subset matched with the attribute information.
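A combined sketch of claims 5 to 7: the attribute information of the input object (e.g. the user's age group) first maps to a target object and narrows the material set to an initial subset, and the per-feature matching then runs inside that subset. The attribute names and the age threshold are illustrative assumptions.

```python
def determine_target_object(attribute_info: dict) -> str:
    """Map the input object's attribute information to a target object / audience profile."""
    return "children" if attribute_info.get("age", 0) < 12 else "general"

def initial_subset(material_set: list[dict], attribute_info: dict) -> list[dict]:
    """Initial animation material subset matched with the attribute information."""
    target = determine_target_object(attribute_info)
    return [m for m in material_set if m.get("audience") == target]

def screen_with_attributes(material_set: list[dict],
                           feature_items: list[tuple[str, str]],
                           attribute_info: dict) -> list[dict]:
    subset = initial_subset(material_set, attribute_info)
    return [m for ftype, value in feature_items
            for m in subset if m["tags"].get(ftype) == value]
```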
8. The animation generation method according to claim 2, wherein the determining the animation material matched with each piece of animation element feature information comprises:
acquiring a historical matching record;
and taking the historical animation material corresponding to the historical feature information that matches each piece of animation element feature information in the historical matching record as the animation material matched with that piece of animation element feature information.
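The claim-8 reuse of earlier pairings could look like the sketch below, where the historical matching record is modelled as a plain dict keyed by feature items; a production system would persist it per user or session, and the record contents shown are invented.

```python
HISTORY_MATCH_RECORD: dict[tuple[str, str], str] = {
    ("role", "rabbit"): "rabbit_sprite_v2",  # material chosen for this feature in a past session
}

def match_from_history(feature_items: list[tuple[str, str]]) -> dict[tuple[str, str], str]:
    """For each feature item that has a historical match, return the previously used material."""
    return {item: HISTORY_MATCH_RECORD[item]
            for item in feature_items if item in HISTORY_MATCH_RECORD}
```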
9. The animation generation method according to claim 2, wherein the generating the target animation corresponding to the text information according to the animation materials matched with the pieces of animation element feature information comprises:
determining timeline information of the animation materials matched with the pieces of animation element feature information;
and displaying the animation materials matched with the pieces of animation element feature information on the timeline information to obtain the target animation corresponding to the text information.
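A minimal sketch of the claim-9 step: give every matched material a slot on a timeline and emit the ordered result. The default duration and the purely sequential layout are assumptions; the claim does not specify how slots are assigned.

```python
def build_timeline(matched_materials: list[dict], default_duration: float = 2.0) -> list[dict]:
    """Assign start/end times to each material; the ordered list stands in for the target animation."""
    timeline, cursor = [], 0.0
    for material in matched_materials:
        duration = material.get("duration", default_duration)
        timeline.append({"material": material["name"], "start": cursor, "end": cursor + duration})
        cursor += duration
    return timeline

if __name__ == "__main__":
    clips = build_timeline([{"name": "rabbit_sprite"}, {"name": "forest_bg", "duration": 3.0}])
    for clip in clips:
        print(clip)  # each entry is one material placed on the timeline
```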
10. The animation generation method according to claim 1, wherein the performing semantic recognition on the text information to display a recognition result comprises:
performing semantic recognition on the text information to obtain key information corresponding to the text information, wherein the key information comprises indication information and a plurality of pieces of animation element feature information;
determining an indication object corresponding to the indication information;
and determining a recognition result according to the indication information, the indication object and the animation element feature information, and displaying the recognition result.
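To illustrate claim 10, the sketch below resolves indication (referring) expressions such as "it" back to an entity mentioned earlier and surfaces the mapping in the recognition result. The rule of binding each indication word to the most recently mentioned role is a deliberately simple stand-in for real coreference resolution.

```python
def resolve_indications(key_info: dict) -> dict[str, str]:
    """Map each indication word to the most recently mentioned role."""
    roles = key_info.get("role", [])
    indications = key_info.get("indication", [])  # e.g. ["it", "he"]
    return {word: roles[-1] for word in indications if roles}

def build_recognition_result(key_info: dict) -> dict:
    return {
        "features": {k: v for k, v in key_info.items() if k != "indication"},
        "indications": resolve_indications(key_info),  # shown to the user for confirmation
    }
```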
11. The animation generation method according to claim 1, wherein the performing semantic recognition on the text information to display a recognition result comprises:
performing semantic recognition on the text information to obtain key information corresponding to the text information, wherein the key information comprises fuzzy description information and a plurality of pieces of animation element feature information;
predicting the fuzzy description information to obtain prediction information corresponding to the fuzzy description information;
and determining a recognition result according to the fuzzy description information, the prediction information and the animation element feature information, and displaying the recognition result.
12. The animation generation method according to claim 11, wherein the predicting the fuzzy description information to obtain the prediction information corresponding to the fuzzy description information comprises:
determining scene attribute information corresponding to the text information according to the key information;
and predicting the prediction information corresponding to the fuzzy description information according to the scene attribute information and the fuzzy description information.
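A sketch of claims 11 and 12: a vague phrase such as "some animal" is completed using the scene attribute inferred from the rest of the text. The scene-to-default mapping and the fallback scene are invented purely for illustration.

```python
SCENE_DEFAULTS = {
    "forest": {"some animal": "rabbit", "somewhere": "forest clearing"},
    "ocean":  {"some animal": "turtle", "somewhere": "beach"},
}

def determine_scene(key_info: dict) -> str:
    """Derive a scene attribute from the key information."""
    scenes = key_info.get("scene", [])
    return scenes[0] if scenes else "forest"

def predict_fuzzy(fuzzy_phrases: list[str], key_info: dict) -> dict[str, str]:
    """Return a prediction for each fuzzy description, conditioned on the scene attribute."""
    defaults = SCENE_DEFAULTS.get(determine_scene(key_info), {})
    return {phrase: defaults.get(phrase, "<unresolved>") for phrase in fuzzy_phrases}
```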
13. The animation generation method according to any one of claims 1 to 12, characterized by further comprising, after the generating of the target animation corresponding to the text information in response to the animation generation operation for the text information:
performing shot detection on the target animation to obtain shots corresponding to the target animation;
and displaying the target animation and the segment animation element feature information corresponding to each shot.
14. The animation generation method of claim 13, further comprising, after the displaying the target animation and the segment animation element feature information corresponding to each shot:
editing, in response to an editing operation on the segment animation element feature information corresponding to a shot, the shot to obtain an edited shot;
and generating an edited animation corresponding to the text information according to the edited shot.
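The sketch below covers claims 13 and 14 together: split the generated timeline into shots, expose each shot's feature information for editing, and rebuild the animation from the edited shots. Detecting shot boundaries by scene changes is an assumption; the claims leave the detection method open.

```python
def detect_shots(timeline: list[dict]) -> list[list[dict]]:
    """Group consecutive timeline entries that share the same scene tag into one shot."""
    shots, current = [], []
    for entry in timeline:
        if current and entry.get("scene") != current[-1].get("scene"):
            shots.append(current)
            current = []
        current.append(entry)
    if current:
        shots.append(current)
    return shots

def edit_shot(shots: list[list[dict]], index: int, new_features: dict) -> list[list[dict]]:
    """Apply edited feature information to one shot and return the edited shot list."""
    edited = [list(shot) for shot in shots]
    edited[index] = [{**entry, **new_features} for entry in edited[index]]
    return edited
```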
15. An animation generation device, comprising:
an acquisition module, configured to acquire input text information;
a display module, configured to perform semantic recognition on the text information to display a recognition result, wherein the recognition result comprises key information recognized based on natural language semantics, and the key information comprises a plurality of pieces of animation element feature information;
and a generation module, configured to generate, in response to an animation generation operation for the text information, a target animation corresponding to the text information, wherein the target animation is generated according to animation materials matched with the pieces of animation element feature information.
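One possible way to mirror the claim-15 module split (acquisition / display / generation) in code is shown below; the class name, constructor parameters, and injected backends are hypothetical and only trace the claim wording.

```python
class AnimationGenerationDevice:
    def __init__(self, recognizer, matcher, renderer):
        self.recognizer = recognizer  # semantic recognition backend
        self.matcher = matcher        # feature -> material matching backend
        self.renderer = renderer      # assembles matched materials into an animation

    def acquire(self, text: str) -> str:              # acquisition module
        return text.strip()

    def display_recognition(self, text: str) -> dict:  # display module
        return self.recognizer(text)  # recognition result containing the key information

    def generate(self, text: str):                     # generation module
        key_info = self.display_recognition(self.acquire(text))
        materials = self.matcher(key_info)
        return self.renderer(materials)
```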
16. An electronic device, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program in the memory to perform the animation generation method of any one of claims 1 to 14.
17. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded by a processor to perform the animation generation method of any one of claims 1 to 14.
18. A computer program product, characterized in that the computer program product comprises a computer program adapted to be loaded by a processor to perform the animation generation method of any one of claims 1 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310185255.9A CN118537464A (en) | 2023-02-22 | 2023-02-22 | Animation generation method, device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118537464A true CN118537464A (en) | 2024-08-23 |
Family
ID=92379744
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310185255.9A (CN118537464A, pending) | Animation generation method, device, electronic equipment and computer readable storage medium | 2023-02-22 | 2023-02-22
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118537464A (en) |
2023-02-22: CN application CN202310185255.9A filed; published as CN118537464A; legal status: Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |