CN114693848A - Method, device, electronic equipment and medium for generating two-dimensional animation - Google Patents

Method, device, electronic equipment and medium for generating two-dimensional animation

Info

Publication number
CN114693848A
CN114693848A (application CN202210290746.5A; granted as CN114693848B)
Authority
CN
China
Prior art keywords
information
real
action
prompt
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210290746.5A
Other languages
Chinese (zh)
Other versions
CN114693848B (en)
Inventor
Li Guanyu (黎贯宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shrubs Entertainment Cultural Technology Co ltd
Original Assignee
Shanxi Guanmu Culture Medium Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Guanmu Culture Medium Co ltd
Priority to CN202210290746.5A
Publication of CN114693848A
Application granted
Publication of CN114693848B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2222 Prompting

Abstract

The application relates to a method, an apparatus, an electronic device, and a medium for generating a two-dimensional animation, in the technical field of animation generation. The method comprises: acquiring scenario information, background image information, and character image information; matching a corresponding two-dimensional character model in a storage unit based on the character image information; generating prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information; receiving real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model; and generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information. The method and the apparatus improve the production efficiency of two-dimensional animation.

Description

Method, device, electronic equipment and medium for generating two-dimensional animation
Technical Field
The present application relates to the field of animation generation technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for automatically generating a two-dimensional animation.
Background
Two-dimensional animation is widely used in fields such as film, entertainment, education, and advertising. When producing a two-dimensional animation, the basic movements of a character often need to be created, such as basic limb movements (walking, running, striding, jumping) or facial expressions (joy, anger, sorrow, happiness), and a series of complex movements is then formed by appropriately combining these basic movements.
At present, when a two-dimensional animation is produced, a worker controls a character and the animation is then generated automatically. However, because workers have different operating habits, the resulting two-dimensional animation is often of poor quality and must be edited and modified repeatedly, so production efficiency is low.
Disclosure of Invention
In order to improve the production efficiency of two-dimensional animations, the application provides a method, a device, an electronic device and a medium for generating two-dimensional animations.
In a first aspect, the present application provides a method for generating a two-dimensional animation, which adopts the following technical scheme:
a method of generating a two-dimensional animation, comprising:
acquiring scenario information, background image information, and character image information;
matching a corresponding two-dimensional character model in a storage unit based on the character image information;
generating prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information;
receiving real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model;
and generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information.
By adopting this technical scheme, the scenario information, background image information, and character image information are acquired as the basic information for generating the animation. A corresponding two-dimensional character model is matched in the storage unit based on the character image information, which is the precondition for the worker to control the character. Prompt action information and prompt line information of the two-dimensional character model are generated based on the scenario information and the background image information to assist the worker in generating the two-dimensional animation. Real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model are received, and a two-dimensional animation is generated based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information. By providing the prompt action information and prompt line information, erroneous operations during generation are reduced, the number of editing and modification passes is reduced, and the efficiency of producing the two-dimensional animation is effectively improved.
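The five steps of the first aspect can be sketched end to end as follows. This Python sketch is purely illustrative: the helper names, the dictionary-based storage unit, and the fixed placeholder inputs are assumptions made for illustration, not the patent's implementation.

```python
# Illustrative sketch of the claimed five-step flow; every name here is an
# assumption made for illustration, not the patent's actual API.
def match_model(character_image, storage_unit):
    # Step 2: match the 2D character model stored for this character image.
    return storage_unit[character_image]

def make_prompts(scenario_info, background_image):
    # Step 3: derive prompt action information and prompt line information
    # (the detailed rules appear in steps S1031-S1034 below).
    return {"actions": ["walk forward", "jump"], "lines": scenario_info["lines"]}

def generate_animation(scenario_info, background_image, character_image, storage_unit):
    model = match_model(character_image, storage_unit)
    prompts = make_prompts(scenario_info, background_image)
    # Step 4: real-time input would arrive here from a keyboard and microphone;
    # fixed placeholders stand in for it.
    realtime_actions, realtime_speech = ["walk forward", "jump"], ["hello"]
    # Step 5: combine everything into the finished animation (represented
    # here as a plain dictionary).
    return {"model": model, "prompts": prompts,
            "actions": realtime_actions, "speech": realtime_speech}

storage = {"lamb.png": "lamb skeletal model"}
scenario = {"lines": ["It's snowing today, let's go play in the snow together!"]}
print(generate_animation(scenario, "yard.png", "lamb.png", storage))
```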
In another possible implementation manner, the generating of the prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information includes:
performing feature extraction on the background image information to determine the size of an object in the background image information;
determining a character size of the two-dimensional character model in the background image information based on the scenario information;
generating the prompt action information based on the scenario information, the size of the object, and the size of the person;
and generating the prompt line information based on the scenario information.
By adopting this technical scheme, feature extraction is performed on the background image information to determine the size of an object in it, and the size of the person of the two-dimensional character model in the background image information is determined based on the scenario information, so that the size of the person corresponds to the size of the object and the picture is more harmonious. The prompt action information is generated based on the scenario information, the size of the object, and the size of the person, so that the worker can input real-time action information guided by it; the prompt line information is generated based on the scenario information, which reduces the chance that the worker forgets lines while dubbing, reduces later modification and editing, and makes generation of the two-dimensional animation more efficient.
In another possible implementation manner, the generating of the prompt action information based on the scenario information, the size of the object, and the size of the person includes:
judging, based on the scenario information, whether the two-dimensional character model needs to generate an action;
if so, determining the prompt action information based on the scenario information, the size of the object, and the size of the person, wherein the prompt action information comprises action content and an action sequence;
and if not, determining that the prompt action information is to remain still.
By adopting this technical scheme, whether the two-dimensional character model needs to generate an action is judged based on the scenario information. If so, the worker needs to input real-time action information, and before that input the prompt action information is determined based on the scenario information, the size of the object, and the size of the person. The prompt action information comprises the action content and the action sequence, so the worker knows in advance which actions to input and in what order, and can input them more accurately. If the two-dimensional character model does not need to generate an action, the prompt action information is determined to be "remain still", prompting the worker not to input action information. The prompt action information helps the worker input accurate real-time action information, so the two-dimensional animation looks better and is generated more efficiently.
In another possible implementation manner, the generating of a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information includes:
generating the two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if not, outputting a prompt question, wherein the prompt question is used for asking whether the real-time action information and the real-time voice information need to be input again;
receiving and determining a selection result corresponding to the prompt question, wherein the selection result is either yes or no;
if the prompt action information is the same as the real-time action information, or the selection result received is no, judging, based on the scenario information, whether generation of the two-dimensional animation is complete;
and if generation is not complete, or the selection result received is yes, cyclically performing the steps of receiving the real-time action information and real-time voice information input by the user for the two-dimensional character model, generating the two-dimensional animation based on the real-time action information and the real-time voice information, judging whether the prompt action information is the same as the real-time action information, outputting the prompt question and receiving the corresponding selection result if they differ, and judging, based on the scenario information, whether generation of the two-dimensional animation is complete if they are the same or the selection result is no, until generation is complete.
By adopting this technical scheme, the two-dimensional animation is generated based on the real-time action information and the real-time voice information, and whether the prompt action information is the same as the real-time action information is judged. If not, the real-time action information may have been input incorrectly, which would spoil the two-dimensional animation, so a prompt question is output asking whether the user needs to input the real-time action information and real-time voice information again. A selection result corresponding to the prompt question is received; if the prompt action information is the same as the real-time action information, or re-input is not needed, whether generation of the two-dimensional animation is complete is judged based on the scenario information. If generation is not complete, or re-recording is needed, the real-time action information and real-time voice information continue to be received and the two-dimensional animation continues to be generated until it is complete. Repeatedly comparing the real-time action information with the prompt action information during generation, and asking whether the user needs to re-input, makes the animation look better, makes control easier for the worker, and improves the efficiency of generating the two-dimensional animation.
In another possible implementation manner, the generating of the two-dimensional animation based on the real-time action information and the real-time voice information includes:
controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
and combining the voice and the action.
By adopting this technical scheme, the two-dimensional character model is controlled to generate corresponding actions based on the real-time action information so that the two-dimensional animation is vivid, the two-dimensional character model is controlled to generate corresponding voice based on the real-time voice information so that the dubbing of the character in the animation is completed, and the voice and the actions are combined to obtain the complete two-dimensional animation.
In another possible implementation manner, the controlling of the two-dimensional character model to generate the corresponding action based on the real-time action information includes:
matching the action coordinates of the two-dimensional character model corresponding to the real-time action information in the storage unit;
determining the action duration of the two-dimensional character model at the action coordinates;
and outputting the action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
By adopting this technical scheme, the action coordinates of the two-dimensional character model corresponding to the real-time action information are matched in the storage unit, thereby determining the action of the two-dimensional character model. The action duration of the two-dimensional character model at the action coordinates is determined, and the action corresponding to the two-dimensional character model is output based on the action coordinates and the action duration, completing the dynamic effect of the character in the two-dimensional animation.
In another possible implementation manner, the controlling of the two-dimensional character model to generate corresponding speech based on the real-time voice information, and the combining of the speech and the action, include:
matching tone information corresponding to the two-dimensional character model in a storage unit;
generating dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information;
analyzing the real-time voice information and determining an analysis result;
matching a mouth shape corresponding to the two-dimensional character model in a storage unit based on the analysis result;
and combining the mouth shape with the action and simultaneously outputting the dubbing information.
By adopting this technical scheme, the tone information corresponding to the two-dimensional character model is matched in the storage unit; since each animation character has a different tone, this achieves a better dubbing effect. Dubbing information corresponding to the two-dimensional character model is generated based on the real-time voice information and the tone information. The real-time voice information is analyzed to determine an analysis result, and a mouth shape corresponding to the two-dimensional character model is matched in the storage unit based on that result. The mouth shape is combined with the action so that the character is more vivid, and the dubbing information is output at the same time, completing generation of the two-dimensional animation.
In a second aspect, the present application provides an apparatus for generating a two-dimensional animation, which adopts the following technical solutions:
an apparatus for generating a two-dimensional animation, comprising:
an acquisition module, configured to acquire scenario information, background image information, and character image information;
a matching module, configured to match a corresponding two-dimensional character model in a storage unit based on the character image information;
a prompt generation module, configured to generate prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information;
a receiving module, configured to receive real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model;
and an animation generation module, configured to generate a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information.
By adopting this technical scheme, the acquisition module acquires the scenario information, background image information, and character image information as the basic information for generating the animation. The matching module matches the corresponding two-dimensional character model in the storage unit based on the character image information, which is the precondition for the worker to control the character. The prompt generation module generates prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information to assist the worker in generating the two-dimensional animation. The receiving module receives real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model, and the animation generation module generates a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information. By providing the prompt action information and prompt line information, erroneous operations during generation are reduced, the number of editing and modification passes is reduced, and the efficiency of producing the two-dimensional animation is effectively improved.
In another possible implementation manner, when generating the prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information, the prompt generation module is specifically configured to:
extracting features of the background image information to determine the size of an object in the background image information;
determining a character size of the two-dimensional character model in the background image information based on the scenario information;
generating the prompt action information based on the scenario information, the size of the object, and the size of the person;
and generating the prompt line information based on the scenario information.
In another possible implementation manner, when generating the prompt action information based on the scenario information, the size of the object, and the size of the person, the prompt generation module is specifically configured to:
judging, based on the scenario information, whether the two-dimensional character model needs to generate an action;
if so, determining the prompt action information based on the scenario information, the size of the object, and the size of the person, wherein the prompt action information comprises action content and an action sequence;
and if not, determining that the prompt action information is to remain still.
In another possible implementation manner, when generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information, the animation generation module is specifically configured to:
generating the two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if not, outputting a prompt question, wherein the prompt question is used for asking whether the real-time action information and the real-time voice information need to be input again;
receiving and determining a selection result corresponding to the prompt question, wherein the selection result is either yes or no;
if the prompt action information is the same as the real-time action information, or the selection result received is no, judging, based on the scenario information, whether generation of the two-dimensional animation is complete;
and if generation is not complete, or the selection result received is yes, cyclically performing the steps of receiving the real-time action information and real-time voice information input by the user for the two-dimensional character model, generating the two-dimensional animation based on them, judging whether the prompt action information is the same as the real-time action information, outputting the prompt question and receiving the corresponding selection result if they differ, and judging, based on the scenario information, whether generation of the two-dimensional animation is complete if they are the same or the selection result is no, until generation is complete.
In another possible implementation manner, when generating the two-dimensional animation based on the real-time action information and the real-time voice information, the animation generation module is specifically configured to:
controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
and combining the voice and the action.
In another possible implementation manner, when controlling the two-dimensional character model to generate a corresponding action based on the real-time action information, the animation generation module is specifically configured to:
matching the action coordinates of the two-dimensional character model corresponding to the real-time action information in the storage unit;
determining the action duration of the two-dimensional character model at the action coordinates;
and outputting the action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
In another possible implementation manner, when controlling the two-dimensional character model to generate corresponding speech based on the real-time voice information and combining the speech and the action, the animation generation module is specifically configured to:
matching tone information corresponding to the two-dimensional character model in a storage unit;
generating dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information;
analyzing the real-time voice information and determining an analysis result;
matching a mouth shape corresponding to the two-dimensional character model in a storage unit based on the analysis result;
and combining the mouth shape with the action and simultaneously outputting the dubbing information.
In a third aspect, the present application provides an electronic device, which adopts the following technical solutions:
an electronic device, comprising:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the method of generating a two-dimensional animation according to any one of the possible implementations of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer-readable storage medium storing a computer program that can be loaded by a processor to execute the method of generating a two-dimensional animation shown in any one of the possible implementations of the first aspect.
In summary, the present application includes at least one of the following beneficial technical effects:
1. The scenario information, background image information, and character image information are acquired as the basic information for generating the animation. A corresponding two-dimensional character model is matched in the storage unit based on the character image information, which is the precondition for the worker to control the character. Prompt action information and prompt line information of the two-dimensional character model are generated based on the scenario information and the background image information to assist the worker in generating the two-dimensional animation. Real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model are received, and a two-dimensional animation is generated based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information. By providing the prompt action information and prompt line information, erroneous operations during generation are reduced, the number of editing and modification passes is reduced, and the efficiency of producing the two-dimensional animation is effectively improved;
2. Feature extraction is performed on the background image information to determine the size of an object in it, and the size of the person of the two-dimensional character model in the background image information is determined based on the scenario information, so that the size of the person corresponds to the size of the object and the picture is more harmonious. The prompt action information is generated based on the scenario information, the size of the object, and the size of the person, so that the worker can input real-time action information guided by it; the prompt line information is generated based on the scenario information, which reduces the chance that the worker forgets lines while dubbing, reduces later modification and editing, and makes generation of the two-dimensional animation more efficient.
Drawings
Fig. 1 is a flowchart illustrating a method for generating a two-dimensional animation according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an apparatus for generating a two-dimensional animation according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 4 is a schematic flow diagram of generating a two-dimensional animation.
Detailed Description
The present application is described in further detail below with reference to figures 1-4.
After reading this specification, a person skilled in the art may make modifications to the embodiments as needed without making an inventive contribution, but such modifications are protected by patent law only within the scope of the claims of the present application.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein merely describes an association relationship between objects, indicating that three relationships are possible; for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship, unless otherwise specified.
The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.
The embodiment of the present application provides a method for generating a two-dimensional animation, executed by an electronic device. The electronic device may be a server or a terminal device. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. The terminal device may be a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like, but is not limited thereto. The terminal device and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in this embodiment. As shown in fig. 1, the method includes step S101, step S102, step S103, step S104, and step S105, wherein,
step S101, obtaining scenario information, background image information and person image information.
In the embodiment of the present application, the electronic device may obtain this information from a database or from a cloud server; the scenario information, background image information, and character image information may be input in advance by the worker. For example:
the electronic equipment obtains script information of the two-dimensional animation from the database, wherein the script information is a story about the lamb playing snow, background information is houses where the lamb lives, stones in front of the houses and snowmen piled up in the lamb, and the character image information is original pictures of the lamb.
In step S102, a corresponding two-dimensional character model is matched in the storage unit based on the character image information.
In the embodiment of the present application, the storage unit may be a database of the electronic device or a removable storage unit. The two-dimensional character model is a two-dimensional skeletal animation of the character, established in advance by the worker based on the original picture; the character's actions and expressions can be changed by changing the parameters of the skeletal animation. Taking step S101 as an example:
the electronic device obtains a two-dimensional character model about the lamb in a database, and the two-dimensional character model comprises parameters of various actions and expressions of the lamb.
Step S103, generating prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information.
In the embodiment of the present application, the electronic device generates the prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information. The prompt action information is used to prompt the worker to input the corresponding real-time action information. The prompt line information consists of the characters' lines in the scenario information and can be output as text on the display screen of the electronic device, so that the worker can input real-time voice information guided by it, making generation of the two-dimensional animation more efficient.
Step S104, receiving real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model.
In the embodiment of the present application, the electronic device receives the real-time action information and real-time voice information input by the user for the two-dimensional character model. The real-time action information may be a signal sent to the electronic device by the worker through a keyboard or another device, and the real-time voice information may be speech uttered by the worker and collected by the electronic device. For example:
the electronic equipment receives a signal that a user walks rightwards through a character sent by the triggering keyboard, and the electronic equipment collects voice information of 'coming to snow back'.
Step S105, generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information.
In the embodiment of the present application, the electronic device generates the two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information, so that the generated two-dimensional animation looks better and stays closer to the scenario.
In a possible implementation manner of the embodiment of the present application, the generating of the prompt action information and the prompt line information of the two-dimensional character model based on the scenario information and the background image information in step S103 specifically includes step S1031 (not shown in the figure), step S1032 (not shown in the figure), step S1033 (not shown in the figure), and step S1034 (not shown in the figure), wherein,
in step S1031, feature extraction is performed on the background image information to determine the size of the object in the background image information.
In the embodiment of the present application, the electronic device performs feature extraction on the background image information to determine the size of each object in it: the electronic device filters the background image by gray value and extracts the boundary contours of all objects in the background image information. After a boundary contour is extracted, the electronic device establishes a rectangular coordinate system with any point on the contour as the origin, extracts the highest and lowest points of the contour in the vertical direction to obtain the height of the object, and extracts the leftmost and rightmost boundary points in the horizontal direction to obtain the width of the object; the height and width together give the size of the object. Taking step S101 as an example:
the electronic equipment extracts the boundary outline of the house where the lamb lives, and then the height of the house of the lamb is 8 cm, and the width of the house of the lamb is 6 cm;
the electronic equipment extracts the boundary contour of the stone in front of the house, and then the height of the stone is 1 cm, and the width of the stone is 1 cm;
the electronic equipment extracts the boundary contour of the snowman of the lamb pile, and then the snowman is 3 cm in height and 2.5 cm in width.
In step S1032, the character size of the two-dimensional character model in the background image information is determined based on the scenario information.
In the embodiment of the present application, the electronic device determines the size of the person of the two-dimensional character model in the background image information based on the scenario information, which includes introductions of the characters and of other items. Taking step S1031 as an example:
the height of the lamb is 1 m, the house where the lamb lives is 1.5 m, and the ratio of the size of the house where the lamb lives to the size of the lamb is 1.5:1, which are calculated by the electronic equipment, in the character introduction of the lamb in the scenario information, so that the electronic equipment determines that the size of the lamb in the background image information is as follows:
8 ÷ 1.5=5.3 cm.
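The scaling rule of step S1032 is a single division: the real-world ratio given by the scenario information fixes the on-screen character size relative to a measured object. A minimal sketch with the example numbers above:

```python
# Values taken from the example above; units are on-screen centimetres.
house_height_on_screen = 8.0   # measured from the background image (S1031)
house_to_lamb_ratio = 1.5      # 1.5 : 1, given by the scenario information

lamb_height_on_screen = house_height_on_screen / house_to_lamb_ratio
print(round(lamb_height_on_screen, 1))  # 5.3
```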
Step S1033, generating the prompt action information based on the scenario information, the size of the object, and the size of the person.
In the embodiment of the present application, the electronic device generates the prompt action information based on the scenario information, the size of the object, and the size of the person. Because the actions to be performed differ for objects and characters of different sizes in the scenario information, the electronic device outputs the related prompt action information to reduce the worker's confusion about the actions.
Step S1034, generating the prompt line information based on the scenario information.
In the embodiment of the present application, the electronic device generates the prompt line information based on the scenario information, which includes the lines of the various characters, for example:
the scenario information comprises the lines of the lamb: when snowing today, we go to the snow bar together! The electronic device may output on the screen "snow today, we go to a snow Bar together! The electronic equipment can also send the prompting speech information to the terminal equipment of the user, so that the staff can input real-time voice information based on the prompting speech information, and the situation that the staff forget words is reduced.
In a possible implementation manner of the embodiment of the present application, the generating of the prompt action information based on the scenario information, the size of the object, and the size of the person in step S1033 specifically includes step S1033a (not shown in the figure), step S1033b (not shown in the figure), and step S1033c (not shown in the figure), wherein,
step S1033a, it is determined whether the two-dimensional character model needs to generate an action based on the scenario information.
For the embodiment of the application, the electronic device determines whether the two-dimensional character model needs to generate an action based on the scenario information, for example:
the scenario information comprises words such as jumps and the like, the electronic equipment captures the part of speech and the semantics of the lines based on the natural language technology, and the electronic equipment determines that the two-dimensional character model needs to generate actions. The electronic device may also determine whether the two-dimensional model character needs to generate an action in other manners, which is not limited herein.
In step S1033b, if so, the prompt action information is determined based on the scenario information, the size of the object, and the size of the person.
The prompt action information comprises action content and action sequence.
In the embodiment of the present application, if the electronic device judges that the two-dimensional character model needs to generate an action, it determines the prompt action information based on the scenario information, the size of the object, and the size of the person. Taking step S1032 and step S1033a as examples:
the electronic equipment determines that the lamb needs to go out of the room based on the plot information, the lamb walks to the side of the snowman through a stone to squat and pile the snowman, the electronic equipment determines that the first action of the lamb is forward straight going based on the plot information, the second action is left turning, the third action is left straight going, the fourth action is jumping, the fifth action is left straight going, the sixth action is squat, and the seventh action is touching. Suppose that a user inputs real-time action information by triggering a keyboard, the action corresponding to the S + D key is forward straight, the action corresponding to the L key is left turn, the action corresponding to the L + D key is left straight, the action corresponding to the J key is jumping, the action corresponding to the D key is squatting, and the action corresponding to the F key is stroking.
The prompt action information may be the text "S+D walk forward, L turn left, L+D walk left, J jump, L+D walk left, D squat, F stroke" output on the display screen of the electronic device, or text or voice information sent to the worker's terminal device, which is not limited here.
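The key bindings above are naturally a lookup table from key combinations to actions, with "keep still" as the default when no binding matches. A sketch using the example bindings, which the text itself presents only as assumptions:

```python
# Key combinations -> actions, using the illustrative bindings from the text.
KEY_ACTIONS = {
    frozenset({"s", "d"}): "walk forward",
    frozenset({"l"}): "turn left",
    frozenset({"l", "d"}): "walk left",
    frozenset({"j"}): "jump",
    frozenset({"d"}): "squat",
    frozenset({"f"}): "stroke",
}

def action_for(*keys):
    # Order-insensitive lookup; unbound combinations mean "keep still".
    return KEY_ACTIONS.get(frozenset(keys), "keep still")

print(action_for("d", "s"))  # walk forward
print(action_for("j"))       # jump
```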
The electronic device may set a first preset height ratio and, based on the lamb's jumping distance, judge how many jumps the lamb needs to get past the stone. If the stone-to-lamb height ratio is larger than the first preset height ratio and the width of the stone is smaller than the distance the lamb covers in one jump, it is determined that the lamb can pass with one jump; if the width of the stone is smaller than the distance covered in two jumps (but not in one), it is determined that the lamb can pass with two jumps.
The electronic device determines from the background image information that the stone in front of the house is 1 cm high and 1 cm wide and that the lamb is 5.3 cm tall. Assuming the lamb covers 1.5 cm in one jump in the background image and the first preset height ratio is 1:8, the stone-to-lamb height ratio is 1:5.3, which is larger than the first preset height ratio, and the width of the stone is smaller than the distance of one jump, so the lamb can get over the stone with a single jump.
In another implementation manner, the electronic device may set a second preset height ratio smaller than the first preset height ratio; if the height ratio is smaller than the second preset height ratio, the electronic device determines that the lamb needs to go around the stone and cannot pass by jumping.
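Put together, the two paragraphs above describe a small decision rule: compare the stone-to-lamb height ratio against the two preset ratios and the stone's width against the jump distance. The sketch below codes the comparisons as literally stated; the value of the second preset height ratio is an assumption, since the text only says it is smaller than the first:

```python
def jumps_to_cross(stone_height, stone_width, lamb_height, jump_distance,
                   first_ratio=1 / 8, second_ratio=1 / 16):
    # Returns the number of jumps needed, or None if the lamb must detour.
    height_ratio = stone_height / lamb_height
    if height_ratio < second_ratio:
        return None                       # below the second preset ratio: detour
    if height_ratio > first_ratio and stone_width < jump_distance:
        return 1                          # one jump clears the stone
    if stone_width < 2 * jump_distance:
        return 2                          # two jumps clear the stone
    return None

# Example from the text: stone 1 cm x 1 cm, lamb 5.3 cm tall, 1.5 cm per jump.
print(jumps_to_cross(1.0, 1.0, 5.3, 1.5))  # 1
```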
In step S1033c, if not, the prompt action information is determined to remain still.
In the embodiment of the present application, if the electronic device judges that the two-dimensional character model does not need to generate an action, the electronic device determines that the prompt action information is "keep still". Taking step S1033b as an example:
the electronic equipment determines that the lamb snowman does not need to generate other actions after the lamb snowman is squat and touched based on the plot information, and the prompting action information output by the electronic equipment is kept static and used for prompting the staff not to input other real-time action information. The prompt action information may be a text message of "keep still" output on a display screen of the electronic device in a text form, or may be a text message or a voice message sent to a terminal device of a worker, which is not limited herein.
In a possible implementation manner of the embodiment of the present application, the generating of the two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information, and the prompt line information in step S105 specifically includes step S1051 (not shown in the figure), step S1052 (not shown in the figure), step S1053 (not shown in the figure), step S1054 (not shown in the figure), step S1055 (not shown in the figure), and step S1056 (not shown in the figure), wherein,
step S1051, a two-dimensional animation is generated based on the real-time motion information and the real-time voice information.
In the embodiment of the present application, the electronic device receives the real-time action information and the real-time voice information and generates the two-dimensional animation based on them, thereby completing production of the two-dimensional animation.
Step S1052, judging whether the prompt action information is the same as the real-time action information.
In the embodiment of the present application, the worker may, for example, input real-time action information for turning left by triggering the L key, and the electronic device judges, based on the real-time action information, whether it is the same as the prompt action information. Taking step S1033b as an example:
if the electronic device receives the L key triggered by the staff, and the action corresponding to the L key is left turning, the electronic device detects that the real-time action information is the same as the L key in the prompt action information. And if the electronic equipment receives a D key triggered by a worker and the action corresponding to the D key is squatting, the electronic equipment judges that the real-time action information is different from the L key in the prompt action information.
Step S1053, if they are different, outputting a prompt question.
Wherein the prompt question is used for inquiring whether real-time action information and real-time voice information need to be input again.
In the embodiment of the present application, if the electronic device judges that the real-time action information differs from the prompt action information, it outputs a prompt question asking whether the real-time action information and the real-time voice information need to be input again. The prompt may be the text "Do you need to input the real-time action information again?" sent to the worker's terminal device, a spoken question played through a speaker, or a prompt in another form.
Step S1054, receiving a selection result corresponding to the prompt question, and determining the selection result.
Wherein the selection result is either yes or no.
In the embodiment of the present application, the electronic device receives and determines the selection result corresponding to the prompt question sent by the user; the user may select yes or no by triggering content on the touch screen or by pressing keys on the keyboard.
Step S1055, if they are the same, or if the selection result received is no, judging, based on the scenario information, whether generation of the two-dimensional animation is complete.
In the embodiment of the present application, if the electronic device judges that the real-time action information is the same as the prompt action information, or it receives "no" for the prompt question (that is, the real-time action information and real-time voice information do not need to be input again), the electronic device judges, based on the scenario information, whether generation of the two-dimensional animation is complete; the scenario information may be divided into a plurality of chapters, of which only the first has been generated so far. Taking step S1033b as an example:
after the electronic equipment generates the scenario that the lamb needs to go out of the room and go to the snowman by a stone and squat and pile the snowman, the scenario information may still exist that the lamb runs to the rabbit family to find the rabbit to play, and tells the snowman piled by the rabbit, and the electronic equipment judges that the two-dimensional animation is not generated.
Step S1056, if generation is not complete, or if the selection result received is yes, cyclically performing the steps of receiving the real-time action information and real-time voice information input by the user for the two-dimensional character model, generating the two-dimensional animation based on the real-time action information and the real-time voice information, judging whether the prompt action information is the same as the real-time action information, outputting the prompt question and receiving the corresponding selection result if they differ, and judging, based on the scenario information, whether generation of the two-dimensional animation is complete if they are the same or the selection result is no, until generation is complete.
In the embodiment of the present application, as shown in fig. 4, if the electronic device judges that the two-dimensional animation is not completely generated, or if it receives "yes" as the selection result for the prompt question, the electronic device continues to receive the real-time action information and real-time voice information input by the user for the two-dimensional character model, generates the two-dimensional animation again based on them, judges whether the prompt action information is the same as the real-time action information, outputs the prompt question and receives the corresponding selection result if they differ, and judges, based on the scenario information, whether generation is complete if they are the same or the selection result is no;
as long as the electronic device judges that the two-dimensional animation is not completely generated, it continues to receive the real-time action information and real-time voice information input by the user for the two-dimensional character model and performs this series of generation operations until the two-dimensional animation is complete. The generated animation thus looks better, fewer post-editing modifications are needed, and the efficiency of producing the two-dimensional animation improves.
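The loop of steps S1051 to S1056 can be sketched as follows. Input, rendering, and the user's selection are stubbed with placeholders; only the control flow (generate, compare with the prompt, ask about re-input, repeat until the scenario is covered) mirrors the text:

```python
def receive_realtime():
    # Placeholder for step S104: keyboard action + microphone speech.
    return "jump", "hello"

def ask_reinput():
    # Placeholder for steps S1053/S1054: the user's yes/no selection result.
    return False

def record_animation(prompt_actions):
    segments = []
    while len(segments) < len(prompt_actions):       # S1055: not finished yet
        action, speech = receive_realtime()          # step S104
        segments.append((action, speech))            # S1051: generate a segment
        expected = prompt_actions[len(segments) - 1]
        if action != expected and ask_reinput():     # S1052/S1053/S1054
            segments.pop()                           # S1056: re-record this segment
    return segments

print(record_animation(["jump", "squat"]))
```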
In a possible implementation manner of the embodiment of the present application, the generating of the two-dimensional animation based on the real-time action information and the real-time voice information in step S1051 specifically includes step S10511 (not shown in the figure), step S10512 (not shown in the figure), and step S10513 (not shown in the figure), wherein,
in step S10511, the two-dimensional character model is controlled based on the real-time motion information to generate a corresponding motion.
In the embodiment of the present application, the electronic device controls the two-dimensional character model to generate corresponding actions based on the real-time action information, such as basic limb movements (walking, running, striding, jumping) or facial expressions (joy, anger, sorrow, happiness).
Step S10512, controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information.
In the embodiment of the present application, the electronic device controls the two-dimensional character model to generate corresponding speech, such as "hello", "goodbye", or "thank you", based on the real-time voice information input by the user.
Step S10513, combining the voice and the action.
In the embodiment of the present application, the electronic device combines the voice and the action to generate a complete two-dimensional animation, for example:
the electronic device combines the jumping action with the hello voice to generate an animation effect of speaking hello while jumping.
In a possible implementation manner of the embodiment of the present application, the controlling of the two-dimensional character model to generate the corresponding action based on the real-time action information in step S10511 specifically includes step S10511a (not shown in the figure), step S10511b (not shown in the figure), and step S10511c (not shown in the figure), wherein,
in step S10511a, the motion coordinates of the two-dimensional character model corresponding to the real-time motion information are matched in the storage unit.
In the embodiment of the present application, the electronic device matches in the storage unit the action coordinates of the two-dimensional character model corresponding to the real-time action information; the electronic device establishes a rectangular spatial coordinate system with the center of the character as the origin and represents the coordinates of each of the character's limbs, for example:
the coordinates of the right foot are (-0.77, 1.29, 0.15), the coordinates of the left foot are (0.77, 1.29, 0.15), and if the real-time motion information received by the electronic device is a jump, the electronic device matches the coordinates of the left foot and the right foot corresponding to the jump in the storage unit, and it is assumed that the coordinates of the right foot after the jump of the person matched by the electronic device in the storage unit are (-0.77, 2.5, 1.15), and the coordinates of the left foot after the jump are (0.77, 2.5, 1.15).
In step S10511b, the action duration of the two-dimensional character model at the action coordinates is determined.
In the embodiment of the present application, the electronic device determines the action duration of the two-dimensional character model at the action coordinates. Taking step S10511a as an example:
the electronic equipment knows that the character only performs simple jumping without other complex actions based on the scenario information, the electronic equipment determines that the conventional jumping action is 0.5 second, the electronic equipment determines that the action duration of the two-dimensional character model at the jumping action coordinate is 0.5 second, and the coordinate before jumping is recovered after 0.5 second.
In step S10511c, an action corresponding to the two-dimensional character model is output based on the action coordinates and the action duration.
In the embodiment of the present application, taking step S10511b as an example, the electronic device outputs an action in which the two-dimensional character model changes from still to jumping, and the jumping lasts for 0.5 second and then the character comes to rest again.
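Steps S10511a to S10511c reduce to a coordinate lookup, a duration lookup, and a two-keyframe timeline that returns to the rest pose. A sketch using the example coordinates and the 0.5-second jump above; the table layout is an assumption:

```python
# Rest pose and per-action limb coordinates from the example above (x, y, z).
REST_POSE = {"right_foot": (-0.77, 1.29, 0.15), "left_foot": (0.77, 1.29, 0.15)}
ACTION_COORDS = {
    "jump": {"right_foot": (-0.77, 2.5, 1.15), "left_foot": (0.77, 2.5, 1.15)},
}
ACTION_DURATIONS = {"jump": 0.5}  # seconds, for simple conventional actions

def play_action(action):
    coords = ACTION_COORDS[action]                 # S10511a: match coordinates
    duration = ACTION_DURATIONS[action]            # S10511b: action duration
    # S10511c: hold the action pose for its duration, then restore the rest pose.
    return [(0.0, coords), (duration, REST_POSE)]

for time, pose in play_action("jump"):
    print(time, pose)
```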
In a possible implementation manner of the embodiment of the present application, step S10512 of controlling the two-dimensional character model to generate the corresponding voice based on the real-time voice information and step S10513 of combining the voice and the action specifically include step S10512a (not shown in the figure), step S10512b (not shown in the figure), step S10513a (not shown in the figure), step S10513b (not shown in the figure) and step S10513c (not shown in the figure), wherein,
in step S10512a, the timbre information corresponding to the two-dimensional character model is matched in the storage unit.
For the embodiment of the application, the electronic device matches the tone information corresponding to the two-dimensional character model in the storage unit. The tone information of each character in the storage unit is named after the character's name or another special identifier, so the electronic device obtains the tone information corresponding to a character by matching on that name. For example:
the electronic device names the tone information after the character's name, so the electronic device matches "lamb" to obtain the tone information corresponding to the character, namely the lamb.
In step S10512b, dubbing information corresponding to the two-dimensional character model is generated based on the real-time speech information and the tone information.
For the embodiment of the present application, the electronic device generates dubbing information corresponding to the two-dimensional character model based on the real-time speech information and the tone information, taking step S10512a as an example:
if the real-time voice information received by the electronic device is "Ihaya" and the electronic device has matched the tone information of the lamb, the electronic device outputs "Ihaya" in the lamb's tone information as the lamb's dubbing.
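Steps S10512a and S10512b can be sketched in the same illustrative spirit, assuming the tone information in the storage unit is keyed by character name and that some text-to-speech backend sits behind the hypothetical synthesize() function; the table contents and parameters are invented for illustration:

# Hypothetical storage unit: character name -> tone (timbre) parameters.
TONE_TABLE = {
    "lamb": {"pitch_shift": 4.0, "speaking_rate": 1.1},
}

def synthesize(text, tone):
    # Placeholder for a real TTS / voice-conversion call; returns fake audio bytes here.
    return ("<audio text=%r tone=%r>" % (text, tone)).encode("utf-8")

def generate_dubbing(character, real_time_speech):
    # S10512a: match the tone information by the character's name in the storage unit;
    # S10512b: render the user's real-time speech in that tone as the character's dubbing.
    tone = TONE_TABLE[character]
    return synthesize(real_time_speech, tone)

print(generate_dubbing("lamb", "Ihaya"))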
Step S10513a, analyzing the real-time voice information and determining the analysis result.
For the embodiment of the present application, the electronic device receives the real-time voice and performs phoneme analysis on it, for example:
the real-time voice information received by the electronic device is "Thank you so much" ("tai xie xie ni le"). The electronic device performs phoneme analysis on this speech, and the analysis result is that the speech contains 10 phonemes: t, ai, x, ie, x, ie, n, i, l, e.
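As a rough illustration of the phoneme analysis in step S10513a, and assuming the real-time speech has already been recognized into pinyin syllables by an upstream recognizer (a real analyzer would operate on the audio itself), a simplified initial/final split might look like this:

INITIALS = ("zh", "ch", "sh", "b", "p", "m", "f", "d", "t", "n", "l",
            "g", "k", "h", "j", "q", "x", "z", "c", "s", "r", "y", "w")

def split_syllable(syllable):
    # Split one pinyin syllable into its initial and final phonemes.
    for initial in INITIALS:
        if syllable.startswith(initial) and len(syllable) > len(initial):
            return [initial, syllable[len(initial):]]
    return [syllable]  # a syllable with no initial, e.g. "ai"

# "Thank you so much" -> "tai xie xie ni le" -> 10 phonemes.
syllables = ["tai", "xie", "xie", "ni", "le"]
phonemes = [p for s in syllables for p in split_syllable(s)]
print(len(phonemes), phonemes)  # 10 ['t', 'ai', 'x', 'ie', 'x', 'ie', 'n', 'i', 'l', 'e']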
In step S10513b, the mouth shape corresponding to the two-dimensional character model is matched in the storage unit based on the analysis result.
For the embodiment of the present application, the electronic device matches the mouth shape corresponding to the two-dimensional character in the storage unit based on the analysis result, taking step S10513a as an example:
mouth shape information exists for each phoneme. The electronic device matches, in the storage unit, the mouth shapes corresponding to the 10 phonemes t, ai, x, ie, x, ie, n, i, l, e, and then generates the complete mouth shape information of the sentence "Thank you so much" based on the order in which these phonemes occur in the initial real-time voice information.
Step S10513c combines the mouth shape and the motion, and outputs dubbing information at the same time.
For the embodiment of the present application, the electronic device combines the mouth shape information with the action and outputs the dubbing information at the same time, for example:
the electronic device combines the complete mouth shape information of "Thank you so much" with the character's jumping action, and controls the character to emit the dubbing information of "Thank you so much" in the corresponding tone information, thereby generating a complete animation combining voice, mouth shape and action.
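Continuing the sketches above, steps S10513b and S10513c might be approximated as follows; the phoneme-to-mouth-shape table is hypothetical, and a production implementation would time-align the mouth track to the dubbing audio rather than spreading the shapes evenly over the action frames:

# Hypothetical storage unit: phoneme -> mouth shape identifier.
MOUTH_TABLE = {"t": "open_small", "ai": "open_wide", "x": "spread",
               "ie": "spread_open", "n": "closed_narrow", "i": "spread_narrow",
               "l": "tongue_up", "e": "relaxed_open"}

def match_mouth_shapes(phonemes):
    # S10513b: one mouth shape per phoneme, kept in the order in which the
    # phonemes occurred in the initial real-time voice information.
    return [MOUTH_TABLE[p] for p in phonemes]

def combine(action_frames, mouth_shapes, dubbing):
    # S10513c: lay the mouth shapes over the action frames and attach the
    # dubbing track, yielding one combined animation clip.
    per_shape = max(1, len(action_frames) // max(1, len(mouth_shapes)))
    track = [mouth_shapes[min(i // per_shape, len(mouth_shapes) - 1)]
             for i in range(len(action_frames))]
    return {"frames": action_frames, "mouth_track": track, "audio": dubbing}

clip = combine(list(range(13)),  # stand-in frames, e.g. from the jump sketch above
               match_mouth_shapes(["t", "ai", "x", "ie", "x", "ie", "n", "i", "l", "e"]),
               b"<audio>")
print(clip["mouth_track"][:5])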
The above embodiments describe a method of generating a two-dimensional animation from the perspective of the method flow; the following embodiments describe an apparatus for generating a two-dimensional animation from the perspective of virtual modules or virtual units, as detailed below.
An embodiment of the present application provides an apparatus 20 for generating a two-dimensional animation, and as shown in fig. 2, the apparatus 20 for generating a two-dimensional animation may specifically include:
an obtaining module 201, configured to obtain plot information, background image information and character image information;
a matching module 202 for matching the corresponding two-dimensional character model in the storage unit based on the character image information;
the prompt generating module 203 is used for generating prompt action information and prompt speech information of the two-dimensional character model based on the plot information and the background image information;
the receiving module 204 is configured to receive real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user;
and the animation generation module 205 is used for generating a two-dimensional animation based on the plot information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information.
By adopting the above technical solution, the obtaining module 201 acquires plot information, background image information and character image information, obtaining the basic information for generating the animation. The matching module 202 matches the corresponding two-dimensional character model in the storage unit based on the character image information, which establishes the precondition for operation and control by the staff. The prompt generating module 203 generates prompt action information and prompt speech information of the two-dimensional character model based on the plot information and the background image information, further assisting the staff in generating the two-dimensional animation. The receiving module 204 receives the real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user, and the animation generation module 205 generates the two-dimensional animation based on the plot information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information. By providing the prompt action information and the prompt speech information, misoperation during generation of the two-dimensional animation by the staff is reduced, the number of times the two-dimensional animation must be clipped and modified is reduced, and the efficiency of producing the two-dimensional animation is effectively improved.
In a possible implementation manner of the embodiment of the present application, when generating the prompt action information and prompt speech information of the two-dimensional character model based on the plot information and the background image information, the prompt generating module 203 is specifically configured to:
extracting features of the background image information to determine the size of an object in the background image information;
determining the character size of the two-dimensional character model in the background image information based on the plot information;
generating the prompt action information based on the plot information, the size of the object and the size of the character;
and generating the prompt speech information based on the plot information.
In a possible implementation manner of the embodiment of the present application, when generating the prompt action information based on the plot information, the size of the object and the size of the character, the prompt generating module 203 is specifically configured to (a simplified sketch follows this list):
judging whether the two-dimensional character model needs to generate an action or not based on the plot information;
if so, determining the prompt action information based on the plot information, the size of the object and the size of the character, wherein the prompt action information comprises action content and action sequence;
and if not, determining that the prompt action information is to remain static.
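A minimal sketch of this prompt-generation flow, assuming the background image has been reduced to a small occupancy grid and the plot information to a plain dictionary (both are simplifying assumptions standing in for a real vision pipeline and plot format):

def object_size(grid):
    # Feature-extraction stand-in: bounding-box height/width of the non-zero cells.
    cells = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v]
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return max(rows) - min(rows) + 1, max(cols) - min(cols) + 1

def generate_prompts(plot, grid):
    # Derive prompt action information and prompt speech information from the
    # plot information and the measured object size (all plot keys are hypothetical).
    obj_h, obj_w = object_size(grid)
    char_h = plot.get("character_scale", 0.5) * obj_h  # character size in the scene
    if plot.get("needs_action"):
        prompt_action = {"content": plot["action"], "sequence": plot.get("sequence", 1)}
    else:
        prompt_action = "remain static"
    return {"object_size": (obj_h, obj_w), "character_height": char_h,
            "prompt_action": prompt_action, "prompt_speech": plot.get("lines", "")}

grid = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(generate_prompts({"needs_action": True, "action": "jump", "lines": "hello"}, grid))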
In a possible implementation manner of this embodiment, when the animation generation module 205 generates a two-dimensional animation based on the plot information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information, the animation generation module is specifically configured to:
generating a two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if not, outputting a prompt question, wherein the prompt question is used for inquiring whether real-time action information and real-time voice information need to be input again;
receiving a selection result corresponding to the prompt question, and determining the selection result, wherein the selection result comprises yes and no;
if the two are the same, or the received selection result is no, judging whether generation of the two-dimensional animation is complete based on the plot information;
if generation is not complete, or the received selection result is yes, cyclically executing the steps of receiving the real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user, generating the two-dimensional animation based on the real-time action information and the real-time voice information, judging whether the prompt action information is the same as the real-time action information, outputting the prompt question and receiving the corresponding selection result if they differ, and judging whether generation of the two-dimensional animation is complete based on the plot information if they are the same or the received selection result is no, until generation is complete (this loop is sketched below).
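The loop described above can be expressed as the following control-flow sketch; the callbacks get_user_input, ask_user and render are hypothetical placeholders for the device's actual input, dialog and rendering paths, and the prompt list is assumed to be derived from the plot information:

def generate_animation(prompts, get_user_input, ask_user, render):
    # For each prompt action derived from the plot information: capture the user's
    # real-time input, render a clip, compare it against the prompt action, and
    # optionally ask for re-entry before moving on.
    clips = []
    for prompt_action in prompts:
        while True:
            action, speech = get_user_input()  # real-time action and voice information
            clip = render(action, speech)      # two-dimensional animation clip
            if action == prompt_action:        # same as the prompt action?
                clips.append(clip)
                break
            # They differ: output the prompt question.
            if ask_user("Re-enter the real-time action and voice information?"):
                continue                       # selection result is yes: redo the input
            clips.append(clip)                 # selection result is no: keep the clip
            break
    return clips

inputs = iter([("wave", "hi"), ("jump", "hello")])
print(generate_animation(prompts=["jump"],
                         get_user_input=lambda: next(inputs),
                         ask_user=lambda question: True,
                         render=lambda action, speech: (action, speech)))
# [('jump', 'hello')]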
In a possible implementation manner of the embodiment of the present application, when the animation generation module 205 generates the two-dimensional animation based on the real-time action information and the real-time voice information, it is specifically configured to:
controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
and combining the voice and the action.
In a possible implementation manner of the embodiment of the present application, when the animation generation module 205 controls the two-dimensional character model to generate a corresponding action based on the real-time action information, the animation generation module is specifically configured to:
matching action coordinates of the two-dimensional character model corresponding to the real-time action information in the storage unit;
determining the action duration of the two-dimensional character model at the action coordinates;
and outputting the action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
In a possible implementation manner of the embodiment of the present application, when controlling the two-dimensional character model to generate the corresponding voice based on the real-time voice information and combining the voice and the action, the animation generation module 205 is specifically configured to:
matching tone information corresponding to the two-dimensional character model in a storage unit;
generating dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information;
analyzing the real-time voice information and determining an analysis result;
matching mouth shapes corresponding to the two-dimensional character models in the storage unit based on the analysis result;
combines the mouth shape and the action, and simultaneously outputs dubbing information.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In an embodiment of the present application, an electronic device is provided. As shown in fig. 3, the electronic device 30 includes a processor 301 and a memory 303, wherein the processor 301 is coupled to the memory 303, for example via a bus 302. Optionally, the electronic device 30 may also include a transceiver 304. It should be noted that the transceiver 304 is not limited to one in practical applications, and the structure of the electronic device 30 does not constitute a limitation on the embodiments of the present application.
The processor 301 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules and circuits described in connection with this disclosure. The processor 301 may also be a combination implementing a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 302 may include a path that transfers information between the above components. The bus 302 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 302 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
The Memory 303 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired application code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The memory 303 is used for storing application program codes for executing the scheme of the application, and the processor 301 controls the execution. The processor 301 is configured to execute application program code stored in the memory 303 to implement the aspects illustrated in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals), fixed terminals such as digital TVs and desktop computers, and servers. The electronic device shown in fig. 3 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
The present application provides a computer-readable storage medium on which a computer program is stored; when run on a computer, the program enables the computer to execute the corresponding content of the foregoing method embodiments. Compared with the related art, in the embodiment of the application the electronic device acquires the plot information, background image information and character image information, obtaining the basic information for generating the animation. The electronic device matches the corresponding two-dimensional character model in the storage unit based on the character image information, establishing the precondition for operation and control by the staff. The electronic device generates prompt action information and prompt speech information of the two-dimensional character model based on the plot information and the background image information, further assisting the staff in generating the two-dimensional animation. The electronic device receives the real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user, and generates the two-dimensional animation based on the plot information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information. By providing the prompt action information and the prompt speech information, misoperation during generation of the two-dimensional animation by the staff is reduced, the number of times the two-dimensional animation must be clipped and modified is reduced, and the efficiency of producing the two-dimensional animation is effectively improved.
It should be understood that although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing descriptions are only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A method of generating a two-dimensional animation, comprising:
obtaining plot information, background image information and character image information;
matching a corresponding two-dimensional character model in a storage unit based on the character image information;
generating prompt action information and prompt speech information of the two-dimensional character model based on the plot information and the background image information;
receiving real-time action information and real-time voice information corresponding to the two-dimensional character model input by a user;
and generating a two-dimensional animation based on the plot information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information.
2. The method of claim 1, wherein generating the prompt action information and prompt speech information of the two-dimensional character model based on the plot information and the background image information comprises:
extracting features of the background image information to determine the size of an object in the background image information;
determining a character size of the two-dimensional character model in the background image information based on the plot information;
generating the prompt action information based on the plot information, the size of the object and the size of the character;
and generating the prompt speech information based on the plot information.
3. The method of claim 2, wherein the generating the prompt action information based on the plot information, the size of the object and the size of the character comprises:
judging whether the two-dimensional character model needs to generate an action or not based on the plot information;
if so, determining the prompt action information based on the plot information, the size of the object and the size of the character, wherein the prompt action information comprises action content and action sequence;
and if not, determining that the prompt action information is kept static.
4. The method of claim 1, wherein generating the two-dimensional animation based on the plot information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information comprises:
generating the two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if not, outputting a prompt question, wherein the prompt question is used for inquiring whether the real-time action information and the real-time voice information need to be input again;
receiving a selection result corresponding to the prompt question, and determining the selection result, wherein the selection result comprises yes and no;
if the two are the same, or the received selection result is no, judging whether the two-dimensional animation has been completely generated based on the plot information;
and if generation is not complete, or the received selection result is yes, cyclically executing the steps of receiving the real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user, generating the two-dimensional animation based on the real-time action information and the real-time voice information, judging whether the prompt action information is the same as the real-time action information, outputting the prompt question and receiving the corresponding selection result if they differ, and judging whether the two-dimensional animation has been completely generated based on the plot information if they are the same or the received selection result is no, until generation is complete.
5. The method of claim 4, wherein generating the two-dimensional animation based on the real-time action information and the real-time voice information comprises:
controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
and combining the voice and the action.
6. The method of claim 5, wherein controlling the two-dimensional character model to generate corresponding actions based on the real-time action information comprises:
matching the action coordinates of the two-dimensional character model corresponding to the real-time action information in the storage unit;
determining the action duration of the two-dimensional character model at the action coordinates;
and outputting the action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
7. The method of claim 5, wherein controlling the two-dimensional character model to generate the corresponding voice based on the real-time voice information, and combining the voice and the action, comprises:
matching tone information corresponding to the two-dimensional character model in a storage unit;
generating dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information;
analyzing the real-time voice information and determining an analysis result;
matching a mouth shape corresponding to the two-dimensional character model in a storage unit based on the analysis result;
and combining the mouth shape with the action and simultaneously outputting the dubbing information.
8. An apparatus for generating a two-dimensional animation, comprising:
the acquisition module is used for acquiring plot information, background image information and character image information;
a matching module for matching a corresponding two-dimensional character model in a storage unit based on the character image information;
the prompt generating module is used for generating prompt action information and prompt speech information of the two-dimensional character model based on the plot information and the background image information;
the receiving module is used for receiving real-time action information and real-time voice information corresponding to the two-dimensional character model input by a user;
and the animation generation module is used for generating a two-dimensional animation based on the plot information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of generating a two-dimensional animation according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of generating a two-dimensional animation according to any one of claims 1 to 7.
CN202210290746.5A 2022-03-23 2022-03-23 Method, device, electronic equipment and medium for generating two-dimensional animation Active CN114693848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210290746.5A CN114693848B (en) 2022-03-23 2022-03-23 Method, device, electronic equipment and medium for generating two-dimensional animation

Publications (2)

Publication Number Publication Date
CN114693848A (en) 2022-07-01
CN114693848B CN114693848B (en) 2023-09-12

Family

ID=82139058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210290746.5A Active CN114693848B (en) 2022-03-23 2022-03-23 Method, device, electronic equipment and medium for generating two-dimensional animation

Country Status (1)

Country Link
CN (1) CN114693848B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001037221A1 (en) * 1999-11-16 2001-05-25 Possibleworlds, Inc. Image manipulation method and system
JP2005038160A (en) * 2003-07-14 2005-02-10 Oki Electric Ind Co Ltd Image generation apparatus, image generating method, and computer readable recording medium
JP2008299493A (en) * 2007-05-30 2008-12-11 Kaoru Sumi Content creation support system and computer program
US7554542B1 (en) * 1999-11-16 2009-06-30 Possible Worlds, Inc. Image manipulation method and system
KR20090126450A (en) * 2008-06-04 2009-12-09 에스케이 텔레콤주식회사 Scenario-based animation service system and method
CN106780673A (en) * 2017-02-13 2017-05-31 杨金强 A kind of animation method and system
EP3176787A1 (en) * 2015-12-01 2017-06-07 Wonderlamp Industries GmbH Method and system for generating an animated movie
CN109333544A (en) * 2018-09-11 2019-02-15 厦门大学 A kind of image exchange method for the marionette performance that spectators participate in
US20190197755A1 (en) * 2016-02-10 2019-06-27 Nitin Vats Producing realistic talking Face with Expression using Images text and voice
CN113436602A (en) * 2021-06-18 2021-09-24 深圳市火乐科技发展有限公司 Virtual image voice interaction method and device, projection equipment and computer medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
黎贯宇 (Li Guanyu): "Exploration of the Use of Facial Makeup Art in Animation and Comic Illustration Modeling" ("脸谱艺术在动漫插画造型中的运用探微"), 《今传媒》 (Today's Mass Media), 5 May 2017 *

Also Published As

Publication number Publication date
CN114693848B (en) 2023-09-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240202

Address after: Room 3108, 3rd Floor, Building 35, No. 10 Jiuxianqiao Road, Chaoyang District, Beijing, 100020

Patentee after: Beijing Shrubs Entertainment Cultural Technology Co.,Ltd.

Country or region after: China

Address before: 030082 room 1101, 11 / F, block B, building 1, No. 190, Longxing street, Taiyuan Xuefu Park, Shanxi comprehensive reform demonstration zone, Taiyuan City, Shanxi Province

Patentee before: SHANXI GUANMU CULTURE MEDIUM Co.,Ltd.

Country or region before: China
