CN114693848B - Method, device, electronic equipment and medium for generating two-dimensional animation


Info

Publication number: CN114693848B (application CN202210290746.5A)
Authority: CN (China)
Prior art keywords: information, real, action, time, dimensional
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN114693848A
Inventor: 黎贯宇
Current assignee: Beijing Shrubs Entertainment Cultural Technology Co ltd
Original assignee: Shanxi Guanmu Culture Medium Co ltd
Application filed by Shanxi Guanmu Culture Medium Co ltd
Priority to CN202210290746.5A
Publication of CN114693848A, application granted, publication of CN114693848B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2222 Prompting

Abstract

The application relates to a method, an apparatus, an electronic device and a medium for generating a two-dimensional animation, in the technical field of animation generation. The method includes: acquiring scenario information, background image information and character image information; matching a corresponding two-dimensional character model in a storage unit based on the character image information; generating prompt action information and prompt line information for the two-dimensional character model based on the scenario information and the background image information; receiving real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model; and generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information. The application improves the production efficiency of two-dimensional animation.

Description

Method, device, electronic equipment and medium for generating two-dimensional animation
Technical Field
The present application relates to the field of animation generation, and in particular, to a method, an apparatus, an electronic device, and a medium for automatically generating a two-dimensional animation.
Background
Two-dimensional animation has a wide range of applications, such as film and television, entertainment, education and advertising. In the process of producing a two-dimensional animation, some basic actions usually need to be produced for a character first, such as basic limb actions like walking, running, striding and jumping, or facial expressions such as joy, anger, sorrow and delight; a series of complex actions is then formed by appropriately combining these basic actions.
At present, when producing a two-dimensional animation, a worker can control a character so that the animation is generated automatically. However, because operation habits differ between workers, the resulting two-dimensional animation is often poor and needs repeated editing and modification, so production efficiency is low.
Disclosure of Invention
In order to improve the production efficiency of two-dimensional animation, the application provides a method, a device, electronic equipment and a medium for generating two-dimensional animation.
In a first aspect, the present application provides a method for generating a two-dimensional animation, which adopts the following technical scheme:
a method of generating a two-dimensional animation, comprising:
acquiring scenario information, background image information and character image information;
matching a corresponding two-dimensional character model in a storage unit based on the character image information;
generating prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information;
receiving real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model;
and generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information.
By adopting this technical scheme, the scenario information, background image information and character image information are acquired as the basic information for generating the animation. The corresponding two-dimensional character model is matched in the storage unit based on the character image information, establishing the precondition for the worker's control. Prompt action information and prompt line information of the two-dimensional character model are generated based on the scenario information and the background image information, assisting the worker in generating the two-dimensional animation. Real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model are received, and the two-dimensional animation is generated based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information. Setting the prompt action information and the prompt line information reduces misoperation while the worker generates the two-dimensional animation, which reduces how often the two-dimensional animation must be edited and modified and effectively improves production efficiency.
In another possible implementation manner, the generating the prompting action information and the prompting speech information of the two-dimensional character model based on the scenario information and the background image information includes:
extracting features of the background image information to determine the size of an object in the background image information;
determining a character size of the two-dimensional character model in the background image information based on the scenario information;
generating the prompting action information based on the scenario information, the size of the object and the size of the person;
and generating the prompt line information based on the scenario information.
By adopting this technical scheme, feature extraction is performed on the background image information to determine the sizes of the objects in it, and the character size of the two-dimensional character model in the background image information is determined based on the scenario information, so that the character size corresponds to the object sizes and the picture looks more coordinated. Prompt action information is generated based on the scenario information, the object size and the character size, so that the worker can input real-time action information following the prompts; the prompt line information is generated based on the scenario information, which reduces the chance that the worker forgets the lines while dubbing, reduces later modification and editing, and makes generation of the two-dimensional animation more efficient.
In another possible implementation manner, the generating the prompting action information based on the scenario information, the size of the object, and the character size includes:
judging whether the two-dimensional character model needs to generate actions or not based on the scenario information;
if so, determining the prompting action information based on the scenario information, the size of the object and the size of the person, wherein the prompting action information comprises action content and action sequence;
and if not, determining the prompt action information to be "keep still".
By adopting this technical scheme, whether the two-dimensional character model needs to generate an action is judged based on the scenario information. If it does, the worker needs to input real-time action information, so before that input the prompt action information is determined based on the scenario information, the object size and the character size; the prompt action information includes the action content and the action sequence, and knowing these in advance lets the worker input the action information more accurately. If the two-dimensional character model does not need to generate an action, the prompt action information is determined to be "keep still", prompting the worker not to input action information. The prompt action information makes it easier for the worker to input accurate real-time action information, so the two-dimensional animation looks better and is generated more efficiently.
In another possible implementation manner, the generating the two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information includes:
generating the two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if they are different, outputting a prompt question, wherein the prompt question is used for asking whether the real-time action information and the real-time voice information need to be input again;
receiving a selection result corresponding to the prompt question and determining the selection result, wherein the selection result is either yes or no;
if they are the same, or "no" is received, judging based on the scenario information whether the generation of the two-dimensional animation is completed;
and if the generation is not completed, or "yes" is received, cyclically executing the steps of: receiving the real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model; generating the two-dimensional animation based on the real-time action information and the real-time voice information; judging whether the prompt action information is the same as the real-time action information; if not, outputting the prompt question and receiving the selection result corresponding to the prompt question; and if they are the same, or "no" is received, judging based on the scenario information whether the two-dimensional animation is completely generated; until the generation is completed.
By adopting this technical scheme, the two-dimensional animation is generated based on the real-time action information and the real-time voice information. Whether the prompt action information is the same as the real-time action information is then judged; if they differ, the real-time action information may have been input incorrectly, which would affect the quality of the two-dimensional animation, so a prompt question is output asking whether the user needs to re-input the real-time action information and real-time voice information. The selection result corresponding to the prompt question is received; if the prompt action information matches the real-time action information, or re-input is not needed, whether the generation of the two-dimensional animation is complete is judged based on the scenario information. If generation is not complete, or re-recording is needed, real-time action information and real-time voice information continue to be received and the two-dimensional animation continues to be generated until it is complete. Repeatedly checking whether the real-time action information matches the prompt action information, and asking the user whether to re-input, makes the two-dimensional animation better, makes it easier for the worker to operate, and improves generation efficiency.
In another possible implementation manner, the generating the two-dimensional animation based on the real-time action information and the real-time voice information includes:
controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
the speech and the action are combined.
By adopting this technical scheme, the two-dimensional character model is controlled to generate corresponding actions based on the real-time action information, making the two-dimensional animation vivid; the two-dimensional character model is controlled to generate corresponding voice based on the real-time voice information, completing the dubbing of the character in the animation; and the voice and the actions are combined to obtain the complete two-dimensional animation.
In another possible implementation, the controlling the two-dimensional character model to generate the corresponding action based on the real-time action information includes:
matching action coordinates of the two-dimensional character model corresponding to the real-time action information in the storage unit;
determining the action duration of the two-dimensional character model at the action coordinates;
and outputting the action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
By adopting this technical scheme, the action coordinates of the two-dimensional character model corresponding to the real-time action information are matched in the storage unit, determining the model's action. The action duration of the two-dimensional character model at those action coordinates is determined, and the corresponding action is output based on the action coordinates and the action duration, completing the dynamic effect of the character in the two-dimensional animation.
In another possible implementation manner, the controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information, and combining the voice and the action includes:
matching tone information corresponding to the two-dimensional character model in a storage unit;
generating dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information;
analyzing the real-time voice information and determining an analysis result;
matching the mouth shape corresponding to the two-dimensional character model in a storage unit based on the analysis result;
and combining the mouth shape with the action and outputting the dubbing information at the same time.
By adopting this technical scheme, the tone information corresponding to the two-dimensional character model is matched in the storage unit; since each animated character has a different tone, this achieves a better dubbing effect. Dubbing information corresponding to the two-dimensional character model is generated based on the real-time voice information and the tone information. The real-time voice information is analyzed, the analysis result is determined, and the mouth shapes corresponding to the two-dimensional character model are matched in the storage unit based on the analysis result. Combining the mouth shapes with the actions makes the character more vivid, and the dubbing information is output at the same time, completing the generation of the two-dimensional animation.
In a second aspect, the present application provides a device for generating a two-dimensional animation, which adopts the following technical scheme:
an apparatus for generating a two-dimensional animation, comprising:
the acquisition module is used for acquiring scenario information, background image information and character image information;
the matching module is used for matching the corresponding two-dimensional character model in the storage unit based on the character image information;
the prompt generation module is used for generating prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information;
the receiving module is used for receiving real-time action information and real-time voice information, input by a user, corresponding to the two-dimensional character model;
and the animation generation module is used for generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information.
By adopting this technical scheme, the acquisition module acquires the scenario information, background image information and character image information as the basic information for generating the animation. The matching module matches the corresponding two-dimensional character model in the storage unit based on the character image information, establishing the precondition for the worker's control. The prompt generation module generates prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information, assisting the worker in generating the two-dimensional animation. The receiving module receives real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model, and the animation generation module generates the two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information. Setting the prompt action information and the prompt line information reduces misoperation while the worker generates the two-dimensional animation, which reduces how often the two-dimensional animation must be edited and modified and effectively improves production efficiency.
In another possible implementation manner, the prompt generation module is specifically configured to, when generating the prompt action information and the prompt line information of the two-dimensional character model based on the scenario information and the background image information:
extracting features of the background image information to determine the size of an object in the background image information;
determining a character size of the two-dimensional character model in the background image information based on the scenario information;
generating the prompting action information based on the scenario information, the size of the object and the size of the person;
and generating the prompt line information based on the scenario information.
In another possible implementation manner, the prompt generation module is specifically configured to, when generating the prompt action information based on the scenario information, the size of the object, and the person size:
judging whether the two-dimensional character model needs to generate actions or not based on the scenario information;
if so, determining the prompting action information based on the scenario information, the size of the object and the size of the person, wherein the prompting action information comprises action content and action sequence;
and if not, determining the prompt action information to be "keep still".
In another possible implementation manner, the animation generation module is specifically configured to, when generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information:
generating the two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if they are different, outputting a prompt question, wherein the prompt question is used for asking whether the real-time action information and the real-time voice information need to be input again;
receiving a selection result corresponding to the prompt question and determining the selection result, wherein the selection result is either yes or no;
if they are the same, or "no" is received, judging based on the scenario information whether the generation of the two-dimensional animation is completed;
and if the generation is not completed, or "yes" is received, cyclically executing the steps of: receiving the real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model; generating the two-dimensional animation based on the real-time action information and the real-time voice information; judging whether the prompt action information is the same as the real-time action information; if not, outputting the prompt question and receiving the selection result corresponding to the prompt question; and if they are the same, or "no" is received, judging based on the scenario information whether the two-dimensional animation is completely generated; until the generation is completed.
In another possible implementation manner, the animation generation module is specifically configured to, when generating the two-dimensional animation based on the real-time action information and the real-time voice information:
controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
the speech and the action are combined.
In another possible implementation manner, the animation generation module is specifically configured to, when controlling the two-dimensional character model to generate a corresponding action based on the real-time action information:
matching action coordinates of the two-dimensional character model corresponding to the real-time action information in the storage unit;
determining the action duration of the two-dimensional character model at the action coordinates;
and outputting the action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
In another possible implementation manner, the animation generation module is specifically configured to, when controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information and combining the voice and the action:
matching tone information corresponding to the two-dimensional character model in a storage unit;
generating dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information;
analyzing the real-time voice information and determining an analysis result;
matching the mouth shape corresponding to the two-dimensional character model in a storage unit based on the analysis result;
and combining the mouth shape with the action and outputting the dubbing information at the same time.
In a third aspect, the present application provides an electronic device, which adopts the following technical scheme:
an electronic device, the electronic device comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of generating a two-dimensional animation according to any one of the possible implementations of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium, which adopts the following technical scheme:
a computer-readable storage medium storing a computer program that can be loaded and executed by a processor to implement the method of generating a two-dimensional animation shown in any one of the possible implementations of the first aspect.
In summary, the present application includes at least one of the following beneficial technical effects:
1. Scenario information, background image information and character image information are acquired as the basic information for generating the animation. The corresponding two-dimensional character model is matched in the storage unit based on the character image information, establishing the precondition for the worker's control. Prompt action information and prompt line information of the two-dimensional character model are generated based on the scenario information and the background image information, assisting the worker in generating the two-dimensional animation. Real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model are received, and the two-dimensional animation is generated based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information. Setting the prompt action information and the prompt line information reduces misoperation while the worker generates the two-dimensional animation, reduces how often the animation must be edited and modified, and effectively improves production efficiency;
2. Feature extraction is performed on the background image information to determine the sizes of the objects in it, and the character size of the two-dimensional character model in the background image information is determined based on the scenario information, so that the character size corresponds to the object sizes and the picture looks more coordinated. Prompt action information is generated based on the scenario information, the object size and the character size, so that the worker can input real-time action information following the prompts; the prompt line information is generated based on the scenario information, reducing the chance that the worker forgets the lines while dubbing, reducing later modification and editing, and making generation of the two-dimensional animation more efficient.
Drawings
FIG. 1 is a flow chart of a method of generating a two-dimensional animation according to an embodiment of the application.
FIG. 2 is a schematic structural diagram of an apparatus for generating a two-dimensional animation according to an embodiment of the application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
FIG. 4 is a flow diagram of generating a two-dimensional animation.
Detailed Description
The application is described in further detail below with reference to fig. 1-4.
Modifications of the embodiments which do not creatively contribute to the application may be made by those skilled in the art after reading the present specification, but are protected by patent laws within the scope of the claims of the present application.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In addition, the term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. Unless otherwise specified, the character "/" herein generally indicates an "or" relationship between the associated objects.
Embodiments of the application are described in further detail below with reference to the drawings.
The embodiment of the application provides a method for generating a two-dimensional animation, executed by an electronic device. The electronic device may be a server or a terminal device; the server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing cloud computing services. The terminal device may be, but is not limited to, a smartphone, a tablet computer, a notebook computer or a desktop computer; the terminal device and the server may be connected directly or indirectly through wired or wireless communication, which is not limited here. As shown in fig. 1, the method includes step S101, step S102, step S103, step S104 and step S105, wherein:
Step S101, scenario information, background image information, and character image information are acquired.
For the embodiment of the application, the electronic device can obtain the information from a database or a cloud server. The scenario information, background image information and character image information may be input in advance by a worker. For example:
The electronic device acquires the scenario information of the two-dimensional animation from the database: a story about a lamb playing in the snow. The background image information shows the house where the lamb lives, a stone in front of the house and a snowman piled up by the lamb; the character image information is the original artwork of the lamb.
Step S102, matching the corresponding two-dimensional character model in the storage unit based on the character image information.
For the embodiment of the application, the storage unit may be a database of the electronic device or a removable storage unit. The two-dimensional character model is a two-dimensional skeletal animation of the character, built in advance by a worker from the original artwork; the character's motions and expressions can be changed by changing the parameters of the skeletal animation. Continuing the example of step S101:
The electronic device obtains the two-dimensional character model of the lamb from the database; the model includes parameters for the lamb's various actions and expressions.
And step S103, generating prompting action information and prompting speech information of the two-dimensional character model based on the scenario information and the background image information.
For the embodiment of the application, the electronic device generates prompt action information and prompt line information of the two-dimensional character model based on the scenario information and the background image information. The prompt action information prompts the worker to input the corresponding real-time action information. The prompt line information consists of the characters' lines from the scenario information and can be output in text form on the display screen of the electronic device; the worker can input real-time voice information based on the prompt lines, making generation of the two-dimensional animation more efficient.
Step S104, receiving real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user.
For the embodiment of the application, the electronic device receives the real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model. The real-time action information may be a signal sent to the electronic device by the worker through a keyboard or other equipment, and the real-time voice information may be speech uttered by the worker and collected by the electronic device. For example:
The electronic device receives, via the keyboard, a signal from the user controlling the character to walk to the right, and collects a spoken line about playing in the snow as the real-time voice information.
Step S105, generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information.
For the embodiment of the application, the electronic device generates the two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt line information, so that the generated two-dimensional animation looks better and stays closer to the scenario.
In one possible implementation manner of the embodiment of the present application, the step S103 generates the prompting action information and the prompting speech information of the two-dimensional character model based on the scenario information and the background image information, which specifically includes a step S1031 (not shown in the figure), a step S1032 (not shown in the figure), a step S1033 (not shown in the figure), and a step S1034 (not shown in the figure), where,
in step S1031, feature extraction is performed on the background image information to determine the size of the object in the background image information.
For the embodiment of the application, the electronic device performs feature extraction on the background image information to determine the sizes of the objects in it. It filters the background image by gray value and extracts the boundary contour of every object. After extracting a boundary contour, the electronic device establishes a rectangular coordinate system with any point on the contour as the origin, extracts the highest and lowest points of the contour in the vertical direction to obtain the object's height, and extracts the leftmost and rightmost boundary points in the horizontal direction to obtain the object's width; height and width together give the object's size. Continuing the example of step S101:
The electronic device extracts the boundary contour of the house where the lamb lives: the house is 8 cm high and 6 cm wide.
The electronic device extracts the boundary contour of the stone in front of the house: the stone is 1 cm high and 1 cm wide.
The electronic device extracts the boundary contour of the snowman the lamb piled up: the snowman is 3 cm high and 2.5 cm wide.
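The contour-and-extremes procedure described above can be sketched with standard image-processing calls; the gray-value threshold and the pixel-to-centimetre scale below are illustrative assumptions, not values from the patent.

```python
# Sketch of step S1031: filter the background by gray value, extract each
# object's boundary contour, and read height/width from its bounding box
# (the extreme points in the vertical and horizontal directions).
import cv2

def object_sizes(background_path: str, px_per_cm: float = 10.0):
    gray = cv2.imread(background_path, cv2.IMREAD_GRAYSCALE)
    # Color filtering by gray value: separate objects from the background.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    sizes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)        # extreme points of the contour
        sizes.append((w / px_per_cm, h / px_per_cm))  # (width_cm, height_cm)
    return sizes
```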
Step S1032, determining the character size of the two-dimensional character model in the background image information based on the scenario information.
For the embodiment of the present application, the electronic device determines the character size of the two-dimensional character model in the background image information based on the scenario information, which includes introductions of the characters and of other objects. Taking step S1031 as an example:
The character introduction in the scenario information says that the lamb is 1 meter tall and the house it lives in is 1.5 meters tall, so the electronic device calculates the ratio of the house's size to the lamb's size as 1.5:1. The electronic device then determines the lamb's size in the background image information as:
8 ÷ 1.5 ≈ 5.3 cm.
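Spelled out in code, with the values taken from the example above:

```python
# Character-size calculation from step S1032; values follow the worked example.
house_cm_in_image = 8.0            # house height extracted from the background
house_to_lamb_ratio = 1.5 / 1.0    # scenario: house is 1.5 m, lamb is 1 m
lamb_cm_in_image = house_cm_in_image / house_to_lamb_ratio  # about 5.3 cm
```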
Step S1033, generating the prompt action information based on the scenario information, the size of the object and the character size.
For the embodiment of the application, the electronic device generates the prompt action information based on the scenario information, the object size and the character size, because characters must perform different actions relative to objects of different sizes in the scenario; to reduce the worker's confusion about the actions, the electronic device outputs the relevant prompt action information.
Step S1034, generating the prompt line information based on the scenario information.
For the embodiment of the application, the electronic device generates the prompt line information based on the scenario information, which includes each character's lines. For example:
The scenario information includes the lamb's line: "It is snowing today, let's go and play in the snow!" The electronic device can output "It is snowing today, let's go and play in the snow!" on the screen, and can also send the prompt line information to the user's terminal device, so that the worker can input real-time voice information based on the prompt lines, reducing the chance that the worker forgets the lines.
In one possible implementation manner of the embodiment of the present application, the step S1033 generates the prompting action information based on the scenario information, the size of the object and the figure size, and specifically includes a step S1033a (not shown in the figure), a step S1033b (not shown in the figure), and a step S1033c (not shown in the figure), where,
step S1033a, determining whether the two-dimensional character model needs to generate an action based on the scenario information.
For the embodiment of the application, the electronic device determines whether the two-dimensional character model needs to generate an action based on the scenario information. For example:
If the scenario information contains words such as "jump", the electronic device captures the part of speech and semantics of the line using natural language processing and determines that the two-dimensional character model needs to generate an action. The electronic device may also determine whether the two-dimensional character model needs to generate an action in other ways, which is not limited here.
Step S1033b, if an action is needed, determining the prompt action information based on the scenario information, the size of the object and the character size.
The prompting action information comprises action content and action sequence.
For the embodiment of the application, if the electronic device judges that the two-dimensional character model needs to generate an action, it determines the prompt action information based on the scenario information, the object size and the character size. Taking step S1032 and step S1033a as an example:
The electronic device determines, based on the scenario information, that the lamb needs to walk out of the house, pass a stone, go to the snowman, squat down and stroke it. It therefore determines that the lamb's first action is walking straight forward, the second turning left, the third walking straight left, the fourth jumping, the fifth walking straight left, the sixth squatting down and the seventh stroking. Suppose the user inputs real-time action information by pressing keys: S+D walks straight forward, L turns left, L+D walks straight left, J jumps, D squats down and F strokes.
The prompt action information may be the text "S+D walk straight forward, L turn left, L+D walk straight left, J jump, L+D walk straight left, D squat down, F stroke" output on the display screen of the electronic device, or text or voice information sent to the worker's terminal device; this is not limited here. A key-to-action table backing such prompts is sketched below.
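One way such prompts could be backed in code is a plain key-chord table; the bindings follow the example above, while the table form and the function are assumptions.

```python
# Key-chord to action table from the example; resolves a pressed chord to
# the real-time action name the prompt refers to.
KEY_ACTIONS = {
    frozenset({"s", "d"}): "walk_forward",
    frozenset({"l"}):      "turn_left",
    frozenset({"l", "d"}): "walk_left",
    frozenset({"j"}):      "jump",
    frozenset({"d"}):      "crouch",
    frozenset({"f"}):      "stroke",
}

def action_for(pressed_keys):
    """Resolve the currently pressed keys to a real-time action name, or None."""
    return KEY_ACTIONS.get(frozenset(pressed_keys))

assert action_for({"s", "d"}) == "walk_forward"
```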
The electronic device can set a first preset height ratio and judge how many jumps the lamb needs to pass the stone based on the lamb's jump distance. If the width of the stone is smaller than the lamb's single-jump distance and the height ratio is larger than the first preset height ratio, it determines that the lamb can pass after one jump; if the width of the stone is smaller than twice the jump distance and the height ratio is smaller than the first preset height ratio, it determines that the lamb can pass after two jumps.
Suppose the electronic device judges that the stone in front of the house in the background image information is 1 cm high and 1 cm wide, the lamb is 5.3 cm tall, and the lamb's single-jump distance in the background image information is 1.5 cm. With a first preset height ratio of 1:8, the height ratio of the stone to the lamb is 1:5.3, which is larger than the first preset ratio, and the width of the stone is smaller than the single-jump distance, so the lamb can clear the stone with one jump.
In another implementation, the electronic device may set a second preset height ratio smaller than the first; if the height ratio is smaller than the second preset ratio, the electronic device determines that the lamb cannot pass by jumping and needs to go around.
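The two-threshold rule reads more clearly as a small decision function; the 1:8 threshold and the dimensions come from the worked example, while the function itself and its return convention are assumptions.

```python
# Hedged sketch of the obstacle-crossing decision: compare the obstacle-to-
# character height ratio with the first preset ratio and the obstacle width
# with the character's jump distance.
def jump_plan(obstacle_w_cm, obstacle_h_cm, char_h_cm, jump_dist_cm,
              first_preset_ratio=1 / 8):
    """Return the number of jumps needed, or None if the character must go around."""
    height_ratio = obstacle_h_cm / char_h_cm
    if height_ratio > first_preset_ratio and obstacle_w_cm < jump_dist_cm:
        return 1                      # clears the obstacle in one jump
    if height_ratio <= first_preset_ratio and obstacle_w_cm < 2 * jump_dist_cm:
        return 2                      # lower obstacle, wider span: two jumps
    return None                       # cannot pass by jumping; go around

# Worked example: stone 1 cm x 1 cm, lamb 5.3 cm tall, jump distance 1.5 cm.
assert jump_plan(1.0, 1.0, 5.3, 1.5) == 1
```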
In step S1033c, if no action is required, the prompt action information is determined to be "keep still".
For the embodiment of the application, if the electronic device judges that the two-dimensional character model does not need to generate an action, it determines the prompt action information to be "keep still". Taking step S1033b as an example:
After the electronic device determines, based on the scenario information, that no further action is needed once the lamb has squatted down and stroked the snowman it piled up, the prompt action information it outputs is "keep still", prompting the worker not to input further real-time action information. The prompt action information may be text output on the display screen of the electronic device, or text or voice information sent to the worker's terminal device; this is not limited here.
One possible implementation manner of the embodiment of the present application, in step S105, a two-dimensional animation is generated based on scenario information, real-time action information, real-time voice information, prompt action information, and prompt line information, which specifically includes step S1051 (not shown in the figure), step S1052 (not shown in the figure), step S1053 (not shown in the figure), step S1054 (not shown in the figure), step S1055 (not shown in the figure), and step S1056 (not shown in the figure), where,
Step S1051, a two-dimensional animation is generated based on the real-time action information and the real-time voice information.
For the embodiment of the application, the electronic device receives the real-time action information and real-time voice information and generates the two-dimensional animation from them, completing the production of the two-dimensional animation.
Step S1052, determine whether the prompt action information is the same as the real-time action information.
For the embodiment of the application, the worker can input real-time action information for a left turn by pressing the L key, and the electronic device judges, based on the received input, whether the real-time action information is the same as the prompt action information. Taking step S1033b as an example:
Suppose the electronic device receives an L key press from the worker; since the action corresponding to the L key is a left turn, the electronic device finds that the real-time action information matches the L-key entry in the prompt action information. Suppose instead the electronic device receives a D key press; since the action corresponding to the D key is squatting, the electronic device judges that the real-time action information differs from the prompted L key.
In step S1053, if they are not the same, a prompt question is output.
Wherein the prompt question is used to ask whether real-time action information and real-time voice information need to be re-input.
For the embodiment of the application, if the electronic device judges that the real-time action information differs from the prompt action information, it outputs a prompt question asking whether the real-time action information and real-time voice information need to be input again. The prompt may be text information sent to the worker's terminal device asking whether to re-input, voice information asking the same question played through a speaker device, or a prompt in another form.
Step S1054, receive the selection result corresponding to the prompt question, and determine the selection result.
Wherein the selection result includes yes and no.
For the embodiment of the application, the electronic device receives the selection result corresponding to the prompt question sent by the user and determines the result; the user may select yes or no by touching the content on a touch screen or by pressing a key on the keyboard.
Step S1055, if they are the same, or "no" is received, judging based on the scenario information whether the generation of the two-dimensional animation is completed.
For the embodiment of the application, if the electronic device judges that the real-time action information is the same as the prompt action information, or if the selection result to the prompt question indicates that re-input is not needed, the electronic device judges based on the scenario information whether the generation of the two-dimensional animation is completed; the scenario information may be divided into several chapters, and only the first chapter may have been generated so far while other chapters remain. Taking step S1033b as an example:
After the electronic device completes the scene in which the lamb walks out of the house, passes the stone, goes to the snowman and squats down to stroke it, the scenario information may still contain the lamb running to the rabbit to play and teaching the rabbit to pile a snowman; the electronic device therefore judges that the two-dimensional animation is not yet complete.
Step S1056, if the generation is not completed, or "yes" is received, cyclically executing the steps of: receiving the real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model; generating the two-dimensional animation based on the real-time action information and the real-time voice information; judging whether the prompt action information is the same as the real-time action information; if not, outputting the prompt question and receiving the selection result corresponding to the prompt question; and if they are the same, or "no" is received, judging based on the scenario information whether the two-dimensional animation is completely generated; until the generation is completed.
For the embodiment of the present application, as shown in fig. 4, if the electronic device determines that the two-dimensional animation is not completely generated, or if the selection result to the prompt question indicates that re-input is needed, the electronic device continues to receive the real-time action information and real-time voice information, input by the user, corresponding to the two-dimensional character model, generates the two-dimensional animation from them again, and judges whether the prompt action information is the same as the real-time action information; if not, it outputs the prompt question and receives the corresponding selection result, and if they are the same, or "no" is received, it judges based on the scenario information whether the two-dimensional animation is completely generated.
As long as the electronic device judges that the two-dimensional animation is not completely generated, it keeps receiving the real-time action information and real-time voice information input by the user and performing this series of generation operations until the two-dimensional animation is complete. The generated two-dimensional animation is therefore better, later editing and modification are reduced, and production efficiency is improved. A control-flow sketch of this loop follows.
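A minimal control-flow sketch of steps S1051 to S1056, assuming placeholder callables for the user interaction; none of these names are an API from the patent.

```python
# Loop sketch: render a clip per real-time input, compare the action with its
# prompt, offer a re-take on mismatch, repeat until every segment is covered.
def render_clip(action, voice):
    """Placeholder for step S1051: combine one action with one utterance."""
    return {"action": action, "voice": voice}

def generate_animation(prompt_actions, read_input, ask_retake):
    """prompt_actions: expected action per segment; read_input()/ask_retake()
    stand in for the user interaction of steps S104 and S1053-S1054."""
    clips = []
    while len(clips) < len(prompt_actions):          # S1055: generation complete?
        action, voice = read_input()                 # S104: real-time input
        if action != prompt_actions[len(clips)]:     # S1052: compare with prompt
            if ask_retake():                         # S1053-S1054: re-input chosen
                continue                             # discard and re-record
        clips.append(render_clip(action, voice))     # S1051/S1056
    return clips
```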
In one possible implementation manner of the embodiment of the present application, the step S1051 of generating the two-dimensional animation based on the real-time action information and the real-time voice information specifically includes step S10511 (not shown in the figure), step S10512 (not shown in the figure) and step S10513 (not shown in the figure), wherein,
In step S10511, the two-dimensional character model is controlled to generate a corresponding action based on the real-time action information.
For the embodiment of the application, the electronic device controls the two-dimensional character model to generate corresponding actions based on the real-time action information, such as basic limb actions like walking, running, striding and jumping, or facial expressions such as joy and delight.
Step S10512, the two-dimensional character model is controlled to generate corresponding voices based on the real-time voice information.
For the embodiment of the application, the electronic device controls the two-dimensional character model to generate corresponding voices, such as "hello", "goodbye" and "thank you", based on the real-time voice information input by the user.
Step S10513, combine the voice and the motion.
For the embodiment of the present application, the electronic device combines the voice and the action to generate a complete two-dimensional animation. For example:
The electronic device combines the jumping action with the voice "hello" to produce the animation effect of the character saying "hello" while jumping.
In one possible implementation manner of the embodiment of the present application, step S10511 includes step S10511a (not shown in the figure), step S10511b (not shown in the figure), and step S10511c (not shown in the figure) when the two-dimensional character model is controlled to generate the corresponding motion based on the real-time motion information, wherein,
In step S10511a, the action coordinates of the two-dimensional character model corresponding to the real-time action information are matched in the storage unit.
For the embodiment of the application, the electronic device matches, in the storage unit, the action coordinates of the two-dimensional character model corresponding to the real-time action information. The electronic device establishes a spatial rectangular coordinate system with the center of the character as the origin and represents the coordinates of each of the character's limbs. For example:
The right foot is at (-0.77, 1.29, 0.15) and the left foot at (-0.77, 1.29, 0.15). When the electronic device receives real-time action information indicating a jump, it matches the post-jump coordinates of the left and right feet in the storage unit; suppose the matched coordinates after the jump are (-0.77, 2.5, 1.15) for the right foot and (-0.77, 2.5, 1.15) for the left foot.
In step S10511b, the duration of the motion of the two-dimensional character model at the motion coordinates is determined.
For the embodiment of the present application, the electronic device determines the action duration of the two-dimensional character model at the action coordinates. Taking step S10511a as an example:
The electronic device knows from the scenario information that the character only performs a simple jump with no other complex action; it determines that a conventional jump lasts 0.5 seconds, so the two-dimensional character model holds the jump coordinates for 0.5 seconds and then restores the coordinates it had before the jump.
Step S10511c, outputting an action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
For the embodiment of the present application, taking step S10511b as an example, the electronic device outputs the action of the two-dimensional character model going from rest into the jump; after the jump has lasted 0.5 seconds, the character returns to rest.
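Putting steps S10511a to S10511c together as a sketch, with the coordinates and the 0.5-second duration taken from the example; the store layout and the pose-dictionary convention are assumptions.

```python
# Look up the limb coordinates stored for an action, hold them for the
# action's duration, then restore the rest pose (steps S10511a-c).
import time

ACTION_STORE = {
    # action -> (limb coordinates in the character-centred frame, duration in s)
    "jump": ({"right_foot": (-0.77, 2.5, 1.15),
              "left_foot":  (-0.77, 2.5, 1.15)}, 0.5),
}
REST_POSE = {"right_foot": (-0.77, 1.29, 0.15),
             "left_foot":  (-0.77, 1.29, 0.15)}

def play_action(model_pose: dict, action: str) -> None:
    coords, duration = ACTION_STORE[action]   # S10511a: match action coordinates
    model_pose.update(coords)                 # move limbs to the action pose
    time.sleep(duration)                      # S10511b: hold for the duration
    model_pose.update(REST_POSE)              # S10511c: restore the rest pose
```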
In one possible implementation manner of the embodiment of the present application, the step S10512 controls the two-dimensional character model to generate the corresponding voice based on the real-time voice information, and the step S10513 combines the voice and the action, which specifically includes a step S10512a (not shown in the figure), a step S10512b (not shown in the figure), a step S10513a (not shown in the figure), a step S10513b (not shown in the figure), and a step S10513c (not shown in the figure), where,
In step S10512a, the tone information corresponding to the two-dimensional character model is matched in the storage unit.
For this embodiment of the application, the electronic device matches, in the storage unit, the tone information corresponding to the two-dimensional character model. The tone information of each character in the storage unit is named after the character's name or another special identifier, so the electronic device obtains a character's tone information by matching that name. For example, if the tone information is named after the character, the electronic device matches the tone information of the sheep character through the name 'sheep'.
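A sketch of step S10512a under the same assumptions: tone information stored per character and retrieved by the character's name. The `Timbre` fields are invented for illustration; the patent specifies only that tone information is keyed by the character's name or another special identifier.

```python
from dataclasses import dataclass

@dataclass
class Timbre:
    pitch_shift: float  # semitones relative to a neutral voice (illustrative)
    speed: float        # speaking-rate multiplier (illustrative)

# Hypothetical storage unit keyed by character name, as in the 'sheep' example.
TIMBRE_STORE = {"sheep": Timbre(pitch_shift=4.0, speed=1.1)}

def match_timbre(character_name: str) -> Timbre:
    """Match a character's tone information by its name."""
    return TIMBRE_STORE[character_name]
```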
In step S10512b, dubbing information corresponding to the two-dimensional character model is generated based on the real-time voice information and the tone information.
For this embodiment of the application, the electronic device generates the dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information. Taking step S10512a as an example:
when the electronic device receives the real-time voice information 'hello' and has matched the sheep's tone information, it outputs 'hello' rendered in the sheep's tone information as the dubbing for the sheep.
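Step S10512b could then combine the received text with the matched tone parameters. The sketch below is self-contained; in practice this is where a text-to-speech or voice-conversion engine would be invoked, and `render_dubbing` is only a hypothetical stand-in for that call.

```python
def render_dubbing(text: str, pitch_shift: float, speed: float) -> dict:
    """Combine real-time voice text with tone parameters into a dubbing record.
    A real system would call a TTS / voice-conversion engine here."""
    return {"text": text, "pitch_shift": pitch_shift, "speed": speed}

# 'hello' rendered with the sheep's tone parameters becomes the sheep's dubbing.
dubbing = render_dubbing("hello", pitch_shift=4.0, speed=1.1)
```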
In step S10513a, the real-time voice information is analyzed and the analysis result is determined.
For this embodiment of the application, the electronic device receives the real-time voice and performs phoneme analysis on it. For example:
suppose the electronic device receives the real-time voice information 'thank you so much' ('tai xie xie ni le'). The electronic device performs phoneme analysis on this sentence, and the analysis result is that the sentence contains 10 phonemes: t, ai, x, ie, x, ie, n, i, l, e.
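Step S10513a can be sketched as a toy phoneme analysis. Real systems use a grapheme-to-phoneme model or forced alignment; here a hypothetical pronunciation lexicon splits each pinyin syllable of 'tai xie xie ni le' into its phonemes, yielding the 10 phonemes of the example.

```python
# Hypothetical pronunciation lexicon: pinyin syllable -> its phonemes.
LEXICON = {
    "tai": ["t", "ai"],
    "xie": ["x", "ie"],
    "ni": ["n", "i"],
    "le": ["l", "e"],
}

def analyze_phonemes(syllables: list) -> list:
    """Split each syllable of the real-time voice into phonemes, in order."""
    phonemes = []
    for syllable in syllables:
        phonemes.extend(LEXICON[syllable])
    return phonemes

result = analyze_phonemes(["tai", "xie", "xie", "ni", "le"])
print(len(result), result)
# 10 ['t', 'ai', 'x', 'ie', 'x', 'ie', 'n', 'i', 'l', 'e']
```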
Step S10513b, matching the mouth shape corresponding to the two-dimensional character model in the storage unit based on the analysis result.
For this embodiment of the application, the electronic device matches, in the storage unit, the mouth shapes corresponding to the two-dimensional character based on the analysis result. Taking step S10513a as an example:
each phoneme has corresponding mouth-shape information, so the electronic device matches the mouth shapes corresponding to the 10 phonemes in the storage unit, and then generates the complete mouth-shape information for the sentence 'thank you so much' based on the order of the phonemes in the original real-time voice information.
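Step S10513b then reduces to an ordered table lookup. The viseme labels below are hypothetical placeholders for whatever mouth-shape assets the storage unit actually holds.

```python
# Hypothetical phoneme -> mouth-shape (viseme) table in the storage unit.
MOUTH_SHAPES = {
    "t": "closed-tap", "ai": "wide-open", "x": "narrow", "ie": "spread",
    "n": "closed", "i": "spread", "l": "tongue-up", "e": "half-open",
}

def match_mouth_shapes(phonemes: list) -> list:
    """Map each phoneme to its mouth shape, preserving the spoken order."""
    return [MOUTH_SHAPES[p] for p in phonemes]

shapes = match_mouth_shapes(["t", "ai", "x", "ie", "x", "ie", "n", "i", "l", "e"])
```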
Step S10513c, combining the mouth shape with the action and outputting the dubbing information at the same time.
For this embodiment of the application, the electronic device combines the mouth-shape information with the action while outputting the dubbing information. For example:
the electronic device combines the complete mouth-shape information of the sentence 'thank you so much' with the character's jumping action, and at the same time controls the character to emit the dubbing of 'thank you so much' in the corresponding tone information, generating a complete animation that combines voice, mouth shape, and action.
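Tying the three tracks together, step S10513c might look like the sketch below: the mouth-shape sequence is spread evenly over the action duration and attached to the dubbing record. Even spacing is an assumption for illustration; a production system would align each mouth shape to its phoneme's timing in the synthesized audio.

```python
def combine(mouth_shapes: list, action_keyframes: list, dubbing: dict,
            duration: float = 0.5) -> dict:
    """Schedule mouth shapes across the action duration and attach the dubbing,
    so that voice, mouth shape, and body action play back together."""
    step = duration / max(len(mouth_shapes), 1)
    mouth_track = [(i * step, shape) for i, shape in enumerate(mouth_shapes)]
    return {"mouth": mouth_track, "action": action_keyframes, "dubbing": dubbing}
```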
The above embodiments describe the method for generating a two-dimensional animation from the viewpoint of the method flow; the following embodiments describe the apparatus for generating a two-dimensional animation from the viewpoint of virtual modules or virtual units, as detailed below.
An embodiment of the present application provides an apparatus 20 for generating a two-dimensional animation, as shown in fig. 2, where the apparatus 20 for generating a two-dimensional animation may specifically include:
an acquisition module 201, configured to acquire scenario information, background image information, and character image information;
a matching module 202 for matching the corresponding two-dimensional character model in the storage unit based on the character image information;
a prompt generation module 203, configured to generate prompt action information and prompt speech information of the two-dimensional character model based on scenario information and background image information;
the receiving module 204 is configured to receive real-time action information and real-time voice information corresponding to the two-dimensional character model input by a user;
the animation generation module 205 is configured to generate a two-dimensional animation based on scenario information, real-time action information, real-time voice information, prompt action information, and prompt speech information.
With the above technical scheme, the acquisition module 201 acquires the scenario information, background image information, and character image information, obtaining the basic information needed to generate the animation. The matching module 202 matches the corresponding two-dimensional character model in the storage unit based on the character image information, establishing the precondition for the operator's work. The prompt generation module 203 generates the prompt action information and prompt speech information of the two-dimensional character model based on the scenario information and background image information, assisting the operator in generating the two-dimensional animation. The receiving module 204 receives the real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user, and the animation generation module 205 generates the two-dimensional animation based on the scenario information, real-time action information, real-time voice information, prompt action information, and prompt speech information. By providing the prompt action information and prompt speech information, misoperation is reduced while the operator generates the two-dimensional animation, so the two-dimensional animation needs to be edited and modified less often, which effectively improves the efficiency of producing the two-dimensional animation.
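One way to read this module split, sketched as a skeleton of plain Python classes; the method names mirror the modules' responsibilities, and the bodies are stubs because the patent describes behavior, not code.

```python
class TwoDimensionalAnimationApparatus:
    """Skeleton mirroring apparatus 20 and its five modules (illustrative)."""

    def acquire(self):
        """Acquisition module 201: get scenario, background, character info."""

    def match_model(self, character_image):
        """Matching module 202: match the 2D character model in storage."""

    def generate_prompts(self, scenario, background):
        """Prompt generation module 203: build prompt action/speech info."""

    def receive(self):
        """Receiving module 204: take real-time action and voice input."""

    def generate_animation(self, scenario, action, voice, prompt_action, prompt_speech):
        """Animation generation module 205: combine all five inputs."""
```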
In one possible implementation manner of the embodiment of the present application, when generating the prompting action information and the prompting speech information of the two-dimensional character model based on the scenario information and the background image information, the prompting generation module 203 is specifically configured to:
extracting features of the background image information to determine the size of an object in the background image information;
determining a character size of the two-dimensional character model in the background image information based on the scenario information;
generating prompting action information based on the scenario information, the size of the object and the size of the person;
and generating prompting speech information based on the scenario information.
In one possible implementation manner of the embodiment of the present application, when generating the prompt action information based on scenario information, the size of the object and the figure size, the prompt generation module 203 is specifically configured to:
judging whether the two-dimensional character model needs to generate actions or not based on the scenario information;
if so, determining prompt action information based on the scenario information, the size of the object and the size of the person, wherein the prompt action information comprises action content and action sequence;
if not, determining that the prompt action information is to remain still.
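The decision just described can be sketched as a single function. The scenario representation (a dict with an 'actions' list) and the returned fields are assumptions for illustration only.

```python
def generate_prompt_action(scenario: dict, object_size: float,
                           character_size: float) -> dict:
    """Decide whether the model must act, and build the prompt action info."""
    actions = scenario.get("actions", [])
    if not actions:                  # the scenario requires no action
        return {"keep_still": True}
    return {
        "keep_still": False,
        "content": actions,                     # the action content
        "order": list(range(len(actions))),     # the action sequence
        "scale": character_size / object_size,  # sizes inform the prompt
    }
```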
In one possible implementation manner of the embodiment of the present application, when generating a two-dimensional animation based on scenario information, real-time action information, real-time voice information, prompt action information and prompt speech information, the animation generation module 205 is specifically configured to:
Generating the two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if they are different, outputting a prompt question, wherein the prompt question is used for asking whether the real-time action information and the real-time voice information need to be input again;
receiving a selection result corresponding to the prompt question and determining the selection result, wherein the selection result comprises yes and no;
if they are the same, or if a selection result of no is received, judging whether generation of the two-dimensional animation is complete based on the scenario information;
if generation is not complete, or if a selection result of yes is received, cyclically executing the steps of receiving the real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user; generating the two-dimensional animation based on the real-time action information and the real-time voice information; judging whether the prompt action information is the same as the real-time action information; if not, outputting the prompt question; receiving the selection result corresponding to the prompt question; and, if the same or no is received, judging based on the scenario information whether the two-dimensional animation is complete, until generation is completed.
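The re-entry loop can be written out in plain control flow. The three helpers passed in are hypothetical: one records a take (real-time action and voice, producing a clip), one asks the user the prompt question, and one checks completion against the scenario.

```python
def generate_with_prompts(prompt_action, scenario, record_take, ask_retry, is_complete):
    """Loop until the scenario is fully animated, re-asking for input whenever
    the recorded action differs from the prompt and the user opts to retry."""
    while True:
        action, _voice, clip = record_take()  # real-time action + voice -> animation
        if action != prompt_action:           # compare with the prompt action info
            if ask_retry():                   # prompt question answered "yes"
                continue                      # re-enter the input
        if is_complete(scenario, clip):       # same, or answered "no": check completion
            return clip                       # generation finished
```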
In one possible implementation manner of the embodiment of the present application, the animation generation module 205 is specifically configured to, when generating a two-dimensional animation based on real-time action information and real-time voice information:
Controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
the speech and actions are combined.
In one possible implementation manner of the embodiment of the present application, the animation generation module 205 is specifically configured to, when controlling the two-dimensional character model to generate the corresponding action based on the real-time action information:
matching action coordinates of the two-dimensional character model corresponding to the real-time action information in a storage unit;
determining the action duration of the two-dimensional character model at the action coordinates;
and outputting the action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
In one possible implementation manner of the embodiment of the present application, the animation generation module 205 is specifically configured to, when controlling the two-dimensional character model to generate corresponding voice based on real-time voice information and combining the voice and the action:
matching tone information corresponding to the two-dimensional character model in a storage unit;
generating dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information;
analyzing the real-time voice information and determining an analysis result;
matching a mouth shape corresponding to the two-dimensional character model in a storage unit based on the analysis result;
The mouth shape and the action are combined, and dubbing information is output at the same time.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In an embodiment of the present application, as shown in fig. 3, the electronic device 30 includes a processor 301 and a memory 303, with the processor 301 coupled to the memory 303, for example via a bus 302. Optionally, the electronic device 30 may also include a transceiver 304. Note that in practical applications the number of transceivers 304 is not limited to one, and the structure of the electronic device 30 does not limit the embodiments of the present application.
The processor 301 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 301 may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 302 may include a path that transfers information between the components. The bus 302 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 3, but this does not mean there is only one bus or only one type of bus.
The memory 303 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired application code in the form of instructions or data structures and that can be accessed by a computer.
The memory 303 is used to store application program code for executing the solution of the present application, and its execution is controlled by the processor 301. The processor 301 is configured to execute the application code stored in the memory 303 to implement what is shown in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers; it may also be a server or the like. The electronic device shown in fig. 3 is merely an example and should not limit the functionality and scope of use of the disclosed embodiments.
Embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon which, when run on a computer, causes the computer to perform the corresponding method embodiments described above. Compared with the related art, the electronic device in the embodiments of the present application acquires the scenario information, background image information, and character image information, obtaining the basic information needed to generate the animation. The electronic device matches the corresponding two-dimensional character model in the storage unit based on the character image information, establishing the precondition for the operator's work. The electronic device generates the prompt action information and prompt speech information of the two-dimensional character model based on the scenario information and background image information, assisting the operator in generating the two-dimensional animation. The electronic device receives the real-time action information and real-time voice information corresponding to the two-dimensional character model input by the user, and generates the two-dimensional animation based on the scenario information, real-time action information, real-time voice information, prompt action information, and prompt speech information. By providing the prompt action information and prompt speech information, misoperation is reduced while the operator generates the two-dimensional animation, so the two-dimensional animation needs to be edited and modified less often, which effectively improves the efficiency of producing the two-dimensional animation.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations are also intended to fall within the scope of protection of the present application.

Claims (6)

1. A method of generating a two-dimensional animation comprising:
acquiring scenario information, background image information and character image information;
Matching a corresponding two-dimensional character model in a storage unit based on the character image information;
generating prompting action information and prompting speech information of the two-dimensional character model based on the scenario information and the background image information; the prompting action information is used for prompting to input corresponding real-time action information, and the prompting action information comprises action content and action sequence; the prompting speech information is speech of characters in the scenario information and is used for inputting real-time speech information based on the prompting speech information;
receiving real-time action information and real-time voice information corresponding to the two-dimensional character model input by a user;
generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information;
wherein the generating the prompting action information and the prompting speech information of the two-dimensional character model based on the scenario information and the background image information includes:
extracting features of the background image information to determine the size of an object in the background image information;
determining a character size of the two-dimensional character model in the background image information based on the scenario information;
Generating the prompting action information based on the scenario information, the size of the object and the size of the person;
generating the prompting speech information based on the scenario information;
wherein the generating the hint action information based on the scenario information, the size of the object, and the character size includes:
judging whether the two-dimensional character model needs to generate actions or not based on the scenario information;
if so, determining the prompting action information based on the scenario information, the size of the object and the size of the person;
if not, determining that the prompting action information is to remain still;
the generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information includes:
generating the two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if they are different, outputting a prompt question, wherein the prompt question is used for asking whether the real-time action information and the real-time voice information need to be input again;
receiving a selection result corresponding to the prompt question and determining the selection result, wherein the selection result comprises yes and no;
if they are the same, or if a selection result of no is received, judging whether generation of the two-dimensional animation is complete based on the scenario information;
if generation is not complete, or if a selection result of yes is received, cyclically executing the steps of receiving real-time action information and real-time voice information corresponding to the two-dimensional character model input by a user; generating the two-dimensional animation based on the real-time action information and the real-time voice information; judging whether the prompt action information is identical to the real-time action information; if not, outputting a prompt question; receiving a selection result corresponding to the prompt question; and, if identical or no is received, judging based on the scenario information whether the two-dimensional animation is complete, until generation is completed;
the generating the two-dimensional animation based on the real-time motion information and the real-time voice information includes:
controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
the speech and the action are combined.
2. The method of generating a two-dimensional animation according to claim 1, wherein the controlling the two-dimensional character model to generate corresponding actions based on the real-time action information comprises:
matching action coordinates of the two-dimensional character model corresponding to the real-time action information in the storage unit;
determining the action duration of the two-dimensional character model at the action coordinates;
and outputting the action corresponding to the two-dimensional character model based on the action coordinates and the action duration.
3. The method of generating a two-dimensional animation according to claim 1, wherein the controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information and combining the voice and the action comprises:
matching tone information corresponding to the two-dimensional character model in a storage unit;
generating dubbing information corresponding to the two-dimensional character model based on the real-time voice information and the tone information;
analyzing the real-time voice information and determining an analysis result;
matching the mouth shape corresponding to the two-dimensional character model in a storage unit based on the analysis result;
And combining the mouth shape with the action and outputting the dubbing information at the same time.
4. An apparatus for generating a two-dimensional animation, comprising:
the acquisition module is used for acquiring scenario information, background image information and character image information;
the matching module is used for matching the corresponding two-dimensional character model in the storage unit based on the character image information;
the prompt generation module is used for generating prompt action information and prompt speech information of the two-dimensional character model based on the scenario information and the background image information;
the receiving module is used for receiving real-time action information and real-time voice information corresponding to the two-dimensional character model input by a user;
the animation generation module is used for generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information;
wherein the generating the prompting action information and the prompting speech information of the two-dimensional character model based on the scenario information and the background image information includes:
extracting features of the background image information to determine the size of an object in the background image information;
Determining a character size of the two-dimensional character model in the background image information based on the scenario information;
generating the prompting action information based on the scenario information, the size of the object and the size of the person;
generating the prompting speech information based on the scenario information;
the generating the prompting action information based on the scenario information, the size of the object, and the character size includes:
judging whether the two-dimensional character model needs to generate actions or not based on the scenario information;
if so, determining the prompting action information based on the scenario information, the size of the object and the size of the person, wherein the prompting action information comprises action content and action sequence;
if not, determining that the prompting action information is to remain still;
the generating a two-dimensional animation based on the scenario information, the real-time action information, the real-time voice information, the prompt action information and the prompt speech information includes:
generating the two-dimensional animation based on the real-time action information and the real-time voice information;
judging whether the prompt action information is the same as the real-time action information;
if they are different, outputting a prompt question, wherein the prompt question is used for asking whether the real-time action information and the real-time voice information need to be input again;
receiving a selection result corresponding to the prompt question and determining the selection result, wherein the selection result comprises yes and no;
if they are the same, or if a selection result of no is received, judging whether generation of the two-dimensional animation is complete based on the scenario information;
if generation is not complete, or if a selection result of yes is received, cyclically executing the steps of receiving real-time action information and real-time voice information corresponding to the two-dimensional character model input by a user; generating the two-dimensional animation based on the real-time action information and the real-time voice information; judging whether the prompt action information is identical to the real-time action information; if not, outputting a prompt question; receiving a selection result corresponding to the prompt question; and, if identical or no is received, judging based on the scenario information whether the two-dimensional animation is complete, until generation is completed;
the generating the two-dimensional animation based on the real-time motion information and the real-time voice information includes:
controlling the two-dimensional character model to generate corresponding actions based on the real-time action information;
Controlling the two-dimensional character model to generate corresponding voice based on the real-time voice information;
the speech and the action are combined.
5. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the method of generating a two-dimensional animation according to any one of claims 1-3.
6. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a method of generating a two-dimensional animation according to any of claims 1-3.
CN202210290746.5A 2022-03-23 2022-03-23 Method, device, electronic equipment and medium for generating two-dimensional animation Active CN114693848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210290746.5A CN114693848B (en) 2022-03-23 2022-03-23 Method, device, electronic equipment and medium for generating two-dimensional animation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210290746.5A CN114693848B (en) 2022-03-23 2022-03-23 Method, device, electronic equipment and medium for generating two-dimensional animation

Publications (2)

Publication Number Publication Date
CN114693848A CN114693848A (en) 2022-07-01
CN114693848B true CN114693848B (en) 2023-09-12

Family

ID=82139058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210290746.5A Active CN114693848B (en) 2022-03-23 2022-03-23 Method, device, electronic equipment and medium for generating two-dimensional animation

Country Status (1)

Country Link
CN (1) CN114693848B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001037221A1 (en) * 1999-11-16 2001-05-25 Possibleworlds, Inc. Image manipulation method and system
JP2005038160A (en) * 2003-07-14 2005-02-10 Oki Electric Ind Co Ltd Image generation apparatus, image generating method, and computer readable recording medium
JP2008299493A (en) * 2007-05-30 2008-12-11 Kaoru Sumi Content creation support system and computer program
US7554542B1 (en) * 1999-11-16 2009-06-30 Possible Worlds, Inc. Image manipulation method and system
KR20090126450A (en) * 2008-06-04 2009-12-09 에스케이 텔레콤주식회사 Scenario-based animation service system and method
CN106780673A (en) * 2017-02-13 2017-05-31 杨金强 A kind of animation method and system
EP3176787A1 (en) * 2015-12-01 2017-06-07 Wonderlamp Industries GmbH Method and system for generating an animated movie
CN109333544A (en) * 2018-09-11 2019-02-15 厦门大学 A kind of image exchange method for the marionette performance that spectators participate in
CN113436602A (en) * 2021-06-18 2021-09-24 深圳市火乐科技发展有限公司 Virtual image voice interaction method and device, projection equipment and computer medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017137947A1 (en) * 2016-02-10 2017-08-17 Vats Nitin Producing realistic talking face with expression using images text and voice

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Exploring the Use of Facial Makeup (Lianpu) Art in Animation and Comic Illustration Design; Li Guanyu; Today's Mass Media (《今传媒》); 2017-05-05; full text *

Also Published As

Publication number Publication date
CN114693848A (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN109462776B (en) Video special effect adding method and device, terminal equipment and storage medium
US11810258B2 (en) Marker-based augmented reality authoring tools
US20210029305A1 (en) Method and apparatus for adding a video special effect, terminal device and storage medium
CN109144610B (en) Audio playing method and device, electronic device and computer readable storage medium
US11562520B2 (en) Method and apparatus for controlling avatars based on sound
CN109600559B (en) Video special effect adding method and device, terminal equipment and storage medium
CN112511850A (en) Wheat connecting method, live broadcast display method, device, equipment and storage medium
US20230177755A1 (en) Predicting facial expressions using character motion states
CN113316078B (en) Data processing method and device, computer equipment and storage medium
CN110827789A (en) Music generation method, electronic device and computer-readable storage medium
CN114693848B (en) Method, device, electronic equipment and medium for generating two-dimensional animation
KR20200042143A (en) Dancing room service system and method thereof
CN111104964B (en) Method, equipment and computer storage medium for matching music with action
CN116580707A (en) Method and device for generating action video based on voice
CN112973130B (en) Playback model construction method, device, equipment and storage medium of virtual scene
CN109299378A (en) Methods of exhibiting, device, terminal and the storage medium of search result
CN112015945B (en) Method, system and device for displaying expression image on sound box in real time
JP5318016B2 (en) GAME SYSTEM, GAME SYSTEM CONTROL METHOD, AND PROGRAM
CN110781820B (en) Game character action generating method, game character action generating device, computer device and storage medium
CN111429949B (en) Pitch line generation method, device, equipment and storage medium
KR20230162062A (en) Neural network accompaniment extraction from songs
CN114247143A (en) Digital human interaction method, device, equipment and storage medium based on cloud server
CN114422844A (en) Bullet screen material generation method, bullet screen material recommendation device, bullet screen material recommendation equipment, bullet screen material recommendation medium and bullet screen material recommendation product
KR20220053021A (en) video game overlay
CN110215704A (en) Game open method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240202

Address after: Room 3108, 3rd Floor, Building 35, No. 10 Jiuxianqiao Road, Chaoyang District, Beijing, 100020

Patentee after: Beijing Shrubs Entertainment Cultural Technology Co.,Ltd.

Country or region after: China

Address before: 030082 room 1101, 11 / F, block B, building 1, No. 190, Longxing street, Taiyuan Xuefu Park, Shanxi comprehensive reform demonstration zone, Taiyuan City, Shanxi Province

Patentee before: SHANXI GUANMU CULTURE MEDIUM Co.,Ltd.

Country or region before: China