CN117292026A - Animation generation method and device and electronic equipment - Google Patents

Animation generation method and device and electronic equipment

Info

Publication number
CN117292026A
CN117292026A (application CN202311015881.XA)
Authority
CN
China
Prior art keywords
picture
action
target
animation
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311015881.XA
Other languages
Chinese (zh)
Inventor
李宇城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311015881.XA
Publication of CN117292026A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an animation generation method, an animation generation device and electronic equipment. A plurality of action pictures are acquired in response to a specified operation, where the specified operation includes a drawing operation and/or a picture selection operation; in response to an animation generation instruction, a target object in each action picture is identified, and a skeleton picture corresponding to the action picture is generated according to the action of the target object; a character model of a preset virtual character is bound to the skeleton object to generate a target picture; and a video animation is generated based on the target pictures. In this manner, a user can, as needed, select action pictures or draw simple line drawings, upload a plurality of action pictures, and have skeletons bound to models based on the preset actions of the target objects in the action pictures, generating a video animation that matches those preset actions. This reduces the cost of producing and developing animations, enriches animation content, satisfies users' personalized requirements, enables rapid update iteration, occupies less storage space, and improves device performance.

Description

Animation generation method and device and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an animation generating method, an animation generating device, and an electronic device.
Background
Currently, in some games, players can generate various animations, such as dance animations. In the related art, animations can be generated from the actions in an action library, but because the library contains only a limited variety of actions, the generated animations are not rich enough to support personalized creation by players. Alternatively, a video of a professional dancer can be recorded and the motion captured from it to generate the corresponding animation, but recording imposes many requirements and specifications on the performer's movements, the environment and the lighting; motion capture is often inaccurate, update iteration is slow, and the cost is high. In addition, both the action library and the recorded videos require a large amount of storage space, which degrades device performance.
Disclosure of Invention
In view of the above, the present invention aims to provide an animation generation method, an animation generation apparatus and an electronic device, so as to reduce the cost of producing and developing animations, enrich animation content, satisfy users' personalized requirements, enable rapid update iteration, occupy less storage space, and improve device performance.
In a first aspect, an embodiment of the present invention provides an animation generation method, including: acquiring a plurality of action pictures in response to a specified operation, wherein the specified operation includes a drawing operation and/or a picture selection operation, each action picture includes a target object, and the target object has a preset action; in response to an animation generation instruction, identifying the target object in the action picture, and generating a skeleton picture corresponding to the action picture according to the action of the target object, wherein the skeleton picture includes a skeleton object whose action matches the action of the target object; binding a character model of a preset virtual character to the skeleton object to generate a target picture, wherein the target picture includes the character model and the action of the character model matches the action of the skeleton object; and generating a video animation based on the target pictures, wherein the action of the character model changes continuously in the video animation and at least some of the continuously changing actions match the actions of the target object.
In a second aspect, an embodiment of the present invention provides an animation generation apparatus, including: an action picture acquisition module configured to acquire a plurality of action pictures in response to a specified operation, wherein the specified operation includes a drawing operation and/or a picture selection operation, each action picture includes a target object, and the target object has a preset action; a skeleton picture generation module configured to, in response to an animation generation instruction, identify the target object in the action picture and generate a skeleton picture corresponding to the action picture according to the action of the target object, wherein the skeleton picture includes a skeleton object whose action matches the action of the target object; a target picture generation module configured to bind a character model of a preset virtual character to the skeleton object to generate a target picture, wherein the target picture includes the character model and the action of the character model matches the action of the skeleton object; and a video animation generation module configured to generate a video animation based on the target pictures, wherein the action of the character model changes continuously in the video animation and at least some of the continuously changing actions match the actions of the target object.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the animation generation method of any of the first aspects.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the animation generation method of any of the first aspects.
The embodiment of the invention has the following beneficial effects:
the invention provides an animation generation method, an animation generation device and electronic equipment, in which a plurality of action pictures are acquired in response to a specified operation, where the specified operation includes a drawing operation and/or a picture selection operation; in response to an animation generation instruction, a target object in each action picture is identified, and a skeleton picture corresponding to the action picture is generated according to the action of the target object; a character model of a preset virtual character is bound to the skeleton object to generate a target picture; and a video animation is generated based on the target pictures. In this manner, a user can, as needed, select action pictures or draw simple line drawings, upload a plurality of action pictures, and have skeletons bound to models based on the preset actions of the target objects in the action pictures, generating a video animation that matches those preset actions. This reduces the cost of producing and developing animations, enriches animation content, satisfies users' personalized requirements, enables rapid update iteration, occupies less storage space, and improves device performance.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an animation generation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an action picture according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a skeleton picture according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a target picture according to an embodiment of the present invention;
FIG. 5 is a diagram of a graphical user interface provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of an animation generating apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Currently, in some games, players can generate various animations, such as dance animations. In the related art, animations can be generated from the actions in an action library, but because the library contains only a limited variety of actions, the generated animations are not rich enough to support personalized creation by players. Alternatively, a video of a professional dancer can be recorded and the motion captured from it, but recording imposes many requirements and specifications (for example, the whole body must be recorded, the dancer's clothing cannot be the same color as the environment, and the dancer must move more slowly than at normal speed); motion capture is often inaccurate, update iteration is slow, and the cost is high. Taking traditional dance-capture resources as an example: for thirty minutes of traditional dance material, the cost comprises studio rental, equipment, actors, dancers, data processing and the like, and converting it into game animation takes weeks to months depending on factors such as the complexity of the design, team size, and the technology used to bring it into the game. In addition, both the action library and the recorded videos require a large amount of storage space, which degrades device performance. In view of this, embodiments of the present invention provide an animation generation method, an animation generation apparatus and an electronic device; the technique can be applied to devices such as mobile phones, desktop computers, notebooks and tablet computers.
To facilitate understanding of the present embodiment, an animation generation method disclosed in the embodiment of the present invention is first described in detail. As shown in FIG. 1, the method includes the following steps:
Step S102, acquiring a plurality of action pictures in response to a specified operation; wherein the specified operation includes: a drawing operation and/or a picture selection operation; each action picture includes a target object, and the target object has a preset action;
the drawing operation may be drawing of various styles of drawings, such as drawing of a simple drawing, drawing of a chinese drawing, drawing of a sketch, and the like. Of course, in order to reduce the drawing level of the user, the drawing operation described above is generally drawing a simple stroke. The above-mentioned picture selection operation may be a photographing operation or a selection operation of a currently stored picture. The target object included in the motion picture may be a person in general, or may be an animal (such as a monkey, panda, etc.), or may be an object (such as a football, basketball, hula hoop, etc.). In addition, the target object may be plural. The preset motion of the target object is usually a dance motion, but may be other motions, such as a combat motion, a yoga motion, and the like. The number of the plurality of action pictures is a designated number or multiple of the designated number, wherein the designated number can be 4 or 8, and the designated number can be preset according to actual needs.
Specifically, the user can choose to upload a plurality of action pictures from locally stored pictures, or draw them through a drawing interface provided by the application, in which case a plurality of pictures need to be drawn and then uploaded. The target object in an action picture may be a whole body, a half body (usually the upper half, though possibly the lower half), or a partial close-up (e.g., a close-up of the face or hands).
In actual implementation, drawing and photographing functions are usually provided in the application, along with a function for selecting pictures stored in a gallery. The application may be a game APP, a short-video APP, a live-streaming APP, a video-production APP, or the like.
Step S104, in response to the animation generation instruction, identifying the target object in the action picture, and generating a skeleton picture corresponding to the action picture according to the action of the target object; wherein the skeleton picture includes a skeleton object, and the action of the skeleton object matches the action of the target object;
after a server acquires a plurality of action pictures and receives an animation generation instruction, firstly, a target object in the action picture is identified through a ControlNet plug-in (an AI drawing plug-in) in a Stable Diffusion (a text-to-image generation model based on a potential Diffusion model), if the action picture comprises a plurality of image contents, as shown in fig. 2, one character is arranged in the middle position of the action picture, but some small characters are arranged in the edge region, and only the middle target object is identified at the moment. I.e. the identification range in fig. 2. It should be noted that if the action picture includes a plurality of target objects with similar sizes, for example, two basketball players, two target objects will be identified at this time, and a ball will be identified.
After the target object in the action picture is identified, the skeletal pose of the target object can be extracted according to its skeletal structure, and a skeleton picture can be generated from that pose so that the action of the skeleton object in the skeleton picture matches the action of the target object. Illustratively, FIG. 3 shows the corresponding skeleton pictures generated by identifying the target objects in a plurality of action pictures.
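As an illustration of what a skeleton picture encodes, a pose can be represented as named joints connected by bones. The minimal joint set below is hypothetical; real pose extractors (e.g. OpenPose-style ones) use much richer keypoint sets:

```python
# Hypothetical minimal bone connectivity, for illustration only.
BONES = [("head", "neck"), ("neck", "l_hand"), ("neck", "r_hand"),
         ("neck", "hip"), ("hip", "l_foot"), ("hip", "r_foot")]

def skeleton_segments(joints):
    """Turn extracted joint coordinates (dict: name -> (x, y)) into
    the line segments a skeleton picture would draw, skipping bones
    whose joints were not detected."""
    segs = []
    for a, b in BONES:
        if a in joints and b in joints:
            segs.append((joints[a], joints[b]))
    return segs
```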
Step S106, binding a character model of a preset virtual character to the skeleton object to generate a target picture; the target picture includes the character model, and the action of the character model matches the action of the skeleton object;
the preset virtual character may be a game character, or may be various virtual expressions, virtual special effects, etc. provided in the application. Specifically, data of a role model of a virtual role preset by a user are obtained, file addresses for storing skeleton pictures are input through a control Net plug-in, and target pictures corresponding to a plurality of action pictures are generated in batches. Exemplary, as shown in fig. 4, is a target picture corresponding to the action picture.
Step S108, generating a video animation based on the target pictures; the action of the character model changes continuously in the video animation, and at least some of the continuously changing actions match the actions of the target object.
Specifically, the target pictures can be converted into sequence frames, from which the video animation is generated. If the action of the target object in the action pictures is a dance action, the final video animation is a dance video; if it is a basketball-shooting action, the final video animation is a basketball-shooting video; if it is a yoga action, the final video animation is a yoga video, and so on.
The embodiment of the invention provides an animation generation method in which a plurality of action pictures are acquired in response to a specified operation, where the specified operation includes a drawing operation and/or a picture selection operation; in response to an animation generation instruction, a target object in each action picture is identified, and a skeleton picture corresponding to the action picture is generated according to the action of the target object; a character model of a preset virtual character is bound to the skeleton object to generate a target picture; and a video animation is generated based on the target pictures. In this manner, a user can, as needed, select action pictures or draw simple line drawings, upload a plurality of action pictures, and have skeletons bound to models based on the preset actions of the target objects in the action pictures, generating a video animation that matches those preset actions. This reduces the cost of producing and developing animations, enriches animation content, satisfies users' personalized requirements, enables rapid update iteration, occupies less storage space, and improves device performance.
One possible implementation of the step of acquiring a plurality of action pictures in response to the specified operation is as follows:
A graphical user interface is provided through the terminal device, the graphical user interface including a drawing area and a confirmation control; in response to a drawing operation in the drawing area, the drawing corresponding to the drawing operation is determined; and in response to a trigger operation on the confirmation control, the drawn line drawing is determined to be an action picture and is acquired.
The graphical user interface shown in FIG. 5 includes a drawing area and a confirmation control. The user can draw in the drawing area, and after drawing is completed, clicking the confirmation control lets the application back end obtain the picture; the currently drawn picture can then be displayed in the "+" area below. In actual implementation, the user may click "make a simple drawing" to activate the drawing area before drawing in it.
Another possible embodiment:
A graphical user interface is provided through the terminal device, the graphical user interface including a picture upload control; in response to a trigger operation on the picture upload control, thumbnails of pre-stored action pictures are displayed; and in response to a selection operation on the thumbnails, the plurality of selected action pictures are acquired.
For example, as shown in FIG. 5, the user may click directly on the "+" sign below the drawing area (i.e., the picture upload control); thumbnails of pre-stored action pictures are then displayed, and one or more pictures may be selected.
Another possible embodiment:
A graphical user interface is provided through the terminal device, the graphical user interface including a photographing control; in response to a trigger operation on the photographing control, the photographing function is opened; and in response to a photographing instruction, the captured photo is determined to be an action picture and is acquired.
For example, as shown in FIG. 5, the user may click the "photograph" control to open the camera of the terminal device and take a photo. The target object in the captured photo, which usually performs a preset action, may be the photographing user or another subject. After shooting is completed, the user clicks confirm and the captured photo is uploaded.
In this manner, a video animation meeting the user's personalized requirements can be produced quickly by simply drawing a line drawing, taking a photo or selecting pre-stored photos, without recording video or searching an action library for suitable action pictures; this satisfies the user's personalized requirements and reduces the storage space consumed on the device.
The step of identifying the target object in the action picture and generating the skeleton picture corresponding to the action picture according to the action of the target object may be implemented as follows:
In response to the animation generation instruction, the target object in the action picture is identified, and the picture area containing the target object is determined to be a first action picture; the size and aspect ratio of the first action picture are adjusted to obtain a second action picture; and the pose of the target object in the second action picture is extracted to obtain the skeleton picture of the second action picture.
Because the proportion of the picture occupied by the target object differs between action pictures, the first action pictures determined after identifying the target objects can differ considerably in size, which would affect the subsequent generation of the video animation. To improve the result, after a first action picture is obtained, its size and aspect ratio are generally adjusted according to preset values. For example, the first action picture may be resized to 100×200 with a ratio of 16:19, etc.; the specific values may be preset by the user, or the back end may adjust them automatically according to the actual effect. After adjustment, a second action picture is obtained, and the skeletal pose of the target object in the second action picture is extracted to obtain the skeleton picture of the second action picture.
After the step of binding the character model of the preset virtual character to the skeleton object to generate the target picture, the method may further include the following:
(1) Binding the character skin of the preset virtual character to the character model included in the target picture.
Typically, the virtual character has a default character skin; if the user wants to improve the character's appearance and dance effect, the character skin of the virtual character can be preset. A character skin can be obtained by the user by spending virtual resources, earned through activities, or come with the virtual character itself. Specifically, according to the data of the character skin, the skin is bound to the character model in the target picture, which can be understood intuitively as dressing the character model in a preset outfit. For example, if the action in the action pictures is a dance action and the character skin is a uniform, the final video animation usually presents a style such as male dance or modern dance, whereas if the skin is replaced with a classical-dance costume, the dance finally presented is usually in a classical style.
In this manner, the clothing of the virtual character in the video animation can be preset, further meeting the user's personalized requirements: the user can select any character skin to obtain virtual characters of different styles, and in turn generate video animations of different styles.
(2) Adjusting the action of the character model in the target picture according to a preset action style, so that the adjusted action of the character model matches the preset action style.
The action style generally includes a dance style. Dance styles include classical dance, folk dance, modern dance, contemporary dance and ballet, and each style comprises a number of dance types, such as male dance, flail dance, locking and jazz, or latin dance, pole dance and belly dance. Different action styles typically differ in the corresponding actions. For example, for the same hands-on-hips action, if a male-dance style is preset the virtual character's hips do not twist, whereas in a female-dance style they do. The user can generate video animations with different virtual characters, different character skins and different action styles from the same set of action pictures. In this manner, by presetting an action style, a video animation conforming to that style can be generated, further improving the richness of the video animations and meeting users' personalized requirements.
The step of generating the video animation based on the target pictures includes: converting the target pictures into sequence frames and generating the video animation corresponding to the plurality of action pictures. A picture-to-video script automatically turns the group of target pictures into sequence frames and outputs them in a video format, yielding the video animation.
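Such a picture-to-video script is often just an ffmpeg invocation. This is a sketch under the assumption that the sequence frames are numbered PNG files; the file pattern, frame rate and codec are illustrative, not taken from the patent:

```shell
# Assemble numbered target-picture frames into an H.264 video.
# frame_%04d.png matches frame_0001.png, frame_0002.png, ...
ffmpeg -framerate 24 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4
```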
After the step of converting the target picture into the sequence frame and generating the video animation corresponding to the plurality of action pictures, the method further comprises the following steps:
(1) If the motion of the character model is discontinuous in two consecutive sequence frames, at least one target sequence frame is generated between the two consecutive sequence frames such that the motion of the character model is continuous when the video animation is played.
Considering that the number of action pictures uploaded by a user is limited, if only the uploaded action pictures are used, the generated video animation has few frames, which affects its effect. In particular, when the difference between the actions in two action pictures is large, the video animation suffers from frame skipping and stuttering. It is therefore necessary to identify whether the actions of the character model differ greatly between consecutive sequence frames; if so, a frame-filling operation is performed, that is, at least one target sequence frame is generated between the two consecutive sequence frames, so that the action of the character model is continuous when the video animation is played. For example, if the left hand of the character model is on the left side of the head in the first sequence frame and on the right side of the head in the second sequence frame, a plurality of intermediate positions from the left side to the right side are determined, and a plurality of target sequence frames are generated between the first sequence frame and the second sequence frame, in which the left hand of the character model is placed at the determined positions in turn, so that the left hand moves continuously when the video animation is played. Frame filling in this way further improves the animation effect of the video animation.
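The frame-filling operation described above can be sketched as linear interpolation of joint positions between two consecutive sequence frames. The joint dictionary and the coordinates below are illustrative assumptions, not the patent's actual data format.

```python
# Minimal sketch of frame filling: joint positions of the character model are
# linearly interpolated between two consecutive sequence frames.

def fill_frames(pose_a, pose_b, num_targets):
    """Generate num_targets intermediate poses between two consecutive frames.

    Each pose maps a joint name to an (x, y) position; each target sequence
    frame places every joint part of the way from its old to its new position.
    """
    targets = []
    for i in range(1, num_targets + 1):
        t = i / (num_targets + 1)  # fraction of the way from pose_a to pose_b
        pose = {}
        for joint, (ax, ay) in pose_a.items():
            bx, by = pose_b[joint]
            pose[joint] = (ax + (bx - ax) * t, ay + (by - ay) * t)
        targets.append(pose)
    return targets

# Left hand moves from the left of the head to the right across 4 filled frames.
filled = fill_frames({"left_hand": (-10.0, 0.0)}, {"left_hand": (10.0, 0.0)}, 4)
```

A production implementation would interpolate joint rotations rather than raw positions, but the principle of inserting evenly spaced in-between frames is the same.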
(2) Adjusting the playing time and the playing speed of each sequence frame according to the preset action speed and/or action beat.
The play rate of the video animation can be set through the preset action speed, such as 1x, 2x or 0.5x speed; the play duration of each sequence frame differs at different action speeds. The action beat determines which sequence frames are played continuously. For example, if an action is counted in eights (8 beats), then after the 8 actions of one group are played, the next sequence frame is usually not played immediately; instead, playback pauses before the sequence frames corresponding to the next 8 beats are played. That is, with 8 beats as nodes, the plurality of sequence frames is divided into a plurality of groups, each group representing 8 beats, and the video animation is played group by group.
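The play-rate and beat handling described above can be sketched as follows; the base frame duration and the concrete speed values are assumptions, while the 8-beat grouping is taken from the example in the text.

```python
# Sketch of play-rate and beat handling (illustrative values only).

def frame_duration(base_duration, speed):
    """Play duration of one sequence frame at a given action speed (1x, 2x, 0.5x)."""
    return base_duration / speed

def group_by_beats(sequence_frames, beats_per_group=8):
    """Split sequence frames into 8-beat groups; playback pauses between groups."""
    return [sequence_frames[i:i + beats_per_group]
            for i in range(0, len(sequence_frames), beats_per_group)]

groups = group_by_beats(list(range(16)))             # two 8-beat groups
fast = frame_duration(base_duration=0.5, speed=2.0)  # 2x speed halves duration
```

Playing group by group with a pause between groups reproduces the stop-and-go rhythm of counted dance choreography.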
In this manner, not only are production and development costs and labor input reduced, but the creation threshold of UGC (User Generated Content) is also lowered, which greatly promotes the secondary spread of game content. For example, game characters gain more room for content creation: once a player obtains such a video, a trending dance video can spread quickly on short-video platforms, which benefits the spread of the game and facilitates further derivative creation by players (such as remix videos, dubbing and the like). For example, on short-video apps, 15-second dance challenges are very popular and are a common promotion mode for idol songs; if virtual characters were used in such challenges, many viewers would ask which app was used or how such dance videos were made.
Corresponding to the above method embodiment, an embodiment of the present invention provides an animation generating device, as shown in fig. 6, including:
an action picture acquisition module 61 for acquiring a plurality of action pictures in response to a specified operation; wherein the specified operation includes: drawing operation and/or picture selection operation; the action picture comprises a target object, wherein the target object has a preset action;
the skeleton picture generation module 62 is configured to identify a target object in the motion picture in response to the animation generation instruction, and generate a skeleton picture corresponding to the motion picture according to the motion of the target object; wherein, the skeleton picture comprises skeleton objects; the actions of the skeleton object are matched with those of the target object;
a target picture generation module 63, configured to bind a character model of a preset virtual character with the bone object, and generate a target picture; the target picture comprises the character model, and the action of the character model is matched with the action of the skeleton object;
a video animation generation module 64 for generating a video animation based on the target picture; continuously changing the actions of the character model in the video animation; at least some of the continuously varying actions match the actions of the target object.
The embodiment of the present invention provides an animation generation device, which acquires a plurality of action pictures in response to a specified operation, the specified operation including a drawing operation and/or a picture selection operation; identifies, in response to an animation generation instruction, a target object in the action picture, and generates a skeleton picture corresponding to the action picture according to the action of the target object; binds a character model of a preset virtual character with the skeleton object to generate a target picture; and generates a video animation based on the target picture. In this manner, a user can choose, as required, to upload a plurality of action pictures or to draw simple drawings; based on the preset actions of the target objects in the action pictures, bones and models are bound and a video animation matching the preset actions is generated. This reduces the production and development cost of animation, enriches animation content, meets the personalized requirements of users, realizes rapid update iteration, occupies less storage space, and improves device performance.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a drawing area and a confirmation control; the action picture acquisition module is also used for: responding to drawing operation aiming at the drawing area, and determining a simple drawing corresponding to the drawing operation; and responding to the triggering operation aiming at the confirmation control, determining the simple drawing as the action picture, and acquiring the action picture.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a picture uploading control; the action picture acquisition module is also used for: responding to triggering operation for a picture uploading control, and displaying a thumbnail of a pre-stored action picture; and responding to the selection operation of the thumbnail of the action picture, and acquiring a plurality of selected action pictures.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a photographing control; the action picture acquisition module is also used for: responding to triggering operation aiming at the photographing control, and opening a photographing function; and responding to the photographing instruction, determining a photographed picture, determining the photographed picture as an action picture, and acquiring the action picture.
The skeleton picture generation module is further used for: in response to the animation generation instruction, identifying the target object in the action picture, and determining a picture area including the target object in the action picture as a first action picture; adjusting the picture size and the proportion of the first action picture to obtain a second action picture; and extracting the posture of the target object in the second action picture to obtain a skeleton picture of the second action picture.
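The crop-and-resize preprocessing performed by the skeleton picture generation module can be sketched as follows. The key-point list, the bounding-box computation and the 256x256 standard size are illustrative assumptions; actual posture extraction would rely on a pose-estimation model.

```python
# Sketch of the preprocessing: crop the picture region containing the target
# object (first action picture), then scale it to a standard resolution while
# preserving proportions (second action picture).

def bounding_box(keypoints):
    """Picture region containing the target object, as (x, y, width, height)."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)

def fit_to_standard_size(box, target=(256, 256)):
    """Scale the cropped region to the target size with a uniform factor."""
    _, _, w, h = box
    scale = min(target[0] / w, target[1] / h)  # fit the longer side, keep ratio
    return round(w * scale), round(h * scale)

# Four illustrative key points spanning a 100x200 figure.
box = bounding_box([(40, 10), (140, 10), (40, 210), (140, 210)])
size = fit_to_standard_size(box)
```

Normalizing every action picture to the same size and proportion before posture extraction keeps the resulting skeleton pictures comparable across frames.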
The device also comprises a character skin binding module for: binding the character skin of the preset virtual character with the character model included in the target picture.
The device further comprises an action adjusting module for: adjusting the action of the character model in the target picture according to the preset action style, so that the adjusted action of the character model matches the preset action style.
The video animation generation module is further used for: and converting the target picture into a sequence frame, and generating video animation corresponding to the plurality of action pictures.
The device also comprises a frame supplementing module for: if the motion of the character model is discontinuous in two consecutive sequence frames, at least one target sequence frame is generated between the two consecutive sequence frames such that the motion of the character model is continuous when the video animation is played.
The device further comprises a play adjusting module for: and adjusting the playing time and the playing speed of each sequence frame according to the preset action speed and/or action beat.
The number of the plurality of action pictures is a specified number or a multiple of the specified number.
The animation generation device provided by the embodiment of the invention has the same technical characteristics as the animation generation method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The present embodiment also provides an electronic device including a processor and a memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the above-described animation generation method. The electronic device may be a server or a terminal device.
Referring to fig. 7, the electronic device includes a processor 100 and a memory 101, the memory 101 storing machine executable instructions that can be executed by the processor 100, the processor 100 executing the machine executable instructions to implement the animation generation method described above.
Further, the electronic device shown in fig. 7 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory, for example at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented through at least one communication interface 103 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be classified into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bidirectional arrow is shown in FIG. 7, but this does not mean that there is only one bus or one type of bus.
The processor 100 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps and logical blocks disclosed in the embodiments of the present invention may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
The processor in the electronic device may implement the following operations in the animation generation method by executing machine-executable instructions:
In response to a specified operation, a plurality of action pictures is acquired, wherein the specified operation includes a drawing operation and/or a picture selection operation, and each action picture includes a target object having a preset action. In response to an animation generation instruction, the target object in the action picture is identified, and a skeleton picture corresponding to the action picture is generated according to the action of the target object, wherein the skeleton picture includes a skeleton object whose action matches the action of the target object. A character model of a preset virtual character is bound with the skeleton object to generate a target picture, wherein the target picture includes the character model and the action of the character model matches the action of the skeleton object. A video animation is generated based on the target picture, in which the action of the character model changes continuously, and at least some of the continuously changing actions match the actions of the target object. In this manner, a user can choose, as required, to upload a plurality of action pictures or to draw simple drawings; based on the preset actions of the target objects in the action pictures, bones and models are bound and a video animation matching the preset actions is generated, which reduces the production and development cost of animation, enriches animation content, meets the personalized requirements of users, realizes rapid update iteration, occupies less storage space, and improves device performance.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a drawing area and a confirmation control; a step of acquiring a plurality of action pictures in response to a specified operation, comprising: responding to drawing operation aiming at a drawing area, and determining a simple drawing corresponding to the drawing operation; and responding to the triggering operation aiming at the confirmation control, determining the simple drawing as the action picture, and acquiring the action picture.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a picture uploading control; a step of acquiring a plurality of action pictures in response to a specified operation, comprising: responding to triggering operation for a picture uploading control, and displaying a thumbnail of a pre-stored action picture; and responding to the selection operation of the thumbnail of the action picture, and acquiring a plurality of selected action pictures.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a photographing control; a step of acquiring a plurality of action pictures in response to a specified operation, comprising: responding to triggering operation aiming at a photographing control, and opening a photographing function; and responding to the photographing instruction, determining a photographed picture, determining the photographed picture as an action picture, and acquiring the action picture.
The step of identifying the target object in the action picture and generating the skeleton picture corresponding to the action picture according to the action of the target object comprises the following steps: in response to the animation generation instruction, identifying the target object in the action picture, and determining a picture area including the target object in the action picture as a first action picture; adjusting the picture size and the proportion of the first action picture to obtain a second action picture; and extracting the posture of the target object in the second action picture to obtain a skeleton picture of the second action picture.
Binding a character model of a preset virtual character with a skeleton object, and after the step of generating a target picture, the method further comprises the following steps: binding the role skin of the preset virtual role with the role model included in the target picture.
Binding a character model of a preset virtual character with a skeleton object, and after the step of generating a target picture, the method further comprises the following steps: and adjusting the action of the character model in the target picture according to the preset action style so as to enable the action of the character model after adjustment to be matched with the preset action style.
The step of generating the video animation based on the target picture comprises the following steps: and converting the target picture into a sequence frame, and generating video animation corresponding to the plurality of action pictures.
After the step of converting the target picture into the sequence frame and generating the video animation corresponding to the plurality of action pictures, the method further comprises the following steps: if the motion of the character model is discontinuous in two consecutive sequence frames, at least one target sequence frame is generated between the two consecutive sequence frames such that the motion of the character model is continuous when the video animation is played.
After the step of converting the target picture into the sequence frame and generating the video animation corresponding to the plurality of action pictures, the method further comprises the following steps: and adjusting the playing time and the playing speed of each sequence frame according to the preset action speed and/or action beat.
The number of the plurality of action pictures is a specified number or a multiple of the specified number.
The present embodiments also provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above-described animation generation method.
The machine-executable instructions stored on the machine-readable storage medium may, when executed, implement the following operations of the animation generation method:
In response to a specified operation, a plurality of action pictures is acquired, wherein the specified operation includes a drawing operation and/or a picture selection operation, and each action picture includes a target object having a preset action. In response to an animation generation instruction, the target object in the action picture is identified, and a skeleton picture corresponding to the action picture is generated according to the action of the target object, wherein the skeleton picture includes a skeleton object whose action matches the action of the target object. A character model of a preset virtual character is bound with the skeleton object to generate a target picture, wherein the target picture includes the character model and the action of the character model matches the action of the skeleton object. A video animation is generated based on the target picture, in which the action of the character model changes continuously, and at least some of the continuously changing actions match the actions of the target object. In this manner, a user can choose, as required, to upload a plurality of action pictures or to draw simple drawings; based on the preset actions of the target objects in the action pictures, bones and models are bound and a video animation matching the preset actions is generated, which reduces the production and development cost of animation, enriches animation content, meets the personalized requirements of users, realizes rapid update iteration, occupies less storage space, and improves device performance.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a drawing area and a confirmation control; a step of acquiring a plurality of action pictures in response to a specified operation, comprising: responding to drawing operation aiming at a drawing area, and determining a simple drawing corresponding to the drawing operation; and responding to the triggering operation aiming at the confirmation control, determining the simple drawing as the action picture, and acquiring the action picture.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a picture uploading control; a step of acquiring a plurality of action pictures in response to a specified operation, comprising: responding to triggering operation for a picture uploading control, and displaying a thumbnail of a pre-stored action picture; and responding to the selection operation of the thumbnail of the action picture, and acquiring a plurality of selected action pictures.
Providing a graphical user interface through the terminal equipment, wherein the graphical user interface comprises a photographing control; a step of acquiring a plurality of action pictures in response to a specified operation, comprising: responding to triggering operation aiming at a photographing control, and opening a photographing function; and responding to the photographing instruction, determining a photographed picture, determining the photographed picture as an action picture, and acquiring the action picture.
The step of identifying the target object in the action picture and generating the skeleton picture corresponding to the action picture according to the action of the target object comprises the following steps: in response to the animation generation instruction, identifying the target object in the action picture, and determining a picture area including the target object in the action picture as a first action picture; adjusting the picture size and the proportion of the first action picture to obtain a second action picture; and extracting the posture of the target object in the second action picture to obtain a skeleton picture of the second action picture.
Binding a character model of a preset virtual character with a skeleton object, and after the step of generating a target picture, the method further comprises the following steps: binding the role skin of the preset virtual role with the role model included in the target picture.
Binding a character model of a preset virtual character with a skeleton object, and after the step of generating a target picture, the method further comprises the following steps: and adjusting the action of the character model in the target picture according to the preset action style so as to enable the action of the character model after adjustment to be matched with the preset action style.
The step of generating the video animation based on the target picture comprises the following steps: and converting the target picture into a sequence frame, and generating video animation corresponding to the plurality of action pictures.
After the step of converting the target picture into the sequence frame and generating the video animation corresponding to the plurality of action pictures, the method further comprises the following steps: if the motion of the character model is discontinuous in two consecutive sequence frames, at least one target sequence frame is generated between the two consecutive sequence frames such that the motion of the character model is continuous when the video animation is played.
After the step of converting the target picture into the sequence frame and generating the video animation corresponding to the plurality of action pictures, the method further comprises the following steps: and adjusting the playing time and the playing speed of each sequence frame according to the preset action speed and/or action beat.
The number of the plurality of action pictures is a specified number or a multiple of the specified number.
Embodiments of the present invention further provide a computer program product for the animation generation method, the animation generation device and the electronic device described above, including a computer-readable storage medium storing program code. The instructions included in the program code may be used to perform the method described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, and details are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above examples are only specific embodiments of the present invention for illustrating the technical solution of the present invention, but not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the foregoing examples, it will be understood by those skilled in the art that the present invention is not limited thereto: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (14)

1. A method of animation generation, the method comprising:
responding to the appointed operation, and acquiring a plurality of action pictures; wherein the specifying operation includes: drawing operation and/or picture selection operation; the action picture comprises a target object, wherein the target object has a preset action;
identifying a target object in the action picture in response to an animation generation instruction, and generating a skeleton picture corresponding to the action picture according to the action of the target object; wherein the skeleton picture comprises a skeleton object; the action of the skeleton object is matched with the action of the target object;
binding a role model of a preset virtual role with the skeleton object to generate a target picture; the target picture comprises the character model, and the action of the character model is matched with the action of the skeleton object;
generating a video animation based on the target picture; the actions of the character models in the video animation are continuously changed; at least some of the continuously varying actions match the actions of the target object.
2. The method of claim 1, wherein a graphical user interface is provided by the terminal device, the graphical user interface including a drawing area and a confirmation control;
A step of acquiring a plurality of action pictures in response to a specified operation, comprising:
responding to drawing operation aiming at the drawing area, and determining a simple drawing corresponding to the drawing operation;
and responding to the triggering operation of the confirmation control, determining the simple drawing as an action picture, and acquiring the action picture.
3. The method according to claim 1, wherein a graphical user interface is provided by the terminal device, and the graphical user interface includes a picture upload control;
a step of acquiring a plurality of action pictures in response to a specified operation, comprising:
responding to the triggering operation of the picture uploading control, and displaying a thumbnail of a pre-stored action picture;
and responding to the selection operation of the thumbnail of the action picture, and acquiring a plurality of selected action pictures.
4. The method of claim 1, wherein a graphical user interface is provided by the terminal device, the graphical user interface including a photographing control;
a step of acquiring a plurality of action pictures in response to a specified operation, comprising:
responding to triggering operation aiming at the photographing control, and opening a photographing function;
and responding to a photographing instruction, determining a photographed picture, determining the photographed picture as an action picture, and acquiring the action picture.
5. The method according to claim 1, wherein the step of identifying a target object in the action picture and generating a skeleton picture corresponding to the action picture according to the action of the target object comprises:
in response to an animation generation instruction, identifying a target object in the action picture, and determining a picture area including the target object in the action picture as a first action picture;
adjusting the picture size and the proportion of the first action picture to obtain a second action picture;
and extracting the gesture of the target object in the second action picture to obtain a skeleton picture of the second action picture.
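The crop-then-normalize pipeline of claim 5 can be sketched with plain arrays; the pose-extraction step itself would rely on a keypoint-detection model in practice, which the patent does not specify, so only the cropping and resizing steps are shown and all names are hypothetical:

```python
# Sketch of claim 5's preprocessing: crop the region containing the
# target object (first action picture), then normalise its size
# (second action picture) before pose extraction. Pictures are
# modelled as nested lists of pixel values.

def crop_to_target(picture, bbox):
    """Keep only the picture area containing the target object."""
    x0, y0, x1, y1 = bbox
    return [row[x0:x1] for row in picture[y0:y1]]

def resize_nearest(picture, out_h, out_w):
    """Adjust size/aspect ratio by nearest-neighbour sampling."""
    in_h, in_w = len(picture), len(picture[0])
    return [[picture[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

pic = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
first = crop_to_target(pic, (1, 1, 3, 3))   # 2x2 region with the target
second = resize_nearest(first, 4, 4)        # normalised to 4x4
```

A real implementation would then run the second action picture through a pose estimator to obtain the skeleton picture.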
6. The method of claim 1, wherein after the step of binding the character model of the preset virtual character with the skeleton object to generate the target picture, the method further comprises:
binding the character skin of the preset virtual character with the character model included in the target picture.
7. The method of claim 1, wherein after the step of binding the character model of the preset virtual character with the skeleton object to generate the target picture, the method further comprises:
adjusting the action of the character model in the target picture according to a preset action style, so that the adjusted action of the character model matches the preset action style.
8. The method of claim 1, wherein the step of generating a video animation based on the target picture comprises:
converting the target picture into sequence frames, and generating a video animation corresponding to the plurality of action pictures.
9. The method of claim 8, wherein after the step of converting the target picture into sequence frames and generating the video animation corresponding to the plurality of action pictures, the method further comprises:
if the action of the character model is discontinuous across two consecutive sequence frames, generating at least one target sequence frame between the two consecutive sequence frames, so that the action of the character model is continuous when the video animation is played.
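One simple way to realise claim 9's in-between frames is linear interpolation of joint positions between the two discontinuous sequence frames; this is a hedged sketch, not the patent's mandated method, and the frame representation (a dict of joint name to 2-D position) is an assumption:

```python
# Sketch of claim 9: generate n target sequence frames between two
# consecutive sequence frames whose character motion jumps, so the
# played animation looks continuous. Linear interpolation is one
# illustrative choice.

def interpolate_frames(frame_a, frame_b, n):
    """Return n evenly spaced in-between frames (excluding endpoints)."""
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        frame = {}
        for joint, (ax, ay) in frame_a.items():
            bx, by = frame_b[joint]
            frame[joint] = (ax + t * (bx - ax), ay + t * (by - ay))
        frames.append(frame)
    return frames

a = {"hand": (0.0, 0.0)}
b = {"hand": (4.0, 2.0)}
mid = interpolate_frames(a, b, 1)  # one in-between frame at t = 0.5
```

A production system would more likely interpolate bone rotations (e.g. spherical interpolation of quaternions) than raw 2-D positions.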
10. The method of claim 8, wherein after the step of converting the target picture into sequence frames and generating the video animation corresponding to the plurality of action pictures, the method further comprises:
adjusting the playing duration and playing speed of each sequence frame according to a preset action speed and/or action beat.
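Claim 10's timing adjustment can be sketched as a per-frame duration schedule derived from an action speed and a beat; the specific formula (doubling the hold on beat frames) is illustrative only, and all parameter names are assumptions:

```python
# Sketch of claim 10: compute the playing duration of each sequence
# frame from a preset action speed and, optionally, an action beat.
# Frames that fall on a beat are held twice as long here.

def frame_durations(n_frames, fps, speed=1.0, beat_frames=None):
    """Duration in seconds of each sequence frame at the given speed."""
    base = 1.0 / (fps * speed)           # faster speed -> shorter frames
    beats = set(beat_frames or [])
    return [base * (2.0 if i in beats else 1.0) for i in range(n_frames)]

durations = frame_durations(4, fps=10, speed=2.0, beat_frames=[0])
# 4 frames at an effective 20 fps; frame 0 is held for two frame periods
```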
11. The method of claim 1, wherein the number of the plurality of action pictures is a specified number or a multiple of the specified number.
12. An animation generation device, the device comprising:
an action picture acquisition module, configured to acquire a plurality of action pictures in response to a specified operation, wherein the specified operation comprises a drawing operation and/or a picture selection operation, and each action picture includes a target object having a preset action;
a skeleton picture generation module, configured to identify, in response to an animation generation instruction, the target object in the action picture and generate a skeleton picture corresponding to the action picture according to the action of the target object, wherein the skeleton picture includes a skeleton object whose action matches the action of the target object;
a target picture generation module, configured to bind a character model of a preset virtual character with the skeleton object to generate a target picture, wherein the target picture includes the character model and the action of the character model matches the action of the skeleton object; and
a video animation generation module, configured to generate a video animation based on the target picture, wherein the action of the character model in the video animation changes continuously, and at least part of the continuously changing action matches the action of the target object.
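The four modules of the device claim can be composed as one object; the patent defines only each module's responsibility, so the method bodies below are placeholders and every name is illustrative:

```python
# Illustrative composition of claim 12's four modules into a single
# device class. Each method corresponds to one module; real
# implementations would fill in the bodies.

class AnimationGenerationDevice:
    def acquire_action_pictures(self, operation):
        """Action picture acquisition module: respond to a specified
        (drawing and/or picture selection) operation."""
        raise NotImplementedError

    def generate_skeleton_pictures(self, action_pictures):
        """Skeleton picture generation module: identify the target
        object and produce a matching skeleton object per picture."""
        raise NotImplementedError

    def generate_target_pictures(self, skeleton_pictures):
        """Target picture generation module: bind a preset virtual
        character's model to each skeleton object."""
        raise NotImplementedError

    def generate_video_animation(self, target_pictures):
        """Video animation generation module: produce an animation
        whose character motion changes continuously."""
        raise NotImplementedError
```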
13. An electronic device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the animation generation method of any of claims 1-11.
14. A computer readable storage medium storing computer executable instructions which, when invoked and executed by a processor, cause the processor to implement the animation generation method of any of claims 1-11.
CN202311015881.XA 2023-08-11 2023-08-11 Animation generation method and device and electronic equipment Pending CN117292026A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311015881.XA CN117292026A (en) 2023-08-11 2023-08-11 Animation generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN117292026A true CN117292026A (en) 2023-12-26

Family

ID=89256035

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination