CN113744372A - Animation generation method, device and equipment

Animation generation method, device and equipment

Info

Publication number
CN113744372A
Authority
CN
China
Prior art keywords
target
actions
scene
associated object
virtual character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110875167.2A
Other languages
Chinese (zh)
Inventor
张迎凯
何文峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202110875167.2A
Publication of CN113744372A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Abstract

Embodiments of the invention provide an animation generation method, apparatus, and device. The method is applied to a service program in which a target scene is loaded, and includes the following steps: triggering an action execution instruction in the target scene, where the action execution instruction instructs a virtual character to execute a set of target actions in the target scene; in response to the action execution instruction, acquiring first associated data between the set of target actions and the target scene; and generating, based on the first associated data, a set of target images in which the virtual character executes the set of target actions, so as to obtain a target animation including the set of target images. By acquiring the first associated data and applying it to the generation of the target animation, the target actions can adaptively match various scenes in the target animation, which improves animation generation efficiency, smooths the motion trajectory, and brings a more realistic and natural visual effect to the user.

Description

Animation generation method, device and equipment
Technical Field
The invention relates to the technical field of images, in particular to an animation generation method, device and equipment.
Background
At present, in order to give the actions of a virtual character a more realistic and natural visual effect, those actions are usually produced by an animator to match the scene in which the virtual character is located.
Taking a climbing action as an example, an animator can produce a segment of animation containing the climbing action according to the scene in which the virtual character is located. If that scene changes, the previously produced animation may no longer match the changed scene. In the conventional animation scheme, the animator re-creates the animation according to the changed scene, but this imposes a heavy workload on the animator and makes the generation of the new animation inefficient. Alternatively, the previously produced animation can be analyzed with Inverse Kinematics (IK) technology, and a new animation of the virtual character in the changed scene can be generated based on the analysis result; however, the new animation generated by this scheme exhibits obvious motion jitter, an unsmooth motion trajectory, and a poor visual effect.
In summary, how to match the actions of the virtual character in the new animation with the changed scene, so as to improve animation generation efficiency and the visual effect of the new animation, has become an urgent technical problem.
Disclosure of Invention
Embodiments of the invention provide an animation generation method, apparatus, and device, which are used to make the actions of a virtual character adapt to various scenes.
In a first aspect, an embodiment of the present invention provides an animation generation method applied to a service program in which a target scene is loaded, the method including:
triggering an action execution instruction in the target scene, where the action execution instruction is used to instruct a virtual character to execute a set of target actions in the target scene;
in response to the action execution instruction, acquiring first associated data between the set of target actions and the target scene; and
generating, based on the first associated data, a set of target images in which the virtual character executes the set of target actions, so as to obtain a target animation including the set of target images.
In a second aspect, an embodiment of the present invention provides an animation generation apparatus loaded with a target scene, including:
a trigger module, configured to trigger an action execution instruction in the target scene, where the action execution instruction is used to instruct the virtual character to execute a set of target actions in the target scene;
a first acquisition module, configured to acquire, in response to the action execution instruction, first associated data between the set of target actions and the target scene; and
a generating module, configured to generate, based on the first associated data, a set of target images in which the virtual character executes the set of target actions, so as to obtain a target animation including the set of target images.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory stores executable code, and when the executable code is executed by the processor, the processor is enabled to implement at least the animation generation method in the first aspect.
An embodiment of the present invention provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to implement at least the animation generation method of the first aspect.
In the embodiments of the invention, an action execution instruction in a target scene loaded by a service program is first triggered, where the action execution instruction instructs a virtual character to execute a set of target actions in the target scene. After the action execution instruction is triggered, the service program responds to it by acquiring first associated data between the set of target actions executed by the virtual character and the target scene. Because the first associated data embodies the intrinsic association between the virtual character executing the set of target actions and the target scene, a set of target images of the virtual character executing the set of target actions can be generated from the first associated data, so that the executed target actions accurately match the target scene in those images, and a target animation including the set of target images is finally obtained. By acquiring the first associated data after the action execution instruction is triggered and applying the intrinsic association it embodies to the generation of the target animation, the target actions can match the target scene in the target animation, which effectively smooths the motion trajectory and brings a more realistic and natural visual effect to the user; moreover, the matching of the target actions to the target scene is achieved without manually re-creating the actions of the virtual character, which greatly improves animation generation efficiency.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a flowchart of an animation generation method according to an embodiment of the present invention;
FIG. 2a is a schematic diagram of a reference image according to an embodiment of the present invention;
FIG. 2b is a schematic diagram of another reference image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an animation generation process according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another animation generation process according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another animation generation process according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an animation generation apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device corresponding to the animation generation apparatus provided in the embodiment shown in FIG. 6.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The animation generation scheme provided by the embodiments of the invention can be executed by an electronic device, which may be a terminal device such as a smartphone, a tablet computer, a PC (personal computer), or a laptop computer. In an alternative embodiment, the electronic device may have a service program installed on it for executing the animation generation scheme.
The animation generation scheme provided by the embodiments of the invention is applicable to scenarios in which animations containing various actions are produced. The various actions are, for example, climbing actions, jumping actions, and running actions. In an alternative embodiment, the various actions are executed by the virtual character in the animation.
The following describes a technical problem to be actually solved by the technical solution provided by the embodiment of the present invention, taking an action executed by a virtual character as a climbing action as an example:
the animator can make a section of animation containing the climbing action according to the scenery in the scene where the virtual character is located. If the scene in which the virtual character is positioned changes, the problem that the animation made before is not matched with the changed scene can occur. For example, the distance between the crossbars of the ladder in the scene where the virtual character is located is increased, which may cause the limbs to penetrate into the crossbars or suspend in the air during the climbing process of the virtual character. If the animator still reproduces the animation according to the changed scene, the workload of the animator is large, and the generation efficiency of the new animation is low. And the IK technology is adopted to analyze the animation made before to determine the skeleton position of the virtual character, and although the action in the original animation can be directly transferred to the changed scene to generate a new animation based on the analysis result, the action of the virtual character in the new animation is obvious in jitter, the action track is not smooth, and the visual effect is poor.
In summary, how to match the actions of the virtual character in the new animation with the changed scene, so as to improve animation generation efficiency and the visual effect of the new animation, has become an urgent technical problem.
In view of the above-mentioned technical problems, embodiments of the present invention provide an animation generation method, apparatus, and device. In summary, the solution idea of the animation generation scheme provided by the embodiment of the present invention is:
First, the scheme triggers an action execution instruction in a target scene loaded in a service program, where the action execution instruction instructs a virtual character to execute a set of target actions in the target scene. After the action execution instruction is triggered, the service program responds to it by acquiring associated data between the set of target actions executed by the virtual character and the target scene; for distinction, this associated data is referred to herein as first associated data. The first associated data embodies the intrinsic association between the virtual character executing the set of target actions and the target scene, such as the relative positional relationship between the virtual character executing the set of target actions and the associated scenery object in the target scene; for distinction, that associated scenery object in the target scene is referred to as the target associated object. Therefore, a set of target images in which the virtual character executes the set of target actions can be generated from the first associated data, so that the executed target actions accurately match the target scene in the target images, and a target animation including the set of target images is finally obtained.
In this scheme, after the action execution instruction in the target scene is triggered, the first associated data between the set of target actions executed by the virtual character and the target scene is acquired, and the intrinsic association embodied by the first associated data is applied to the generation of the target animation. The target actions can thus adaptively match different scenes across target animations, which helps smooth the motion trajectory and brings a more realistic and natural visual effect to the user. Moreover, applying the first associated data to the generation of the target animation reduces the workload of the conventional animation production scheme and improves animation generation efficiency.
Having described the basic concepts of animation generation schemes, various non-limiting embodiments of the present invention are described in detail below.
The following describes the execution process of the animation generation method with reference to the following embodiments.
Fig. 1 is a flowchart of an animation generation method according to an embodiment of the present invention, and as shown in fig. 1, the animation generation method includes the following steps:
101. triggering an action execution instruction in the target scene, where the action execution instruction is used to instruct the virtual character to execute a set of target actions in the target scene;
102. in response to the action execution instruction, acquiring first associated data between the set of target actions and the target scene;
103. generating, based on the first associated data, a set of target images in which the virtual character executes the set of target actions, so as to obtain a target animation including the set of target images.
The animation generation method in the embodiment of the invention is applied to a service program, and a target scene is loaded in the service program.
The target animation is the animation that is finally generated. The scene in which the virtual character is located (i.e., the scene shown in the target animation) is referred to as a target scene, and the action that the virtual character needs to perform is referred to as a target action. Target actions include, for example, climbing actions, jumping actions, running actions.
First, an action execution instruction in a target scene is triggered. The action execution instruction is used for instructing the virtual character to execute a set of target actions in the target scene.
In an alternative embodiment, the action execution instruction in the target scene may be controlled by setting a control item in the service program. The action execution instruction is triggered when the user selects the control item. The control item appears in the service program as, for example, a button or a slider.
Alternatively, in another alternative embodiment, the action execution instruction in the target scene may also be controlled by setting a trigger condition in the service program. For example, if the service program is a game client and the trigger condition is that the virtual character completes a certain preset task, the trigger condition is regarded as met when the virtual character completes that task, and the action execution instruction is then triggered.
After triggering the action execution instruction in the target scene, the service program needs to acquire a set of first association data between the target action and the target scene in response to the action execution instruction.
In practical applications, after the action execution instruction is triggered, the actions to be executed by the virtual character can be computed in real time. Optionally, pre-generated first associated data between the set of target actions and the target scene may instead be retrieved.
Specifically, obtaining first association data between a set of target actions and a target scene may be implemented as:
determining a plurality of body key points of the virtual character when executing a group of target actions, and acquiring displacement tracks of the plurality of body key points of the virtual character relative to a target associated object in a target scene as first associated data.
As this manner of acquisition shows, the first associated data actually represents the relative positional relationship between the plurality of body key points of the virtual character and the associated object in the target scene while the set of target actions is executed, and thus reflects the intrinsic association between the set of target actions and the target scene. This intrinsic association helps determine the actual pose of the virtual character when performing the target actions and can therefore be applied to the target animation generation process described below.
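To make the form of the first associated data concrete, the following is a minimal sketch in Python (with hypothetical names not taken from the patent) of a structure that records, per frame, the offset of each body key point relative to its contact point on the target associated object:
```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class RelativeTrajectory:
    # displacement trajectory of one body key point relative to the associated object
    key_point: str                                        # e.g. "left_wrist", "right_ankle", "root"
    offsets: List[Vec3] = field(default_factory=list)     # one relative offset per frame

@dataclass
class AssociationData:
    # first associated data: trajectories of several body key points
    # relative to the target associated object (e.g. a ladder)
    associated_object: str
    trajectories: Dict[str, RelativeTrajectory] = field(default_factory=dict)

    def add_frame(self, key_point: str, key_point_pos: Vec3, contact_pos: Vec3) -> None:
        # store the key point's position relative to its contact point for this frame
        traj = self.trajectories.setdefault(key_point, RelativeTrajectory(key_point))
        traj.offsets.append(tuple(k - c for k, c in zip(key_point_pos, contact_pos)))
```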
Wherein, the associated object in the target scene refers to an object in the target scene, such as a ladder, a rock, a chair, which is contacted when the virtual character performs a set of target actions. For the sake of distinction, the associated objects in the target scene are referred to herein as target associated objects.
In the embodiments of the invention, the plurality of body key points of the virtual character comprise the limbs, i.e. the hands and feet. In an alternative embodiment, the limbs may be represented by joints such as the wrists and ankles. Taking the case where the plurality of body key points comprise the limbs, the first associated data comprises the displacement trajectories of the limbs relative to the target associated object.
For different types of virtual characters, a plurality of body key points corresponding to the virtual characters when executing a set of target actions can be preset. For example, for a human-shaped virtual character, joints such as wrists and ankles of the virtual character may be set as a plurality of body key points in advance.
In order to understand the meaning of the first association data, first, how to obtain the displacement trajectories of the plurality of body key points of the virtual character relative to the target association object in the target scene as the first association data is described in detail below with reference to the accompanying drawings:
firstly, before acquiring displacement tracks of a plurality of body key points of the virtual character relative to a target associated object in a target scene as first associated data, acquiring a set of reference images for executing a set of target actions in a reference scene by the virtual character. Alternatively, the virtual character in the reference image performing the set of target actions may be the same virtual character as the virtual character in the target scene described above.
The reference image should also have better image quality, such as better image sharpness and brightness. In practical application, in order to ensure the continuity and the integrity of target actions, a group of target actions is taken as an action cycle. Therefore, in order to capture more motion details and finally to enhance the visual effect of the target animation to a greater extent, the set of reference images should be animation segments corresponding to one motion cycle. If the group of reference images includes a plurality of frame images, the interval between the frame images should be as small as possible.
It will be appreciated that a set of target actions can essentially represent a complete action with well-defined definitions, such as a climbing action, a jumping action.
To facilitate understanding of the meaning of the set of target actions and the set of reference images, assume that the reference associated object in the reference scene is a ladder, that the reference images come from an animation in which the virtual character performs a climbing action on the ladder, and that the animation includes the two image frames shown in fig. 2a and fig. 2b. In this example, a limb of the virtual character touching a crossbar can be regarded as the virtual character having climbed onto that crossbar. On this basis, if the set of target actions is the action cycle in which the virtual character climbs from crossbar 2 to crossbar 1 of the ladder, the set of reference images may be the segment cut from that animation whose starting frame is fig. 2a and whose last frame is fig. 2b.
Of course, in addition to intercepting a set of reference images from an animation, in some alternative embodiments, a segment of animation segment with better image quality corresponding to a certain target motion may be preset as a reference image according to a target motion that may need to be generated in an actual application, so that a preset set of reference images corresponding to the target motion is directly used for the certain target motion that needs to be generated currently.
Furthermore, after acquiring a set of reference images, it is necessary to extract a set of associated data between the target motion and the reference scene from the set of reference images. For the sake of distinction, the association data between a set of target actions and a reference scene is referred to herein as second association data. Wherein the second associated data comprises displacement trajectories of a plurality of body key points of the virtual character relative to associated objects in the reference scene. Objects in the reference scene that are contacted by the virtual character in the reference scene when performing the set of target actions are referred to herein as reference associated objects.
Specifically, extracting, from the set of reference images, associated data between a set of target actions and a reference scene may be specifically implemented as:
calibrating a plurality of contact points corresponding to a group of climbing actions on the reference associated object; and acquiring displacement tracks of limbs relative to a plurality of contact points on the reference associated object when the virtual character performs a climbing action from a group of reference images.
To facilitate understanding of the meaning of the second associated data, it is illustrated herein in connection with the accompanying drawings how to extract the second associated data between a set of target actions and a reference scene from a set of reference images:
assume that a set of target actions performed by a virtual character in a reference scene in a set of reference images includes a set of climbing actions. Assuming that the plurality of body key points of the virtual character include limbs, the reference associated object in the reference scene includes a ladder, and for distinction, the ladder in the reference scene is referred to herein as a first ladder. Optionally, the crossbars in the first ladder are equally spaced, i.e. the crossbars in the first ladder are evenly distributed.
Based on the above assumptions, the contact points formed on the crossbars of the first ladder by the set of climbing actions are calibrated. Specifically, when the virtual character performs the set of climbing actions, the limbs form a plurality of contact points with the crossbars, and these contact points are calibrated one by one. The displacement trajectories of the limbs relative to these contact points while the virtual character performs the climbing actions are then acquired from the set of reference images. Optionally, the displacement trajectory of a limb relative to the plurality of contact points comprises the contact points corresponding to that limb and the displacement trajectory between those contact points.
For example, 3 rails on the first ladder that the left hand contacts while the virtual character performs a set of climbing actions are calibrated, resulting in rail a1, rail b1, rail c1 as shown in fig. 3. From a set of reference images, a displacement trajectory t1 of the left hand with respect to the above 3 rails is extracted as shown in fig. 3. It should be noted that, in practical applications, since the left hand of the virtual character will contact the 3 bars, the displacement trajectory t1 illustrated in fig. 3 has contact points with all of the bar a1, the bar b1, and the bar c1, which are not illustrated in fig. 3. Similarly, similar contact points are not illustrated in fig. 4 and 5 referred to hereinafter.
Through the above processing of the set of reference images, the displacement trajectories of the limbs of the virtual character relative to the reference associated object in the reference scene, which are included in the second associated data, can be obtained. These displacement trajectories are one of the bases for subsequently generating the set of target actions executed by the virtual character in the target scene, i.e. one of the bases for the subsequently generated set of target images.
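The patent does not spell out how the contact points are calibrated; as one possible reading, the sketch below (hypothetical names, simplified to heights along the ladder) marks a limb as in contact with the nearest crossbar whenever it comes within a small threshold:
```python
def calibrate_contact_points(limb_heights, crossbar_heights, contact_threshold=0.05):
    """For each reference frame, return the index of the crossbar the limb touches,
    or None when the limb is between crossbars (e.g. mid-reach)."""
    contacts = []
    for height in limb_heights:
        # nearest crossbar and its distance to the limb in this frame
        index, distance = min(
            ((i, abs(height - bar)) for i, bar in enumerate(crossbar_heights)),
            key=lambda item: item[1],
        )
        contacts.append(index if distance <= contact_threshold else None)
    return contacts
```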
It should be noted that the names of the first associated data and the second associated data are only used for distinguishing the associated data between a group of target actions and different scenes, and do not represent the sequential order of the two associated data.
Besides the limbs, the displacement trajectories of other body key points relative to the reference associated object can be obtained through the steps, and the steps are not limited herein.
Finally, after extracting second associated data between a group of target actions and a reference scene, obtaining displacement trajectories of a plurality of body key points relative to a target associated object in the target scene as first associated data, which may be specifically implemented as:
and transforming the second associated data based on the corresponding relation between the reference associated object and the target associated object to obtain first associated data.
The corresponding relation between the reference associated object and the target associated object comprises the corresponding relation between each contact point on the reference associated object and each contact point on the target associated object.
Specifically, the second associated data is transformed based on the corresponding relationship between the reference associated object and the target associated object to obtain the first associated data, and the method may be implemented as follows:
and determining the ratio of the distance between each contact point on the target associated object to the distance between each contact point on the corresponding reference associated object based on the corresponding relation between each contact point on the reference associated object and each contact point on the target associated object, and carrying out scaling transformation on the second associated data based on the ratio to obtain a group of first associated data between the climbing action and the target associated object.
Through this generation process, the relative positional relationship between the plurality of body key points of the virtual character and the reference associated object in the reference scene (i.e. the second associated data) is converted into the relative positional relationship between those body key points and the target associated object in the target scene (i.e. the first associated data), so that the set of target actions executed by the virtual character matches the target associated object in the target scene. In other words, the target actions can adapt to various scenes, which helps bring a more realistic and natural visual effect to the user.
In the embodiment of the invention, the target related object in the target scene is obtained by deforming the reference related object in the reference scene, so that certain difference exists between the two in certain attributes. Some attributes are, for example, shape, material. The way of deformation is different for different objects.
In an alternative embodiment, assume the reference associated object in the reference scene is sponge mat a. By changing the material of sponge mat a from material 1 to material 2, sponge mat b made of material 2 is obtained, and sponge mat b is the target associated object in the target scene. If the set of target actions is a set of jumping actions on the sponge mat, the movement trajectories of the virtual character's feet when jumping on sponge mat a are transformed based on the correspondence in elastic modulus between sponge mat a and sponge mat b, yielding the movement trajectories of the feet when jumping on sponge mat b. Assuming the elastic modulus of material 2 is greater than that of material 1, the depth to which the virtual character's feet sink into sponge mat b is greater.
In another alternative implementation, assuming that the reference associated object in the reference scene is a first ladder, a second ladder with uneven ladder rung spacing distribution may be obtained by changing the rung spacing distribution of the first ladder from a uniform distribution to an uneven distribution, the second ladder being the target associated object in the target scene.
How to transform the second associated data based on the corresponding relationship between the reference associated object and the target associated object to obtain the first associated data is illustrated as follows:
assume that the set of target actions includes a set of climbing actions. Assume that the plurality of body key points of the virtual character include limbs. Assume that the reference associated object in the reference scene is the first ladder. Assume that the target-associated object in the target scene is a second ladder.
Based on the above assumptions, the correspondence between the contact points on the reference associated object and the contact points on the target associated object includes the correspondence between the crossbars of the second ladder and the crossbars of the first ladder, i.e. the crossbars of the second ladder correspond one-to-one with the crossbars of the first ladder. Optionally, this correspondence at least includes: each crossbar of the first ladder contacted by a limb of the virtual character corresponds to the crossbar of the second ladder contacted by the same limb.
Optionally, there are multiple ways to acquire the correspondence between the crossbars of the second ladder and the crossbars of the first ladder. One of them may be implemented as follows: after the crossbars of the first ladder corresponding to the set of climbing actions have been calibrated, for a calibrated crossbar i of the first ladder, the crossbar j of the second ladder with the same arrangement order as crossbar i is determined, and a correspondence between crossbar j and crossbar i is established. Crossbar i is any one of the calibrated crossbars of the first ladder. The arrangement order refers to the order of all crossbars on each ladder, for example the order of all crossbars of the second ladder from low to high. The correspondences between the remaining crossbars of the two ladders are acquired in the same way and are not repeated here.
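A small sketch of this order-based matching, assuming the crossbar heights of both ladders are listed from low to high (the function and variable names are illustrative only, not part of the patent):
```python
def match_crossbars(calibrated_indices, first_ladder_rungs, second_ladder_rungs):
    """Pair each calibrated crossbar of the first ladder with the crossbar of the
    second ladder that has the same arrangement order (both listed low to high)."""
    return [(first_ladder_rungs[i], second_ladder_rungs[i]) for i in calibrated_indices]

# usage: crossbars 1, 2 and 4 (counted from the bottom) were calibrated on the first ladder
pairs = match_crossbars([1, 2, 4],
                        first_ladder_rungs=[0.0, 0.3, 0.6, 0.9, 1.2],
                        second_ladder_rungs=[0.0, 0.25, 0.7, 0.9, 1.4])
```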
Further, based on the above assumptions, the crossbar spacing of the two ladders can be regarded as the contact-point spacing. The ratio of the spacing between contact points on the target associated object to the spacing between the corresponding contact points on the reference associated object is then specifically the ratio of each crossbar spacing of the second ladder to the corresponding crossbar spacing of the first ladder. The second associated data is then scaled based on the ratio of each crossbar spacing of the second ladder (i.e. the target associated object) to the corresponding crossbar spacing of the first ladder (i.e. the reference associated object), yielding the first associated data between the set of climbing actions and the second ladder.
Continuing with the above assumptions, suppose the crossbars contacted by the left hand of the virtual character on the first ladder are calibrated, in order, as crossbar a1, crossbar b1, and crossbar c1, as shown in fig. 3. Based on the correspondence between the crossbars of the second ladder and those of the first ladder, the 3 corresponding crossbars of the second ladder are obtained, in order crossbar a2, crossbar b2, and crossbar c2, as shown in fig. 3. Then the ratio H_ab2 / H_ab1 of the spacing H_ab2 between crossbars a2 and b2 of the second ladder to the spacing H_ab1 between the corresponding crossbars a1 and b1 of the first ladder is determined, and the ratio H_bc2 / H_bc1 of the spacing H_bc2 between crossbars b2 and c2 of the second ladder to the spacing H_bc1 between the corresponding crossbars b1 and c1 of the first ladder is determined.
If H_ab2 / H_ab1 and H_bc2 / H_bc1 are both less than 1, the displacement trajectory t1 of the left hand relative to crossbars a1, b1, and c1 shown in fig. 3 is shrunk to obtain the displacement trajectory t2 of the left hand relative to crossbars a2, b2, and c2 shown in fig. 4.
Alternatively, if H_ab2 / H_ab1 is greater than 1 and H_bc2 / H_bc1 is less than 1, the upper half of the displacement trajectory t1 (between crossbars a1 and b1) shown in fig. 3 is enlarged, the lower half of t1 (between crossbars b1 and c1) is shrunk, and the transformed pieces are finally combined to obtain the displacement trajectory t3 of the left hand relative to crossbars a2, b2, and c2 shown in fig. 5.
Similar to the process of obtaining the displacement tracks of the left hand of the virtual character relative to the plurality of crossbars, the displacement tracks of the right hand, the left foot and the right foot of the virtual character relative to the plurality of crossbars are obtained, so that the displacement tracks of the left hand, the right hand, the left foot and the right foot of the virtual character relative to the plurality of crossbars are used as the first associated data.
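Under the assumption that the reference trajectory has been split into one segment per pair of adjacent calibrated crossbars (e.g. the a1 to b1 and b1 to c1 halves of t1), the scaling and recombination step could look roughly like the sketch below; the names and the choice to scale every offset component by the same ratio are assumptions, not taken from the patent text:
```python
def transform_trajectory(reference_segments, reference_spacings, target_spacings):
    """Scale each segment of the second associated data by its own crossbar-spacing
    ratio (e.g. H_ab2 / H_ab1, H_bc2 / H_bc1) and stitch the results back together
    to obtain the corresponding piece of the first associated data."""
    transformed = []
    for segment, h_ref, h_target in zip(reference_segments, reference_spacings, target_spacings):
        ratio = h_target / h_ref                       # > 1 enlarges the segment, < 1 shrinks it
        transformed.extend([(x * ratio, y * ratio) for x, y in segment])
    return transformed
```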
The above describes a process of transforming the second associated data to obtain the first associated data based on the corresponding relationship between the reference associated object and the target associated object.
In practical applications, not every action requires attention to the displacement trajectories of all the limbs of the virtual character relative to the target associated object. For a jumping action, for example, the main concern is the displacement trajectories of the virtual character's feet, i.e. the two ankle joints, relative to the target associated object.
In addition to the limbs described above, the plurality of body key points of the virtual character in the embodiment of the present invention further includes a root joint. In an alternative embodiment, one joint is designated as a root joint from among the plurality of joints of the virtual character. The designated root joint may be any joint other than the above-described wrist, ankle, and the like.
Optionally, obtaining displacement trajectories of the plurality of body key points relative to the target associated object in the target scene as the first associated data may be further specifically implemented as:
and acquiring the displacement speed or displacement distance of the root joint relative to the target associated object as first associated data.
The first associated data further includes a movement track of the root joint relative to the target associated object in the target scene, that is, a displacement speed of the root joint relative to the target associated object, and a displacement distance of the root joint relative to the target associated object. The movement track of the root joint relative to the target associated object in the target scene can reflect the whole movement track of the virtual character. The calculation methods of the displacement trajectory of the root joint are different for target associated objects in different scenes.
Still taking the target associated object as the second ladder as an example, how to acquire the displacement speed or displacement distance of the root joint relative to the target associated object as the first associated data is described below:
in an alternative embodiment, acquiring the displacement velocity of the root joint relative to the target associated object as the first associated data may be implemented as:
and determining a plurality of contact points corresponding to the group of climbing actions on the target associated object, and further, if the distances between the contact points in the target associated object are equal, taking the ratio of the total distance between the contact points and the duration corresponding to the group of climbing actions as the displacement speed of the root joint relative to the target associated object.
Take the second ladder as the target associated object and assume that the spacings between its crossbars are equal, i.e. the crossbars of the second ladder are evenly distributed. The contact points on the target associated object corresponding to the set of climbing actions are then the crossbars of the second ladder, and the ratio of the total crossbar spacing to the duration of the climbing actions is taken as the displacement speed of the root joint of the virtual character relative to the second ladder. The displacement speed V_root of the root joint relative to the second ladder is obtained by the following formula:
V_root = H_rung * 2 / T_ladder
where H_rung * 2 is the total crossbar spacing corresponding to the set of target actions, and T_ladder is the duration of the set of climbing actions, i.e. the duration of executing the set of target actions.
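A one-line sketch of this velocity computation for evenly spaced crossbars (illustrative names; H_rung * 2 in the formula above is simply the total crossbar spacing covered by the action cycle):
```python
def root_displacement_speed(rung_spacing: float, rungs_per_cycle: int, cycle_duration: float) -> float:
    """V_root = total crossbar spacing covered in one action cycle / cycle duration.
    With rungs_per_cycle = 2 this is exactly H_rung * 2 / T_ladder."""
    return (rung_spacing * rungs_per_cycle) / cycle_duration
```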
In another possible embodiment, obtaining the displacement distance of the root joint relative to the target associated object as the first associated data may be implemented as:
and determining a plurality of contact points corresponding to a group of climbing actions on the target associated object, and further, if the distances among the plurality of contact points in the target associated object are not equal, taking the average value of the distance difference of the limbs of the virtual character in the two frames of images as the displacement distance of the root joint relative to the target associated object in the two frames of images.
It will be appreciated that the displacement distance may be a lateral or a longitudinal distance. For example, the displacement distance of the root joint relative to the ladder is in fact the displacement height of the root joint relative to the ladder.
Still take the second ladder as the target associated object and assume that the spacings between its crossbars are not equal, i.e. the crossbars of the second ladder are unevenly distributed. The contact points on the target associated object corresponding to the set of climbing actions are then the crossbars of the second ladder. For any two frames of images corresponding to the set of climbing actions, the average of the displacement differences of the virtual character's limbs between the two frames is taken as the displacement distance of the root joint relative to the second ladder between those two frames. The displacement distance ΔH_root of the root joint relative to the second ladder between the two frames is obtained by the following formula:
ΔH_root = (ΔH_leftfoot + ΔH_rightfoot + ΔH_lefthand + ΔH_righthand) / 4
where ΔH_leftfoot, ΔH_rightfoot, ΔH_lefthand, and ΔH_righthand are the displacement differences of the left ankle, right ankle, left wrist, and right wrist, respectively, between the two frames.
The average value of the distance difference of the four limbs of the virtual character in the two frames of images is used as the displacement distance of the root joint of the virtual character relative to the second ladder in the two frames of images, so that the root joint of the virtual character can be matched with the four limbs of the virtual character to perform displacement, and the dynamic adjustment of the displacement track of the root joint of the virtual character is realized.
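A minimal sketch of this per-frame-pair computation (the dictionary keys are illustrative; any consistent naming of the four limb joints would do):
```python
def root_displacement_between_frames(frame_a, frame_b):
    """dH_root = (dH_leftfoot + dH_rightfoot + dH_lefthand + dH_righthand) / 4,
    where frame_a and frame_b map each limb joint to its height in that frame."""
    limbs = ("left_ankle", "right_ankle", "left_wrist", "right_wrist")
    deltas = [frame_b[limb] - frame_a[limb] for limb in limbs]
    return sum(deltas) / len(deltas)
```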
Two procedures of how to acquire the displacement trajectory of the root joint with respect to the target associated object as the first associated data are described above.
It should be understood that, in the case that the plurality of body key points of the virtual character include limbs and root joints, if the target associated object in the target scene changes, the displacement trajectory of the limbs relative to the target associated object in the first associated data and the displacement trajectory of the root joints relative to the target associated object change synchronously. Of course, in practical applications, if the first associated data further includes associated data corresponding to other body key points of the virtual character, the associated data also needs to be changed synchronously.
After the first associated data is acquired, a set of target images for performing a set of target actions by the virtual character are generated based on the first associated data to obtain a target animation including the set of target images.
In practical application, in order to ensure continuity and integrity of target actions, a group of target actions are usually taken as one action cycle, and therefore, similar to a group of reference images, the group of target images should be animation segments corresponding to one action cycle, so that the target animation has a more natural and smooth visual effect. If the group of target images includes a plurality of frame images, the interval between the frame images should be as small as possible.
It is understood that, in practical applications, the target actions may comprise multiple sets of target actions, and these sets may vary with the target scene. For example, while the virtual character performs a climbing action, if the crossbar spacing of the ladder in the target scene changes several times, the climbing action may be divided into multiple sets, and the first associated data between each set of climbing actions and the target scene is acquired, so that each set of target images is generated based on the first associated data corresponding to that set of climbing actions, and a target animation including the sets of target images is obtained.
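The overall per-group flow described above might be organized as in the following sketch, where get_association and render_group are hypothetical stand-ins for steps 102 and 103 of the method:
```python
def generate_target_animation(action_groups, target_scene, get_association, render_group):
    """Assemble the target animation group by group: acquire the first associated
    data for each set of target actions, render that set's target images, and
    concatenate the resulting image groups."""
    frames = []
    for group in action_groups:
        association = get_association(group, target_scene)   # step 102
        frames.extend(render_group(group, association))      # step 103
    return frames
```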
As described above, the first associated data embodies the intrinsic association between the virtual character executing the set of target actions and the target scene, so the virtual character in the target images can be matched with the target scene based on the first associated data. This improves animation generation efficiency and the smoothness of the virtual character's motion trajectory in the target images, gives the target animation a more natural and realistic visual effect, and greatly improves the user's visual experience.
In addition, because the process of acquiring the first associated data and the process of generating the target animation based on the first associated data do not need to adopt the action of manually remaking the virtual character, the workload caused by manually remaking a new animation in the traditional scheme can be reduced, and the animation generation efficiency is effectively improved.
In summary of the above description of the animation generation method shown in fig. 1, in order to more intuitively understand the execution process of the animation generation method, the following embodiments are combined to exemplarily describe how to generate a target animation.
In an alternative embodiment, such a practical application scenario is assumed: an animation is generated that performs a climbing action on ladder a (i.e., a goal animation).
When the user selects an action execution option in the client, an action execution instruction in the target scene loaded by the client is triggered; the instruction instructs a virtual climber (i.e. the virtual character) to execute a set of climbing actions on ladder a in the target scene. In response to the instruction, for the set of climbing actions currently executed by the virtual climber between crossbars 1 and 2 of ladder a, the client acquires the displacement trajectories (i.e. the first associated data) between the climber's limbs and ladder a while that set of actions is executed. Based on these displacement trajectories, a set of images (i.e. target images) of the virtual climber performing the climbing actions between crossbars 2 and 3 of ladder a is generated. By analogy, each set of images corresponding to the virtual climber climbing between the crossbars of ladder a is generated in real time, and the sets are combined into an animation of the virtual climber performing the climbing action on ladder a.
In another alternative embodiment, such a practical application scenario is also assumed: an animation of the jump between exposed rocks in the river (i.e., a target animation) is generated.
After a character m controlled by the user in the client completes a specified task, an action execution instruction in the target scene loaded by the client is triggered; the instruction instructs character m (i.e. the virtual character) to execute a set of jumping actions on the rocks in the river. In response to the instruction, for the set of jumping actions currently performed by character m between rock 1 and rock 2, the client acquires the displacement trajectories (i.e. the first associated data) between the character's feet and the two rocks while that set of actions is performed. Based on these displacement trajectories, a set of images (i.e. target images) of character m performing jumping actions between rock 2 and rock 3 is generated. By analogy, each set of images corresponding to character m jumping between the rocks exposed above the river surface is generated in real time, and the sets are combined into an animation of character m performing jumping actions on the exposed rocks.
In the execution process of the animation generation method shown in fig. 1, by acquiring first association data between a group of target actions executed by a virtual character and a target scene and applying the internal association between the virtual character and the target scene embodied by the first association data to the generation process of the target animation, the target actions can be matched with the target scene in the target animation, which is beneficial to improving animation generation efficiency, improving smoothness of action tracks, and bringing a more real and natural visual effect to a user. In addition, the matching process of the target action and the target scene can be realized without manually re-making the action of the virtual character in the execution process of the animation generation method, so that the workload caused by the traditional animation production scheme is effectively reduced, and the animation generation efficiency is improved.
The animation generation apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these animation generation devices can each be configured using commercially available hardware components through the steps taught by the present solution.
Fig. 6 is a schematic structural diagram of an animation generating apparatus according to an embodiment of the present invention. As shown in fig. 6, the animation generation device is loaded with a target scene, and includes: the device comprises a triggering module 11, a first obtaining module 12 and a generating module 13.
A triggering module 11, configured to trigger an action execution instruction in the target scene, where the action execution instruction is used to instruct a virtual character to execute a set of target actions in the target scene;
a first obtaining module 12, configured to obtain, in response to the action execution instruction, first association data between the set of target actions and the target scene;
a generating module 13, configured to generate a set of target images for performing the set of target actions by the virtual character based on the first associated data, so as to obtain a target animation including the set of target images.
Optionally, the first obtaining module 12 is specifically configured to: determining a plurality of body keypoints for the virtual character when performing the set of target actions; and acquiring displacement tracks of the plurality of body key points relative to a target associated object in the target scene as the first associated data.
Optionally, the animation generation apparatus further includes a second obtaining module.
The second obtaining module is specifically configured to: before the first obtaining module obtains displacement tracks of the plurality of body key points relative to a target associated object in the target scene as the first associated data, obtaining a group of reference images of a group of target actions executed by the virtual character in a reference scene; extracting second association data between the set of target actions and the reference scene from the set of reference images, wherein the second association data comprises displacement tracks of a plurality of body key points of the virtual character relative to a reference association object in the reference scene.
The first obtaining module 12 is specifically configured to, when obtaining displacement trajectories of the plurality of body key points relative to the target associated object in the target scene as the first associated data: and transforming the second associated data extracted by the second acquisition module based on the corresponding relation between the reference associated object and the target associated object to obtain the first associated data.
Optionally, the plurality of body key points comprise the limbs, and the first associated data comprises the displacement trajectories of the limbs relative to the target associated object.
Optionally, the set of target actions comprises a set of climbing actions.
When extracting the second associated data between the set of target actions and the reference scene from the set of reference images, the second obtaining module is specifically configured to: calibrate a plurality of contact points on the reference associated object corresponding to the set of climbing actions; and acquire, from the set of reference images, displacement trajectories of the limbs relative to the plurality of contact points on the reference associated object when the virtual character performs the set of climbing actions.
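As a purely illustrative sketch (assuming limb positions per reference image are already available, for example from pose estimation, and that the contact points have been calibrated beforehand; all names are hypothetical), the second associated data could be recorded as each limb's offset to its nearest contact point:

    # Minimal sketch: per-frame offsets of each limb to its nearest calibrated
    # contact point on the reference associated object (hypothetical names).
    import math

    def extract_second_associated_data(limb_positions_per_frame, contact_points):
        """limb_positions_per_frame: [{limb_name: (x, y, z)} for each reference image].
        Returns {limb_name: [(dx, dy, dz) for each reference image]}."""
        data = {}
        for frame in limb_positions_per_frame:
            for limb, pos in frame.items():
                nearest = min(contact_points, key=lambda c: math.dist(c, pos))
                data.setdefault(limb, []).append(
                    tuple(p - c for p, c in zip(pos, nearest)))
        return data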
Optionally, the correspondence between the reference associated object and the target associated object includes a correspondence between each contact point on the reference associated object and each contact point on the target associated object.
When transforming the second associated data based on the correspondence between the reference associated object and the target associated object to obtain the first associated data, the first obtaining module 12 is specifically configured to: determine, based on the correspondence between each contact point on the reference associated object and each contact point on the target associated object, the ratio of the distance between the contact points on the target associated object to the distance between the corresponding contact points on the reference associated object; and perform scaling transformation on the second associated data based on this ratio, to obtain the first associated data between the set of climbing actions and the target associated object.
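A minimal Python sketch of this scaling step, assuming the contact points of each object are given as ordered 3D coordinates and the trajectories take the form produced in the sketch above (all names are hypothetical):

    # Minimal sketch: scale the reference trajectories by the ratio of the
    # contact-point spacing on the target object to that on the reference object.
    import math

    def contact_spacing(contact_points):
        # Total distance along consecutive contact points.
        return sum(math.dist(contact_points[i], contact_points[i + 1])
                   for i in range(len(contact_points) - 1))

    def scale_trajectories(second_associated_data, ref_contacts, target_contacts):
        ratio = contact_spacing(target_contacts) / contact_spacing(ref_contacts)
        return {name: [(dx * ratio, dy * ratio, dz * ratio) for (dx, dy, dz) in frames]
                for name, frames in second_associated_data.items()}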
Optionally, the plurality of body key points further include a root joint.
When obtaining the displacement trajectories of the plurality of body key points relative to the target associated object in the target scene as the first associated data, the first obtaining module 12 is specifically configured to: acquire a displacement speed or a displacement distance of the root joint relative to the target associated object as the first associated data.
Optionally, the set of target actions comprises a set of climbing actions.
When acquiring the displacement speed of the root joint relative to the target associated object as the first associated data, the first obtaining module 12 is specifically configured to: determine a plurality of contact points on the target associated object corresponding to the set of climbing actions; and if the distances between the contact points on the target associated object are equal, take the ratio of the total distance between the contact points to the duration corresponding to the set of climbing actions as the displacement speed of the root joint relative to the target associated object.
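For example, a minimal sketch of this computation, assuming the contact points are ordered along the climbing path and given as 3D coordinates (names are hypothetical):

    # Minimal sketch: root-joint displacement speed as the total contact-point
    # distance divided by the duration of the set of climbing actions.
    import math

    def root_joint_speed(contact_points, duration_seconds):
        total_distance = sum(math.dist(contact_points[i], contact_points[i + 1])
                             for i in range(len(contact_points) - 1))
        return total_distance / duration_seconds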
Optionally, the set of target actions comprises a set of climbing actions.
When acquiring the displacement distance of the root joint relative to the target associated object as the first associated data, the first obtaining module 12 is specifically configured to: determine a plurality of contact points on the target associated object corresponding to the set of climbing actions; and if the distances between the contact points on the target associated object are not equal, for two frames of images corresponding to the set of climbing actions, take the average of the distance differences of the limbs of the virtual character between the two frames as the displacement distance of the root joint relative to the target associated object between those two frames.
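A minimal illustrative sketch of this per-frame-pair computation, assuming limb positions for the two frames are already available (names are hypothetical):

    # Minimal sketch: root-joint displacement between two frames approximated as
    # the average distance moved by the virtual character's limbs.
    import math

    def root_joint_displacement(limbs_frame_a, limbs_frame_b):
        """Both arguments: {limb_name: (x, y, z)} for one frame."""
        diffs = [math.dist(limbs_frame_a[name], limbs_frame_b[name])
                 for name in limbs_frame_a]
        return sum(diffs) / len(diffs)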
The animation generation apparatus shown in fig. 6 can execute the methods provided in the foregoing embodiments; for parts not described in detail in this embodiment, reference may be made to the related descriptions of the foregoing embodiments, which are not repeated here.
In one possible design, the animation generation apparatus shown in fig. 6 may be implemented as an electronic device.
As shown in fig. 7, the electronic device may include: a processor 21 and a memory 22. The memory 22 stores executable code which, when executed by the processor 21, causes the processor 21 to implement at least the animation generation method provided in the foregoing embodiments. The electronic device may further include a communication interface 23 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium having executable code stored thereon, where the executable code, when executed by a processor of an electronic device, causes the processor to perform the animation generation method provided in the foregoing embodiments.
The apparatus embodiments described above are merely illustrative, and the modules described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which can be understood and implemented by those of ordinary skill in the art without inventive effort.
From the above description of the embodiments, those skilled in the art will clearly understand that the embodiments can be implemented by means of software plus a necessary general hardware platform, and certainly can also be implemented by a combination of hardware and software. Based on this understanding, the above technical solutions may be embodied in the form of a computer program product, which may be carried on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An animation generation method applied to a service program loaded with a target scene, the method comprising:
obtaining a set of reference images of a virtual character performing a set of target actions in a reference scene;
extracting second associated data between the set of target actions and the reference scene from the set of reference images, wherein the second associated data comprises displacement trajectories of a plurality of body key points of the virtual character relative to a reference associated object in the reference scene;
transforming the second associated data based on a correspondence between the reference associated object and a target associated object in the target scene, to obtain first associated data between the set of target actions and the target scene;
generating, based on the first associated data, a set of target images of the virtual character performing the set of target actions, to obtain a target animation including the set of target images.
2. The method of claim 1, wherein the plurality of body key points comprise limbs, and the first associated data comprises displacement trajectories of the limbs relative to the target associated object.
3. The method of claim 2, wherein the set of target actions comprises a set of climbing actions;
the extracting, from the set of reference images, second association data between the set of target actions and the reference scene includes:
calibrating a plurality of contact points on the reference associated object corresponding to the group of climbing actions;
and acquiring displacement tracks of limbs relative to a plurality of contact points on the reference associated object when the virtual character performs the climbing action group from the group of reference images.
4. The method according to claim 3, wherein the correspondence between the reference associated object and the target associated object comprises correspondence between each contact point on the reference associated object and each contact point on the target associated object;
the transforming the second associated data based on the corresponding relationship between the reference associated object and the target associated object to obtain the first associated data includes:
determining, based on the correspondence between each contact point on the reference associated object and each contact point on the target associated object, a ratio of the distance between the contact points on the target associated object to the distance between the corresponding contact points on the reference associated object;
and performing scaling transformation on the second associated data based on the ratio, to obtain the first associated data between the set of climbing actions and the target associated object.
5. The method of claim 1, wherein the plurality of body key points comprise a root joint;
the method further comprises:
and acquiring the displacement speed or displacement distance of the root joint relative to the target associated object as the first associated data.
6. The method of claim 5, wherein the set of target actions comprises a set of climbing actions;
the acquiring a displacement speed of the root joint relative to the target associated object as the first associated data comprises:
determining a plurality of contact points on the target associated object corresponding to the set of climbing actions;
and if the distances between the contact points on the target associated object are equal, taking the ratio of the total distance between the contact points to the duration corresponding to the set of climbing actions as the displacement speed of the root joint relative to the target associated object.
7. The method of claim 5, wherein the set of target actions comprises a set of climbing actions;
the acquiring a displacement distance of the root joint relative to the target associated object as the first associated data comprises:
determining a plurality of contact points on the target associated object corresponding to the set of climbing actions;
and if the distances between the contact points on the target associated object are not equal, for two frames of images corresponding to the set of climbing actions, taking the average of the distance differences of the limbs of the virtual character between the two frames as the displacement distance of the root joint relative to the target associated object between the two frames.
8. The method of claim 1, further comprising:
triggering an action execution instruction in the target scene, wherein the action execution instruction is used for instructing the virtual character to execute the set of target actions in the target scene;
and in response to the action execution instruction, acquiring the first associated data.
9. An animation generation apparatus loaded with a target scene, comprising:
a second obtaining module, configured to obtain a set of reference images of the virtual character performing a set of target actions in a reference scene, and extract second associated data between the set of target actions and the reference scene from the set of reference images, wherein the second associated data comprises displacement trajectories of a plurality of body key points of the virtual character relative to a reference associated object in the reference scene;
a first obtaining module, configured to transform the second associated data based on a correspondence between the reference associated object and a target associated object in the target scene, to obtain first associated data between the set of target actions and the target scene;
a generating module, configured to generate, based on the first associated data, a set of target images of the virtual character performing the set of target actions, to obtain a target animation including the set of target images.
10. An electronic device, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform the animation generation method as claimed in any of claims 1 to 8.
CN202110875167.2A 2020-05-15 2020-05-15 Animation generation method, device and equipment Pending CN113744372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110875167.2A CN113744372A (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110875167.2A CN113744372A (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment
CN202010413388.3A CN111768474B (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010413388.3A Division CN111768474B (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment

Publications (1)

Publication Number Publication Date
CN113744372A true CN113744372A (en) 2021-12-03

Family

ID=72720723

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202010413388.3A Active CN111768474B (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment
CN202110875177.6A Pending CN113744373A (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment
CN202110875167.2A Pending CN113744372A (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202010413388.3A Active CN111768474B (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment
CN202110875177.6A Pending CN113744373A (en) 2020-05-15 2020-05-15 Animation generation method, device and equipment

Country Status (1)

Country Link
CN (3) CN111768474B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489408A (en) * 2022-02-11 2022-05-13 百果园技术(新加坡)有限公司 Animation processing system, method, device and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888724A (en) * 2021-09-30 2022-01-04 北京字节跳动网络技术有限公司 Animation display method, device and equipment


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004341567A (en) * 2003-05-12 2004-12-02 Namco Ltd Image generating system, animation generating method, script, information storage medium, and method for preparing script for animation image generation
CN101504774A (en) * 2009-03-06 2009-08-12 暨南大学 Animation design engine based on virtual reality
CN102157009A (en) * 2011-05-24 2011-08-17 中国科学院自动化研究所 Method for compiling three-dimensional human skeleton motion based on motion capture data
TWI473036B (en) * 2012-06-29 2015-02-11 Reallusion Inc The system and method of automatic adaptation of terrain to virtual terrain
CN106582012B (en) * 2016-12-07 2018-12-11 腾讯科技(深圳)有限公司 Climbing operation processing method and device under a kind of VR scene
CN107067451B (en) * 2017-04-07 2021-07-13 阿里巴巴(中国)有限公司 Method and device for realizing dynamic skeleton in animation
US10210391B1 (en) * 2017-08-07 2019-02-19 Mitsubishi Electric Research Laboratories, Inc. Method and system for detecting actions in videos using contour sequences
CN108182719A (en) * 2017-12-28 2018-06-19 北京聚力维度科技有限公司 The traveling animation producing method and device of the adaptive obstacle terrain of artificial intelligence
CN109282810A (en) * 2018-09-05 2019-01-29 广州市蚺灵科技有限公司 A kind of snake-shaped robot Attitude estimation method of inertial navigation and angular transducer fusion
CN110298907B (en) * 2019-07-04 2023-07-25 广州西山居网络科技有限公司 Virtual character action control method and device, computing equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160267942A1 (en) * 2013-10-24 2016-09-15 Visible Ink Television Ltd Motion tracking system
CN109191548A (en) * 2018-08-28 2019-01-11 百度在线网络技术(北京)有限公司 Animation method, device, equipment and storage medium
CN109224437A (en) * 2018-08-28 2019-01-18 腾讯科技(深圳)有限公司 The exchange method and terminal and storage medium of a kind of application scenarios
CN109731330A (en) * 2019-01-31 2019-05-10 腾讯科技(深圳)有限公司 The display methods and device of picture, storage medium, electronic device
CN110264554A (en) * 2019-06-24 2019-09-20 网易(杭州)网络有限公司 Processing method, device, storage medium and the electronic device of animation information
CN111028317A (en) * 2019-11-14 2020-04-17 腾讯科技(深圳)有限公司 Animation generation method, device and equipment for virtual object and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHEN LIANGZHU: "Optimized Design and Implementation of a Cocos2d-X Engine Based on ECS", CNKI Outstanding Master's Theses Full-text Database, Information Science and Technology, 3 September 2019 (2019-09-03), pages 40 - 56 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489408A (en) * 2022-02-11 2022-05-13 百果园技术(新加坡)有限公司 Animation processing system, method, device and storage medium
CN114489408B (en) * 2022-02-11 2023-11-24 百果园技术(新加坡)有限公司 Animation processing system, method, device and storage medium

Also Published As

Publication number Publication date
CN113744373A (en) 2021-12-03
CN111768474B (en) 2021-08-20
CN111768474A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
Yannakakis Game AI revisited
US10839954B2 (en) Dynamic exercise content
CN106780681B (en) Role action generation method and device
CN109815776B (en) Action prompting method and device, storage medium and electronic device
CN111768474B (en) Animation generation method, device and equipment
US20220080260A1 (en) Pose comparison systems and methods using mobile computing devices
CN104281265B (en) A kind of control method of application program, device and electronic equipment
CN106709976B (en) Skeleton animation generation method and device
US10049483B2 (en) Apparatus and method for generating animation
US10108855B2 (en) Fitness device-based simulator and simulation method using the same
US11816772B2 (en) System for customizing in-game character animations by players
US10885691B1 (en) Multiple character motion capture
US20230394735A1 (en) Enhanced animation generation based on video with local phase
CN111080752A (en) Action sequence generation method and device based on audio and electronic equipment
CN114093021A (en) Dance video motion extraction method and device, computer equipment and storage medium
US11830121B1 (en) Neural animation layering for synthesizing martial arts movements
Tharatipyakul et al. Pose estimation for facilitating movement learning from online videos
CN112891947B (en) Jump animation processing method, apparatus, electronic device and computer readable medium
CN114968044B (en) Picture display method and device, electronic equipment and storage medium
CN110300118A (en) Streaming Media processing method, device and storage medium
Lin et al. Temporal IK: Data-Driven Pose Estimation for Virtual Reality
Pantuwong A tangible interface for 3D character animation using augmented reality technology
CN114155325A (en) Virtual character animation generation method and system
CN112752146A (en) Video quality evaluation method and device, computer equipment and storage medium
CN114356100B (en) Body-building action guiding method, body-building action guiding device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination