CN110264554B - Method and device for processing animation information, storage medium and electronic device
- Publication number
- CN110264554B CN110264554B CN201910549094.0A CN201910549094A CN110264554B CN 110264554 B CN110264554 B CN 110264554B CN 201910549094 A CN201910549094 A CN 201910549094A CN 110264554 B CN110264554 B CN 110264554B
- Authority
- CN
- China
- Prior art keywords
- target
- animation
- gesture
- information
- animation information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
Abstract
The invention discloses a method and device for processing animation information, a storage medium, and an electronic device. The method comprises the following steps: acquiring target animation information associated with a virtual object at a first moment, wherein the target animation information comprises information of a target animation obtained by fusing a plurality of first animations, and the target animation is used for generating a target gesture corresponding to a target action executed by the virtual object in a virtual scene; determining, according to the target animation information, that the target animation is in a stable state, and establishing a target mapping relation cache of first sub-target animation information in the target animation information and the target gesture; and determining that second sub-target animation information associated with the virtual object at a second moment corresponds to the first sub-target animation information, and controlling the gesture of the action executed by the virtual object at the second moment to be the target gesture, wherein the second moment is a moment after the first moment. The invention thereby achieves the effect of improving the efficiency of processing animation information.
Description
Technical Field
The present invention relates to the field of computers, and in particular to a method and apparatus for processing animation information, a storage medium, and an electronic device.
Background
Currently, rich and smooth animation is usually achieved through animation fusion systems such as animation trees and animation graphs. Although complex animation logic can be implemented with animation trees and animation graphs, such an animation system easily causes performance problems; it is commonly optimized by reducing the depth of the animation tree or graph and by reducing the number of animations participating in fusion at the same time.
However, reducing the depth of the animation tree or animation graph limits the expressiveness of animation tree or graph design, and reducing the number of animations simultaneously participating in fusion degrades the presentation effect of the final animation, resulting in inefficient processing of animation information.
Regarding the problem of low efficiency in processing animation information in the prior art, no effective solution has yet been proposed.
Disclosure of Invention
The invention mainly aims to provide a method and device for processing animation information, a storage medium, and an electronic device, so as to at least solve the technical problem of low efficiency in processing animation information.
In order to achieve the above object, according to one aspect of the present invention, there is provided a processing method of animation information. The method comprises the following steps: acquiring target animation information associated with a virtual object at a first moment, wherein the target animation information comprises information of target animations obtained by fusing a plurality of first animations, and the target animations are used for generating target gestures corresponding to target actions executed by the virtual object in a virtual scene; determining that the target animation is in a stable state according to the target animation information, and establishing a target mapping relation cache of first sub-target animation information and target gestures in the target animation information; and determining that second sub-target animation information associated with the virtual object at a second moment corresponds to the first sub-target animation information, and controlling the gesture of the action executed by the virtual object at the second moment to be a target gesture, wherein the second moment is a moment after the first moment.
Optionally, the first sub-target animation information includes at least one of: information of a target operation, a current playing progress, and a plurality of first animations, wherein the target operation is used for controlling the virtual object to execute the target action, and the current playing progress is determined through the playing time and the playing rate of the target animation.
Optionally, establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information includes: determining the first sub-target animation information as the target key of a target key-value pair, and determining the target gesture as the target value of the target key-value pair.
Optionally, before establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information, the method further includes: determining that the fusion proportion of each first animation used for fusing the target animation remains unchanged in a target time period, and thereby determining that the target animation is in a stable state, wherein the target animation information includes each first animation.
Optionally, when the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is established, the method further comprises one of the following steps: caching a skeleton matrix of each animation frame of the target animation according to the target frame rate; determining that the difference between the target gesture and the gesture cached last time is greater than a first target threshold value, and caching the target gesture; and determining a target storage position associated with the current playing progress of the target animation from a plurality of storage positions, and caching the target gesture of the virtual object at the current playing progress to the target storage position, wherein each storage position is used for caching the gesture of the virtual object at the playing progress associated with each storage position, and each two adjacent storage positions allow the distance between animation frames corresponding to the stored gesture to be larger than a second target threshold.
Optionally, the method further comprises one of: determining that the difference value between the current playing progress and the first playing progress is within a third target threshold value, and discarding the target gesture of the virtual object at the current playing progress, wherein the first playing progress is the playing progress of the target animation corresponding to the gesture stored last time by the target storage position; and determining that the difference value between the current playing progress and the second playing progress is within a fourth target threshold, and discarding the target gesture of the virtual object at the current playing progress, wherein the second playing progress is the minimum playing progress of the target animation corresponding to the gesture allowed to be stored at the next storage position adjacent to the target storage position.
Optionally, the target animation information includes at least one of: information for controlling a virtual object to perform a target operation of a target action; identification information of a plurality of first animations; a blend ratio for each first animation; the length of the target animation; the current playing progress of the target animation; the play rate of the target animation.
Optionally, determining that the target animation is in a stable state according to the target animation information, and establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information includes: determining that the target animation is in a stable state according to the target animation information, determining that the first sub-target animation information and the target gesture are not yet cached at the target cache position, and establishing, at the target cache position, the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information.
Optionally, after establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information, the method further includes: determining that the capacity occupied by the data stored in the target cache position exceeds the target capacity, and deleting the first sub-target animation information and the target gesture when the target gesture has not been used within a target time.
In order to achieve the above object, according to another aspect of the present invention, there is also provided a processing apparatus of animation information. The device comprises: an acquisition unit, used for acquiring target animation information associated with a virtual object at a first moment, wherein the target animation information comprises information of a target animation obtained by fusing a plurality of first animations, and the target animation is used for generating a target gesture corresponding to a target action executed by the virtual object in a virtual scene; a first processing unit, used for determining that the target animation is in a stable state according to the target animation information, and establishing a target mapping relation cache of first sub-target animation information and the target gesture in the target animation information; and a second processing unit, used for determining that second sub-target animation information associated with the virtual object at a second moment corresponds to the first sub-target animation information, and controlling the gesture of the action executed by the virtual object at the second moment to be the target gesture, wherein the second moment is a moment after the first moment.
In order to achieve the above object, according to another aspect of the present invention, there is also provided a storage medium. The storage medium stores a computer program, wherein the computer program is configured to execute the processing method of the animation information of the embodiment of the present invention at the time of execution.
In order to achieve the above object, according to another aspect of the present invention, there is also provided an electronic device. The electronic device comprises a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to execute the processing method of the animation information according to the embodiment of the invention.
According to the method and the device, the target animation information associated with the virtual object at the first moment is acquired, wherein the target animation information comprises information of target animations obtained by fusing a plurality of first animations, and the target animations are used for generating target gestures corresponding to target actions executed by the virtual object in a virtual scene; determining that the target animation is in a stable state according to the target animation information, and establishing a target mapping relation cache of first sub-target animation information and target gestures in the target animation information; and determining that second sub-target animation information associated with the virtual object at a second moment corresponds to the first sub-target animation information, and controlling the gesture of the action executed by the virtual object at the second moment to be a target gesture, wherein the second moment is a moment after the first moment. That is, the current gesture of the virtual object is cached under the reproducible condition through the target animation information, and the target gesture is directly called without recalculation under the condition that the same animation information appears later, so that the calculation cost is reduced, the technical effect of improving the processing efficiency of the animation information is achieved, and the technical problem of low processing efficiency of the animation information is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention. In the drawings:
fig. 1 is a block diagram of the hardware structure of a mobile terminal for a processing method of animation information according to an embodiment of the present invention;
fig. 2 is a flowchart of a processing method of animation information according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the overhead of an animation system, according to an embodiment of the invention;
FIG. 4 is a flow chart of a method of animation caching, according to an embodiment of the invention;
FIG. 5 is a flow chart of an animation caching method that introduces a stability evaluation function into the animation system, according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a storage structure for caching data according to an embodiment of the invention;
FIG. 7 is a flowchart of an animation caching method according to the related art;
FIG. 8 is a flowchart of an animation caching method according to an embodiment of the present invention; and
fig. 9 is a schematic diagram of a processing apparatus of animation information according to an embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
In order to make the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below in detail with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the present application described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method embodiments provided by the embodiments of the present application may be performed in a mobile terminal, a computer terminal, or similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of a mobile terminal according to a processing method of animation information according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a processing method of animation information in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for processing animation information running on the mobile terminal is provided, which may be a method for optimizing the efficiency of an animation system (Animation System). The animation system controls a virtual object to realize various motion expressions in response to a user's input operations. A virtual object carries a finite set of built-in animations; based on these, the animation system fuses and transitions them in certain proportions according to the user's input operations, thereby obtaining actions under various conditions.
Fig. 2 is a flowchart of a processing method of animation information according to an embodiment of the present invention. As shown in fig. 2, the process includes the steps of:
Step S202, obtaining target animation information associated with the virtual object at a first moment, wherein the target animation information comprises information of target animations obtained by fusing a plurality of first animations, and the target animations are used for generating target gestures corresponding to target actions executed by the virtual object in a virtual scene.
In the technical solution provided in the above step S202, the virtual object may be a virtual control object in a virtual scene, for example, a game character in a game scene. The target animation information associated with the virtual object may be effective state information of the animation system obtained through a stability evaluation function (Animation State Evaluation), and includes information of a target animation obtained by fusing a plurality of first animations, where the stability evaluation function may be a component in the engine of the animation system. An animation is an art resource corresponding to a specific action; it may be produced with 3ds Max or Maya software, or generated by motion capture, and serves as a basis for synthesizing other actions. The built-in animations are a limited number of representative actions, for example walking forward and walking leftward. In addition, because of animation transitions, the current animation needs to be fused with the previous animation, and since the previous animation differs, the skeletal representation corresponding to the same user input may differ. When the user switches from one animation to another, animation transition is introduced to avoid jumps in the gesture of the virtual object; during the transition, the gesture of the virtual object is fused from the animations before and after. Taking movement as an example, when the virtual object starts moving from a static state there is an acceleration process; no dedicated acceleration art is produced, and the effect is realized through animation transition.
In this embodiment, the information related to the target animation may be the target animation information. For example, when an operation instruction input by the user triggers the virtual object to perform the target action corresponding to the target animation, the target animation information may be that operation instruction. The target animation is used to generate the target gesture (Pose) corresponding to the target action performed by the virtual object in the virtual scene, and the target action represents the target gesture of the virtual object and its change process under various situations, which are the transitional and fused representation of the plurality of first animations. For example, "walking" is an action, and walking toward the front-left may be formed by fusing the "walk forward" animation and the "walk left" animation in a certain proportion; as another example, the target animation of a virtual object being hit while walking and casting is obtained by fusing two walking animations, one casting animation, and one hit animation.
The target animation information of this embodiment may also be a list of names of a plurality of first animations, a length of a target animation, a play progress of a target animation, a play rate of a target animation, or the like.
Step S204, determining that the target animation is in a stable state according to the target animation information, and establishing a target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information.
In the technical solution provided in the above step S204, after the target animation information associated with the virtual object at the first moment is obtained, it is determined according to the target animation information that the target animation is in a stable state, and a target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is established. Optionally, the embodiment determines whether the target animation is in a stable state from the target animation information by means of the stability evaluation function. Since the proportions of the animations participating in fusion do not change in a stable animation, if the fusion proportion of each first animation participating in fusion is unchanged, it can be determined that the target animation is in a stable state, and further that the target animation information and the target gesture corresponding to the target action are reproducible at subsequent times; that is, they have repeatability, and the current state information of the animation system is reproducible. Optionally, cyclically repeated actions are mostly reproducible, such as sustained walking, running, crawling, and attacking actions.
If it is determined from the target animation information that the target animation is in a stable state, a target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is established. The first sub-target animation information and the data of the target gesture may be stored as cache data according to the target mapping cache relation, and the data of the target gesture may be the skeleton transformation information of each animation frame (which may be matrices or quaternions). Optionally, in this embodiment the target mapping relation cache is established at a target storage position through an animation cache system (Animation Cache System); the target mapping relation cache places the first sub-target animation information and the target gesture in one-to-one correspondence. The first sub-target animation information may include the user's input operation, the animation playing progress, the animations participating in fusion, and the like, and the target gesture corresponding to the target action can be quickly found by table look-up according to the target mapping relation cache.
In step S206, it is determined that the second sub-target animation information associated with the virtual object at the second time corresponds to the first sub-target animation information, and the gesture of the action performed by the virtual object at the second time is controlled to be the target gesture.
In the technical solution provided in the above step S206 of the present invention, after determining that the target animation is in a stable state according to the target animation information and establishing a target mapping relationship cache of the first sub-target animation information and the target gesture in the target animation information, determining that the second sub-target animation information associated with the virtual object at the second moment corresponds to the first sub-target animation information, and controlling the gesture of the action executed by the virtual object at the second moment to be the target gesture.
The second moment of this embodiment is a moment after the first moment. That the second sub-target animation information associated with the virtual object at the second moment corresponds to the first sub-target animation information may mean that the two are the same: for example, the user inputs the same input operation, the animations are played to the same progress, and the animations participating in fusion are the same. In that case the gesture of the action executed by the virtual object at the second moment may be determined to be the target gesture. That is, if the input operation and the playing progress are consistent and the animations participating in fusion are the same, the finally presented gesture of the stable animation is uniquely determined and may be obtained directly from the cache, so that the calculation of the target action is skipped entirely; for example, the process of updating the animation tree and of sampling and interpolating the animation data may be skipped directly. The running speed of the animation system can thus be greatly optimized without limiting the complexity of the animation tree or animation graph, without affecting the fusion effect, and without affecting the design of the animation tree or graph or the animation production flow, so that a good optimization effect is obtained.
In this embodiment, the cache system does not cache all animation data; only animation data that can be stably reproduced is cached, thereby avoiding the problem of cache-data explosion. In practical games, most walking, running, and jumping actions are stably reproducible. With this method, the performance of the animation system can be remarkably improved.
Through the steps S202 to S206, the method includes acquiring target animation information associated with a virtual object at a first moment, where the target animation information includes information of a target animation obtained by fusing a plurality of first animations, and the target animation is used to generate a target gesture corresponding to a target action executed by the virtual object in a virtual scene; determining that the target animation is in a stable state according to the target animation information, and establishing a target mapping relation cache of first sub-target animation information and target gestures in the target animation information; and determining that second sub-target animation information associated with the virtual object at a second moment corresponds to the first sub-target animation information, and controlling the gesture of the action executed by the virtual object at the second moment to be a target gesture, wherein the second moment is a moment after the first moment. That is, the current gesture of the virtual object is cached under the reproducible condition through the target animation information, and the target gesture is directly called without recalculation under the condition that the same animation information appears later, so that the calculation cost is reduced, the technical effect of improving the processing efficiency of the animation information is achieved, and the technical problem of low processing efficiency of the animation information is solved.
As an alternative embodiment, the first sub-target animation information includes at least one of: information of a target operation, a current playing progress, and a plurality of first animations, wherein the target operation is used for controlling the virtual object to execute the target action, and the current playing progress is determined through the playing time and the playing rate of the target animation.
In this embodiment, the first sub-target animation information and the target gesture have a target mapping relation cache, which may be used to find the target gesture. The first sub-target animation information may include information of the target operation, where the target operation may be an input operation for controlling the virtual object to execute the target action; the current playing progress, which may be the playing progress of the target animation at the first moment, determined by the playing time and the playing rate of the target animation; and the plurality of first animations participating in fusion.
When the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is established, the cache may be expressed as Value = f(Input, Clips, Phase), where Value represents the target gesture, f() represents the target mapping relation, Input represents the information of the target operation, Clips represents the plurality of first animations participating in fusion, and Phase represents the current playing progress.
Optionally, this embodiment introduces a time variable and combines the animation playing time and playing rate, so that the playing progress can be calculated. When the playing progress at the second moment is determined from the playing time and playing rate of the target animation, if the target action is a non-cyclic action, for example an attack action: a target product between the target difference between the second moment and the first moment and the playing rate of the target animation is acquired; a target quotient between the target product and the length of the target animation is acquired; a first sum of the target quotient and the current playing progress of the target animation is acquired; and the minimum of a preset target value and the first sum is determined as the playing progress at the second moment, which can be expressed by the following formula:
phase1 = min(phase0 + v × (t1 - t0) / duration, 1.0), where phase1 represents the playing progress at the second moment, phase0 the current playing progress, v the playing rate of the target animation, t1 the second moment, t0 the first moment, and duration the length of the target animation; 1.0 is the preset target value. The min operation prevents phase1 from exceeding 1, so its value stays within [0, 1].
In this embodiment, a cyclic action continues playing from the beginning after it finishes, for example a walking action or a running action. In the case that the target action is a cyclic action: the target product between the target difference between the second moment and the first moment and the playing rate of the target animation is acquired; the target quotient between the target product and the length of the target animation is acquired; a second sum of the target quotient and the current playing progress of the target animation is acquired; the remainder of the second sum with respect to the target value is taken to obtain a first value; and the first value is determined as the playing progress at the second moment, which can be expressed by the following formula:
phase1 = (phase0 + v × (t1 - t0) / duration) mod 1.0, where phase1 represents the playing progress at the second moment, phase0 the current playing progress, v the playing rate of the target animation, t1 the second moment, t0 the first moment, and duration the length of the target animation; the target value is 1.0.
For example, suppose the target animation is a cyclic animation whose length is 2 seconds (duration = 2), the playing rate is 1 (v = 1), and playing starts from 0 (phase0 = 0). The playing progress after 3 seconds (t1 - t0 = 3) is: phase1 = (0 + 1 × 3/2) mod 1.0 = 1.5 mod 1.0 = 0.5; that is, after 3 seconds the animation playing progress is exactly 0.5.
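The two progress formulas above can be combined into one small routine. A minimal sketch with hypothetical names, not part of the patent itself:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Playing progress at time t1, given progress phase0 at time t0, playing rate
// v, and animation length `duration` in seconds; progress is normalized to [0, 1].
double nextPhase(double phase0, double v, double t0, double t1,
                 double duration, bool cyclic) {
    double advanced = phase0 + v * (t1 - t0) / duration;
    if (cyclic) return std::fmod(advanced, 1.0);  // cyclic action: wrap around
    return std::min(advanced, 1.0);               // non-cyclic: clamp at the end
}

int main() {
    // The worked example above: a 2-second cyclic clip at rate 1 from progress 0;
    // after 3 seconds, (0 + 1 * 3 / 2) mod 1.0 = 0.5.
    std::printf("%.1f\n", nextPhase(0.0, 1.0, 0.0, 3.0, 2.0, true));  // 0.5
    return 0;
}
```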
The playing progress at the second moment is determined by the above method; the gesture of the action executed by the virtual object at the second moment is the target gesture provided that the information of the target operation, the playing progress of the animation, and the animations participating in fusion associated with the virtual object at the second moment are respectively the same as the cached information of the target operation, the current playing progress, and the plurality of first animations.
After the time variable is introduced, when the target animation information and the corresponding target gesture are reproducible, the cached first sub-target animation information and target gesture can be picked up at any point of the animation playing process, and the original flow can still compute the animation at any moment. Because the information of the target operation, the current playing progress, the time, and the first animations participating in fusion are all fully stored, switching to or from the cache does not cause any visible change of representation or state in the animation system, thereby improving the efficiency of processing the animation information.
When time reaches the second moment, the second moment may in turn be treated as a new first moment. The playing rate v of the target animation, the current playing progress phase0, the length duration of the target animation, and the current time t0 can be obtained from the engine of the animation system. With the time variable t introduced, the whole target animation may be expressed as Value = f(Input, t, Clips). By introducing the stability evaluation function and the time variable, this embodiment can ensure the validity of the cached data and smooth transitions.
As an optional implementation manner, in step S204, establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information includes: determining the first sub-target animation information as the target key of a target key-value pair, and determining the target gesture as the target value of the target key-value pair.
In this embodiment, when the target mapping relation cache of the first sub-target animation information and the target gesture is established, the information of the target operation, the current playing progress of the target animation, the plurality of first animations, and the target gesture may be stored in the form of key-value pairs. The information of the target operation, the current playing progress of the target animation, and the plurality of first animations may be determined as the target key (Key) of the target key-value pair, and the target gesture may be determined as the target value (Value) of the target key-value pair. Thus, when the second sub-target animation information associated with the virtual object is the same as the first sub-target animation information, the corresponding target key is looked up first, and the target gesture found through the target key is used as the gesture of the action performed by the virtual object at the second moment.
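A minimal sketch of such a key-value cache; all structure and names here are hypothetical, assuming the key combines the target operation, the identifiers of the fused first animations, and a quantized playing progress:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// A gesture (pose): one 4x4 skeleton transform per bone, flattened to 16 floats.
using BoneMatrix = std::array<float, 16>;
using Pose = std::vector<BoneMatrix>;

// Target key: information of the target operation, the first animations
// participating in fusion, and the playing progress (quantized so that nearby
// progress values share one entry).
struct PoseKey {
    std::string input;                 // target operation, e.g. "run+attack"
    std::vector<std::uint32_t> clips;  // identifiers of the fused first animations
    std::uint32_t phaseStep;           // quantized current playing progress

    bool operator==(const PoseKey& o) const {
        return input == o.input && clips == o.clips && phaseStep == o.phaseStep;
    }
};

struct PoseKeyHash {
    std::size_t operator()(const PoseKey& k) const {
        std::size_t h = std::hash<std::string>{}(k.input) ^ (k.phaseStep * 2654435761u);
        for (std::uint32_t c : k.clips) h = h * 31 + c;
        return h;
    }
};

// The target mapping relation cache: target key -> target value (the gesture).
using PoseCache = std::unordered_map<PoseKey, Pose, PoseKeyHash>;
```

On a hit, retrieving the target gesture is a single hash-table look-up, which is what allows the tree update and interpolation to be skipped.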
As an optional implementation manner, before establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information in step S204, the method further includes: determining that the fusion proportion of each first animation used for fusing the target animation remains unchanged in a target time period, and thereby determining that the target animation is in a stable state, wherein the target animation information includes each first animation.
In this embodiment, before the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is established, it is necessary to determine whether the target animation is in a stable state, that is, whether the animation system is in a stable state. The fusion proportion of each first animation used for fusing the target animation is acquired; the fusion proportion is the animation fusion weight. It is then judged whether the fusion proportion of each first animation participating in fusion remains unchanged within the target time period. If so, the target animation is determined to be in a stable state. For example, for an animation of attacking while running, suppose the fusion proportion of the running animation is 1/3 and that of the attacking animation is 2/3; if these proportions remain 1/3 and 2/3 throughout the target time period, the target animation is determined to be in a stable state, and the target mapping relation cache of the first sub-target animation information and the target gesture is established, so that when the second sub-target animation information associated with the virtual object at the second moment is the same as the first sub-target animation information, the gesture of the action executed by the virtual object at the second moment is the target gesture. The calculation of the target action is thereby skipped entirely, the running speed of the animation system is greatly optimized, and the efficiency of processing the animation information is improved.
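The stability judgment can be sketched as follows, assuming the engine can report the fusion weights of the fused animations over recent frames; the function and parameter names are illustrative only:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// The target animation is considered stable when the fusion proportion of every
// first animation participating in fusion stays unchanged (within a small
// epsilon) over the frames sampled during the target time period.
bool isStable(const std::vector<std::vector<float>>& weightHistory,
              float eps = 1e-4f) {
    if (weightHistory.size() < 2) return false;
    const std::vector<float>& first = weightHistory.front();
    for (const auto& weights : weightHistory) {
        if (weights.size() != first.size()) return false;  // fused clip set changed
        for (std::size_t i = 0; i < weights.size(); ++i)
            if (std::fabs(weights[i] - first[i]) > eps) return false;
    }
    return true;
}

int main() {
    // Attacking while running: proportions {1/3, 2/3} held over three samples.
    std::vector<std::vector<float>> history(3, {1.0f / 3, 2.0f / 3});
    std::printf("%d\n", isStable(history));  // prints 1: stable
    return 0;
}
```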
As an optional implementation manner, when the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is established, the processing method of the animation information further comprises one of the following steps: caching a skeleton matrix of each animation frame of the target animation according to the target frame rate; determining that the difference between the target gesture and the gesture cached last time is greater than a first target threshold value, and caching the target gesture; and determining a target storage position associated with the current playing progress of the target animation from a plurality of storage positions, and caching the target gesture of the virtual object at the current playing progress to the target storage position, wherein each storage position is used for caching the gesture of the virtual object at the playing progress associated with each storage position, and each two adjacent storage positions allow the distance between animation frames corresponding to the stored gesture to be larger than a second target threshold.
In this embodiment, the skeleton matrix of each animation frame of the target animation is cached at the target frame rate while the animation system is stable, so that the overhead of animation matrix interpolation can be skipped entirely. The higher the target frame rate, that is, the animation frame rate, the higher the animation quality and the larger the corresponding cached data volume; conversely, the lower the frame rate, the smaller the volume. The target frame rate may generally be set to about 30 frames per second, or to 60 frames per second if the requirements on motion quality are relatively high, as determined by the characteristics of the actual game.
Optionally, the data of the target gesture is cached only when the difference between it and the data of the last cached gesture is greater than the first target threshold, thereby avoiding overly dense data. Optionally, the cached data is the result of a skeleton update. In an actual game, the difference between two consecutive skeleton updates may be very small; caching both would be meaningless and waste space. This embodiment therefore measures the difference between the data of the target gesture and the data of the last cached gesture and caches the target gesture only if the difference is greater than the first target threshold. Alternatively, if the time elapsed since the last cached gesture is small, the difference can be assumed to be small and the data need not be cached, so that the storage space is used to the greatest extent for effective data and the update speed is optimized.
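A sketch of this difference test; the patent does not fix a particular metric, so the maximum element-wise difference between bone matrices is used here purely as an illustration:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using BoneMatrix = std::array<float, 16>;
using Pose = std::vector<BoneMatrix>;

// Cache the target gesture only if it differs from the last cached gesture by
// more than the first target threshold; near-duplicate skeleton updates are
// dropped so that storage is spent on effective data.
bool worthCaching(const Pose& current, const Pose& lastCached, float threshold) {
    if (current.size() != lastCached.size()) return true;  // skeleton changed
    float maxDiff = 0.0f;
    for (std::size_t b = 0; b < current.size(); ++b)
        for (std::size_t i = 0; i < 16; ++i)
            maxDiff = std::max(maxDiff, std::fabs(current[b][i] - lastCached[b][i]));
    return maxDiff > threshold;
}
```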
Optionally, this embodiment may also pre-allocate the cache space so that cache data can be updated quickly. A sufficient number of storage positions, which may be buckets (Bucket), are pre-allocated. The cache data is built up gradually while the engine runs: when a piece of cache data appears, it is cached to the corresponding storage position. Each storage position is used for caching the gesture of the virtual object at the playing progress associated with that position, and every two adjacent storage positions require the interval between the animation frames of their stored gestures to be larger than the second target threshold, so that the stored data stays dispersed rather than overly dense. For the current playing progress, the target storage position associated with it is determined from the plurality of storage positions, and the target gesture of the virtual object at the current playing progress is cached to the target storage position, updating the data stored there.
As an alternative embodiment, the method further comprises one of the following: determining that the difference value between the current playing progress and the first playing progress is within a third target threshold value, and discarding the target gesture of the virtual object at the current playing progress, wherein the first playing progress is the playing progress of the target animation corresponding to the gesture stored last time by the target storage position; and determining that the difference value between the current playing progress and the second playing progress is within a fourth target threshold, and discarding the target gesture of the virtual object at the current playing progress, wherein the second playing progress is the minimum playing progress of the target animation corresponding to the gesture allowed to be stored at the next storage position adjacent to the target storage position.
In this embodiment, while the animation system is in a stable state, it is unnecessary to cache the target gestures for every playing progress. If the first playing progress is the playing progress of the target animation corresponding to the gesture last stored at the target storage position, then when the difference between the current playing progress and the first playing progress is within the third target threshold, the target gesture of the virtual object at the current playing progress is discarded: when the time difference is small, the target gesture of the virtual object shows no obvious change, so its data need not be cached. Likewise, if the second playing progress is the minimum playing progress of the target animation corresponding to the gesture allowed to be stored at the next storage position adjacent to the target storage position, and the difference between the current playing progress and the second playing progress is within the fourth target threshold, that is, the data of the target gesture at the current playing progress is close to the gesture data allowed at the adjacent next storage position, the target gesture of the virtual object at the current playing progress is also discarded.
For example, the cache may be configured by pre-allocating a number of buckets at uniform time intervals. Suppose the playing duration of the animation is 0.6 s. A playing progress phase of 0.0 (time 0.0 × 0.6 = 0.000 s) and a phase of 0.001 (time 0.001 × 0.6 = 0.0006 s) correspond to the same bucket; since the time interval is too short, the gesture of the virtual object shows no obvious change, so one of the two may be discarded. The policy may be set to keep the gesture data on the left (or right); this embodiment discards the gesture data at playing progress 0.001. Similarly, the gesture data at playing progress 0.099 (time 0.099 × 0.6 = 0.0594 s) is too close to the right edge (0.060 s) of its bucket, adjacent to the next bucket, and is likewise discarded. This prevents the cached data from being too concentrated, reduces storage overhead, and achieves a better caching effect.
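The bucket layout of this example can be sketched as follows, assuming a 0.6-second clip split into hypothetical 0.06-second buckets with a keep-left policy; the minimum-gap value and all names are illustrative:

```cpp
#include <cstdio>

// Buckets pre-allocated at uniform time intervals over one clip. A sample is
// kept only if it is not too close to the sample already kept in its bucket
// (keep-left policy) and not too close to the next bucket's left edge.
struct BucketPolicy {
    double duration;     // clip length in seconds, e.g. 0.6
    double bucketWidth;  // seconds per bucket, e.g. 0.06
    double minGap;       // minimum spacing between kept samples, e.g. 0.005

    int bucketIndex(double phase) const {
        return static_cast<int>(phase * duration / bucketWidth);
    }

    // `leftTime`: time of the sample already kept in this bucket, or < 0 if none.
    bool keep(double phase, double leftTime) const {
        double t = phase * duration;
        if (leftTime >= 0.0 && t - leftTime < minGap) return false;  // keep-left
        double nextEdge = (bucketIndex(phase) + 1) * bucketWidth;
        if (nextEdge - t < minGap) return false;  // too close to the next bucket
        return true;
    }
};

int main() {
    BucketPolicy p{0.6, 0.06, 0.005};
    std::printf("%d %d\n", p.bucketIndex(0.0), p.bucketIndex(0.001));  // 0 0: same bucket
    std::printf("%d\n", p.keep(0.001, 0.0));   // 0: dropped, 0.0006 s after 0.000 s
    std::printf("%d\n", p.keep(0.099, -1.0));  // 0: dropped, 0.0594 s near edge 0.060 s
    return 0;
}
```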
As an alternative embodiment, the target animation information includes at least one of: information for controlling a virtual object to perform a target operation of a target action; identification information of a plurality of first animations; a blend ratio for each first animation; the length of the target animation; the current playing progress of the target animation; the play rate of the target animation.
In this embodiment, the target animation information may include information of the target operation for controlling the virtual object to perform the target action; for example, operation instructions input through a keyboard, controller, or touch screen may be converted into instructions for the virtual object to walk, jump, attack, and so on, without limitation here. The target animation information may further include: identification information of the plurality of first animations, which may be their names, forming an animation name list; the fusion proportion of each first animation when the target animation is fused; the length of the target animation, which may be its total playing duration; the current playing progress of the target animation, determined through the playing time and playing rate of the target animation; and the playing rate of the target animation, which may be the number of animation frames played per second.
As an optional implementation manner, step S204, determining that the target animation is in a stable state according to the target animation information, and establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information includes: determining that the target animation is in a stable state according to the target animation information, determining that the first sub-target animation information and the target gesture are not yet cached at the target cache position, and establishing, at the target cache position, the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information.
In this embodiment, when the target animation is in an unstable state, the target animation must be updated according to the original animation update method, and the data is not cached after the update. Optionally, in response to the target operation performed on the virtual object in the target animation information, the logic of the animation tree or animation graph is updated, where the updated animation tree or graph is used to determine the plurality of second animations to be fused and the fusion proportion of each second animation, that is, which animations need to be fused and in what proportions. According to the result of updating the animation tree or graph, the animations to be fused are found and updated, the skeleton gesture matrices are sampled according to the playing progress, and a final skeleton transformation matrix is obtained by interpolation. The skeleton tree is updated according to the skeleton transformation matrix, and the presentation state of the virtual object in the virtual scene is then skinned and drawn according to the skeleton tree.
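The unstable-state update path described above can be sketched as the following flow; every type and function name here is a stand-in for the corresponding engine stage, not an actual API:

```cpp
// Illustrative types and engine stages; all names are stand-ins.
struct Character {};
struct Input {};
struct BlendSet {};  // which second animations to fuse, and their proportions
struct Pose {};      // final skeleton transformation matrices

BlendSet updateAnimationGraph(Character&, const Input&, double) { return {}; }
Pose sampleAndBlend(const BlendSet&) { return {}; }
void applyToSkeleton(Character&, const Pose&) {}

// Unstable state: fall back to the original per-frame animation update; the
// result is not cached because it is not reproducible.
void updateUnstable(Character& ch, const Input& input, double dt) {
    // 1. Update the animation tree/graph logic: decide which second animations
    //    participate in fusion and with what fusion proportions.
    BlendSet blend = updateAnimationGraph(ch, input, dt);
    // 2. Sample each clip's skeleton gesture matrices at its playing progress
    //    and interpolate by the proportions into a final transformation matrix.
    Pose pose = sampleAndBlend(blend);
    // 3. Update the skeleton tree from the matrices; skinning and drawing of
    //    the virtual object then follow from the skeleton tree.
    applyToSkeleton(ch, pose);
}
```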
Optionally, the target animation information associated with the virtual object at the first moment is acquired, it is determined from the target animation information that the target animation is in a stable state, and it is determined that the target cache position has not yet cached the first sub-target animation information and the corresponding target gesture, that is, the target cache position holds no corresponding cache data; the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is then established at the target cache position, so that when the second sub-target animation information associated with the virtual object at the second moment is the same as the first sub-target animation information, the gesture of the action executed by the virtual object at the second moment is determined to be the target gesture. The target cache position may be a cache pool; the animation needs to be updated only when the cache pool holds no related data, and only once, after which the result is cached in the cache pool for direct use next time.
At the second moment, which follows the first moment, if the second sub-target animation information associated with the virtual object is the same as the first sub-target animation information, for example the target operation controlling the virtual object to execute the target action is the same, then, since the target cache location already holds the data of the corresponding target gesture, the gesture of the action executed by the virtual object at the second moment can be determined directly as the cached target gesture, without updating the animation. That is, if the relevant data has already been cached in the cache pool, the cached data may be used directly, as the sketch below illustrates.
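A minimal sketch of this hit-or-miss flow, assuming a plain dictionary as the cache pool and a placeholder for the expensive animation update:

```python
def pose_at(cache, key, compute_pose):
    """Return the cached target gesture for key, computing it only on a cache miss."""
    pose = cache.get(key)
    if pose is None:
        pose = compute_pose()  # the full animation update, performed only once
        cache[key] = pose      # cached in the pool for direct reuse next time
    return pose

pose_pool = {}
compute_calls = []

def expensive_update():
    compute_calls.append(1)   # stands in for the animation-tree update + interpolation
    return ("pose_matrices",)

key = ("walk", ("walk_loop",), 0.5)        # same sub-target animation information both times
pose_at(pose_pool, key, expensive_update)  # first moment: miss, computes and caches
pose_at(pose_pool, key, expensive_update)  # second moment: hit, no recomputation
print(len(compute_calls))  # 1
```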
As an optional implementation manner, after establishing the target mapping relationship cache of the first sub-target animation information and the target gesture in step S204, the method further includes: determining that the capacity occupied by the data stored at the target cache location exceeds a target capacity, and deleting the first sub-target animation information and the target gesture when the target gesture has gone unused for longer than a target time.
In this embodiment, after the target mapping relationship cache is established, if the capacity occupied by the data stored at the target cache location exceeds the target capacity, and the target gesture has gone unused for longer than the target time, that is, the total size of the cached data exceeds the designated size and the target gesture is among the least frequently used cache entries, then the first sub-target animation information and the data of the corresponding target gesture are deleted. Conversely, if the capacity occupied by the stored data does not exceed the target capacity, the first sub-target animation information and the target gesture data need not be deleted; in other words, the delete operation is not performed even if the target gesture has been unused beyond the target time.
This embodiment may employ a Least Recently Used (LRU) eviction policy to dynamically purge rarely used data, freeing storage space in time for other reproducible target animation information and gesture data.
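A minimal sketch of such an LRU eviction policy, following the rule above that an entry is deleted only when the cache is over capacity and the entry has also been idle beyond the target time; the capacity and idle limits are assumed values.

```python
import time
from collections import OrderedDict

class PoseCache:
    """LRU cache mapping animation information to a cached gesture (sketch)."""

    def __init__(self, max_entries=1024, max_idle_seconds=30.0):
        self._entries = OrderedDict()       # key -> (pose, last-used timestamp)
        self._max_entries = max_entries     # stand-in for the "target capacity"
        self._max_idle = max_idle_seconds   # stand-in for the "target time"

    def get(self, key):
        if key not in self._entries:
            return None
        pose, _ = self._entries.pop(key)
        self._entries[key] = (pose, time.monotonic())  # move to most-recent end
        return pose

    def put(self, key, pose):
        self._entries[key] = (pose, time.monotonic())
        self._entries.move_to_end(key)
        self._evict()

    def _evict(self):
        now = time.monotonic()
        # Delete only while over capacity, starting from the least recently used
        # entry; an entry is removed only if it has also been idle too long.
        while len(self._entries) > self._max_entries:
            _, (_, last_used) = next(iter(self._entries.items()))
            if now - last_used > self._max_idle:
                self._entries.popitem(last=False)
            else:
                break  # over capacity, but the oldest entry is still fresh

cache = PoseCache(max_entries=2, max_idle_seconds=-1.0)  # negative idle: evict by capacity alone
for key, pose in [("a", 1), ("b", 2), ("c", 3)]:
    cache.put(key, pose)
print(list(cache._entries))  # ['b', 'c']: "a" was least recently used and evicted
```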
In the related art, caching techniques generally cache skeleton data directly. Under animation fusion the space of possible skeleton data values is very large: caching everything causes the cached data to explode, while limiting the cache size makes cache hits unreliable and ultimately renders the cached data useless; moreover, such techniques cannot optimize the update cost of the animation tree or graph. This embodiment instead obtains the final target gesture (the skeleton morphology result) directly from the first sub-target animation information (the user's input operation, the animation playing progress, and so on), so the animation tree or animation graph overhead is skipped entirely without affecting logic. With the animation stability evaluation function introduced, the cached data remains highly effective and cache explosion does not occur; the efficiency of processing animation information is improved without affecting animation production, the running efficiency of the animation system is raised, and the gain is especially pronounced when many animations are on screen at once.
The technical scheme of the present invention is described below with reference to a preferred embodiment, taking as an example a method for processing animation information in an animation system.
The animation system of this embodiment may be a game animation system, which receives input operations from a game player and controls game characters in a game scene to perform various actions. A game character is built with a finite set of animations, and based on the player's input operations the animation system fuses and transitions among them in certain proportions, producing actions for a wide range of situations. Fusion and transition involve interpolation between the animations, that is, interpolation of the frame data of the animations participating in the fusion, and that frame data may be quaternions. This calculation is needed every frame, and the overhead of fusion grows linearly with the number of animations involved. The cost of animation fusion is likewise linear in the number of game characters in the game scene. Optimizing the efficiency of the animation system is therefore of great importance for games, especially those with many characters on screen.
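The per-frame interpolation described here can be pictured as a weighted blend of per-bone rotation quaternions. The sketch below uses normalized linear interpolation (nlerp), a common choice in game engines, though the embodiment does not prescribe a particular interpolation method.

```python
import math

def blend_bone_rotations(quaternions, weights):
    """Blend per-bone rotation quaternions (w, x, y, z) by their fusion ratios."""
    ref = quaternions[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q, weight in zip(quaternions, weights):
        # q and -q encode the same rotation; flip into the hemisphere of the
        # first quaternion so the weighted sum interpolates the short way round.
        if sum(a * b for a, b in zip(q, ref)) < 0.0:
            q = tuple(-c for c in q)
        for i in range(4):
            acc[i] += weight * q[i]
    norm = math.sqrt(sum(c * c for c in acc)) or 1.0
    return tuple(c / norm for c in acc)

# Each frame this runs per bone per fused clip, so the fusion overhead grows
# linearly with the number of clips, and with the number of on-screen characters.
idle = (1.0, 0.0, 0.0, 0.0)        # identity rotation
walk = (0.9239, 0.0, 0.3827, 0.0)  # roughly 45 degrees about the Y axis
print(blend_bone_rotations([idle, walk], [0.3, 0.7]))
```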
This embodiment optimizes the efficiency of the animation system. Without affecting the complexity or the fusion effect of the animation tree or animation graph, the running speed of the animation system can be greatly improved, and the gain is most pronounced when many game characters share the screen; the design of the animation tree or graph and the animation production flow are unaffected.
Fig. 3 is a schematic diagram of the overhead of an animation system according to an embodiment of the invention. As shown in Fig. 3, processing the game player's input operations accounts for 3% of the overhead, updating the skeleton for 10%, computing the weights of the animation tree or graph for 18%, and animation interpolation for 69%. To increase the efficiency of the animation system, it is therefore desirable to reduce the overhead of animation interpolation in addition to the overhead of computing the weights of the animation tree or graph.
Since the game player's operations are repeatable, the resulting animations of the game character are repeatable as well. If this repetition can be identified, the caching principle applies: operations and performances that have already occurred can be recorded, and when the same operation recurs, the corresponding performance can be obtained directly by a table lookup, skipping the calculation entirely.
Fig. 4 is a flow chart of an animation buffering method according to an embodiment of the present invention. As shown in Fig. 4, the method comprises the following steps:
step S401, an input operation of a game player is acquired.
Step S402, the animation system judges whether a skeleton matrix corresponding to the game player's input operation exists in the animation buffer system.
Step S403, if a skeleton matrix corresponding to the input operation exists in the animation buffer system, the skeleton matrix is output.
This embodiment determines, from the game player's input operation, whether a corresponding skeleton matrix (a cached result) exists; if so, the cached result is used directly.
However, abstracting the animation system as a black box that accepts the game player's input operations and outputs a skeleton matrix is not sufficient. Because of animation transitions, the current animation must be merged with the previous one. An animation transition is an intermediate process introduced to avoid a jump in the game character's gesture when switching from one animation to another; during the transition, the character's gesture is a fusion of the preceding and following actions. Take character movement as an example: the character accelerates when starting to move from rest, and since no dedicated acceleration action is authored, the effect is realized by transitioning from the rest animation to the movement animation. If the previous animations differ, then even for the same input operation the corresponding skeleton matrix may differ.
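The transition can be pictured as a time-windowed crossfade between the previous pose and the new one; the linear ramp below is one simple choice, not necessarily the one used by this embodiment, and it shows why the output depends on history:

```python
def transition_weight(elapsed, transition_time):
    """Weight of the new animation during a transition (linear ramp)."""
    if transition_time <= 0.0:
        return 1.0
    return min(elapsed / transition_time, 1.0)

def blend_channel(prev_value, new_value, weight):
    """Blend one pose channel; a real system blends whole bone rotations."""
    return (1.0 - weight) * prev_value + weight * new_value

# While the weight is below 1.0, the output still depends on the previous
# animation, so the same input operation can yield different skeleton matrices,
# which is exactly the caching hazard described above.
for t in (0.0, 0.1, 0.2):
    w = transition_weight(t, transition_time=0.2)
    print(t, blend_channel(0.0, 1.0, w))  # 0.0, 0.5, 1.0
```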
This embodiment solves the above problems by introducing a time variable and a stability evaluation function for the animation system, which are described in detail below.
Fig. 5 is a flow chart of an animation buffering method that introduces a stability evaluation function of the animation system, according to an embodiment of the invention. As shown in Fig. 5, the method comprises the following steps:
step S501, an input operation of a game player is acquired.
Step S502, the animation system judges whether a skeleton matrix corresponding to the game player's input operation exists in the animation buffer system.
Step S503 needs to be executed before the animation system makes this judgment.
In step S503, the stability evaluation function collects state information of the animation system and determines whether the collected state information can be cached.
In this embodiment, the stability evaluation function may be a component in the engine that collects state information of the animation system, giving a determination as to whether the current state information can be cached.
Optionally, the stability evaluation function of this embodiment collects valid state information, which may include: the instruction corresponding to the player's input operation (for example, a run instruction or a walk instruction), the list of names of the animations currently participating in fusion, the length of the fused animation, the playing progress percentage of the fused animation, the playing rate of the fused animation, and so on.
The stability evaluation function determines, from the collected current state information, whether that state of the animation system is reproducible. In general, the state information corresponding to a cyclically repeated motion is stable: for example, the state corresponding to sustained walking, running, crawling, or attacking is stable and reproducible. In a stable animation the fusion proportions of the participating animations do not change, so the state information and the corresponding gesture of the game character can be cached, as the sketch below illustrates.
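One way to picture the stability decision: if the fusion proportions have not changed over a recent window of frames, the state is treated as reproducible. The window representation and the tolerance below are assumptions for illustration.

```python
def is_stable(ratio_history, tolerance=1e-6):
    """Return True when the fusion proportions are unchanged across the window.

    ratio_history holds one tuple of fusion ratios per sampled frame, oldest first.
    """
    if len(ratio_history) < 2:
        return False  # not enough evidence that the state is reproducible
    first = ratio_history[0]
    return all(
        len(frame) == len(first)
        and all(abs(a - b) <= tolerance for a, b in zip(frame, first))
        for frame in ratio_history[1:]
    )

# A looping walk whose blend weights sit at (0.3, 0.7) is stable and cacheable;
# a transition whose weights are still ramping is not.
print(is_stable([(0.3, 0.7)] * 5))           # True
print(is_stable([(0.1, 0.9), (0.3, 0.7)]))   # False
```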
If the animation system is in a steady state, the state information is reproducible; that is, the state information and the corresponding gesture are cacheable. To associate each gesture of the game character, the playing progress of the animation must be known in addition to the player's input operation and the animations participating in fusion. If the cached mapping relationship is f, then:

gesture Value of the game character = f(input operation input, animation clips participating in fusion, animation playing progress phase)
This embodiment introduces a time variable and combines it with the playing time and playing rate of the animation, so that the playing progress can be calculated.
Optionally, the engine of the animation system may obtain the playing rate v of the current animation, the current playing progress phase0, the animation length duration, and the current time t0.
For a non-cyclic action, the playing progress phase1 at time t1 may be:

phase1 = min(phase0 + v × (t1 - t0) / duration, 1.0)

For a cyclic action, the playing progress phase1 at time t1 is:

phase1 = (phase0 + v × (t1 - t0) / duration) mod 1.0
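Read literally, the two progress formulas can be transcribed as follows (variable names are assumptions; the rate is treated as a playback-speed factor):

```python
def play_progress(phase0, v, t0, t1, duration, cyclic):
    """Playing progress at time t1, given progress phase0 at time t0."""
    advanced = phase0 + v * (t1 - t0) / duration
    if cyclic:
        return advanced % 1.0  # a cyclic action wraps around
    return min(advanced, 1.0)  # a non-cyclic action clamps at the end

# A 0.6 s animation at rate 1.0: after 0.3 s a cyclic clip reaches phase 0.5,
# while a non-cyclic clip that started at phase 0.8 clamps at 1.0.
print(play_progress(0.0, 1.0, 0.0, 0.3, 0.6, cyclic=True))   # 0.5
print(play_progress(0.8, 1.0, 0.0, 0.3, 0.6, cyclic=False))  # 1.0
```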
Therefore, with time introduced, if the cached mapping relationship is f and the gesture of the game character is Value, then:

Value = f(input operation, time t, animation clips participating in fusion)
In this embodiment, caching may be performed as key-value pairs, where the key is the state information and the value is the gesture (pose) of the game character; the gesture may be all the skeletal transformation information of one animation frame, stored as matrices or quaternions.
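Putting this together, a cache key might combine the input operation, the participating clips, and a quantized playing progress; everything in the sketch below, including the quantization, is an illustrative assumption.

```python
def make_cache_key(operation, clip_names, phase, buckets=38):
    """Build a hashable key from the operation, fused clips and quantized progress."""
    bucket = min(int(phase * buckets), buckets - 1)  # quantize progress to a slot
    return (operation, tuple(clip_names), bucket)

pose_cache = {}  # key -> per-bone transforms of one animation frame (the value)

key = make_cache_key("walk", ["walk_loop", "idle"], phase=0.42)
pose_cache[key] = ("bone_matrices_or_quaternions",)  # placeholder pose payload

# A later moment with the same operation, clips and (quantized) progress hits:
print(pose_cache[make_cache_key("walk", ["walk_loop", "idle"], phase=0.42)])
```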
Once time is introduced, the cache can cut into the animation playback at any moment, and at any moment the original pipeline can still compute the animation. The playing progress, time, and animations participating in fusion are stored in full, so switching between cache and computation causes no change in the representation or state of the animation system.
Alternatively, in this embodiment, the skeleton matrix of each frame of the animation may be cached at a fixed frame rate while the animation system is in a steady state, thereby bypassing the overhead of animation matrix interpolation entirely. The fixed frame rate refers to an animation frame rate: the higher it is, the higher the animation quality and the larger the corresponding data volume, and conversely the smaller. The cache frame rate can be chosen from the characteristics of the actual game, typically around 30 frames, or 60 if the requirements on motion quality are extremely high.
For a given frame count, the cached frames should be spaced uniformly so that the cached data does not cluster, which gives the best effect. The cache can be structured as buckets with pre-allocated intervals in which the cached data is stored; it is built up gradually at engine run time, each new datum being cached into its corresponding bucket while gestures that are too densely spaced are discarded. Each bucket corresponds to one moment of the model gesture, and each time stored gesture data is updated, it is kept to one side of the bucket, ensuring sufficient spacing between the frames of the cached gestures for the best effect.
Fig. 6 is a schematic diagram of a storage structure for cached data according to an embodiment of the present invention. As shown in Fig. 6, the cache may consist of a number of buckets with pre-allocated intervals (bucket 1 to bucket 38) for an animation whose playing duration is 0.6 s. Playing progress phase 0.0 (time 0.0 × 0.6 = 0.0 s) corresponds to bucket 1, and phase 0.001 (time 0.001 × 0.6 = 0.0006 s) also falls in bucket 1; since the interval is too short for the game character's gesture to change noticeably, one of the two need not be cached. The policy may be to keep the gesture data on the left (or right) side; this embodiment discards the gesture at progress 0.001. Similarly, the gesture at progress 0.099 (time 0.099 × 0.6 = 0.0594 s) lies too close to the right side of bucket 3 and to the data of the next bucket 4, and is likewise discarded.
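The bucket bookkeeping of Fig. 6 can be sketched as below, with the 0.6 s duration and 38 buckets taken from the example; the keep-first-per-bucket policy here is a simplification of the keep-left rule, and the additional rule that discards a pose for sitting too close to a neighboring bucket's data is omitted.

```python
def place_in_buckets(phases, duration=0.6, num_buckets=38):
    """Assign pose samples to pre-allocated buckets, keeping the first per bucket."""
    bucket_width = duration / num_buckets
    buckets = {}  # bucket index -> phase of the pose kept there
    for phase in phases:
        index = min(int(phase * duration / bucket_width), num_buckets - 1)
        if index in buckets:
            continue  # a pose already occupies this bucket; discard the newcomer
        buckets[index] = phase
    return buckets

# Phase 0.001 lands in the same bucket as phase 0.0 (the gesture barely changes
# over 0.0006 s) and is discarded; phase 0.099 falls in a bucket of its own here.
print(place_in_buckets([0.0, 0.001, 0.099]))  # {0: 0.0, 3: 0.099}
```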
In step S504, when the game player's input operation and the animation playing progress respectively match the cached input operation and playing progress, the model gesture finally presented by the animation is uniquely determined and can be fetched directly from the cache.
Fig. 7 is a flowchart of an animation buffering method according to the related art. As shown in Fig. 7, the method includes the following steps:
step S701, an input operation of a game player is acquired.
In step S702, in the animation system, the animation tree or animation graph is updated.
In step S703, in the animation system, animation data is sampled and interpolated.
Step S704, in the animation system, updating the skeleton.
Step S705, rendering.
The animation system receives the player's input, updates the skeleton as needed, and finally renders the result and presents it to the player.
Steps S702 and S703 together account for more than 87% of the total overhead of the animation system.
Fig. 8 is a flowchart of an animation buffering method according to an embodiment of the present invention. As shown in Fig. 8, the method includes the following steps:
step S801, an input operation of a game player is acquired.
The player's operation instructions, entered through the keyboard or a controller, are converted into instructions for the game character to walk, jump, attack, and so on.
In step S802, in the cache system, cache evaluation is performed.
The stability evaluation function, combined with the state information of the animation system, evaluates whether the data of the current gesture can be cached, and caches it when it can. The state information of the animation system may include the name of the currently playing animation, the game player's input operation, the playing progress of the animation, the animation fusion proportions, and so on. This embodiment may also read the state of the animation tree or animation graph to evaluate whether the current gesture can be cached.
In this embodiment, the animation cache inevitably needs to store the cached data (the state information of the animation system and the data of the corresponding gestures), and therefore incurs some memory overhead. To maximize memory utilization and optimize the cache update rate, the cache of this embodiment employs the following strategies:
The spacing between the frames of cached data is checked, so the cached data never becomes too dense; the cached data is used for skeleton updates, and in a real game the difference between two consecutive skeleton updates may be very small. The cache space is pre-allocated so that data updates are fast: once the frame rate of the cached data is fixed, enough buckets are pre-allocated, and a cache update merely finds the corresponding bucket and modifies the data in it, as the sketch below illustrates. An LRU policy may be employed to dynamically purge rarely used data, making more room for subsequently cached data.
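The pre-allocation strategy can be sketched as a fixed array of slots: an update never allocates, it only locates the bucket for the current progress and modifies it. The slot count and keying are assumptions taken from the Fig. 6 example.

```python
class PreallocatedPoseBuffer:
    """Fixed-size pose buffer: an update touches one slot and never reallocates."""

    def __init__(self, num_buckets=38):      # bucket count taken from the Fig. 6 example
        self._slots = [None] * num_buckets   # allocated once, up front

    def _index(self, phase):
        return min(int(phase * len(self._slots)), len(self._slots) - 1)

    def update(self, phase, pose):
        self._slots[self._index(phase)] = pose  # locate the bucket, modify in place

    def lookup(self, phase):
        return self._slots[self._index(phase)]

buf = PreallocatedPoseBuffer()
buf.update(0.5, ("pose_at_half",))
print(buf.lookup(0.5))  # ('pose_at_half',)
```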
In step S803, it is determined whether a cached skeleton transformation matrix is available.
That is, it is judged whether the game player's input operation and the animation playing progress respectively match the cached input operation and playing progress.
In step S804, the cache is read.
If the game player's input operation and the animation playing progress respectively match those in the cache, the model gesture finally presented by the animation is uniquely determined and can be fetched directly from the cache: the cached skeleton transformation matrix is obtained from the target database through the mapping function, keyed by the game player's input operation, the list of currently playing animation names, the time, and so on.
In step S805, in the animation system, the animation tree or animation graph is updated.
If it is determined that no cached skeleton transformation matrix is available, the logic of the animation tree or animation graph is updated in response to the player's input, determining which animations are fused and in what proportion, and thereby updating the state of the animation tree or graph as well as the cached data. For animation transitions, the transition time may also be updated.
In step S806, in the animation system, animation data is sampled and interpolated.
The animations to be fused and updated are found from the result of updating the animation tree or graph, their skeleton gesture matrices are sampled according to the playing progress, and interpolation yields the final skeleton transformation matrix, which is used to update the cached data.
Step S807, in the animation system, the skeleton is updated.
After the animation data has been sampled and interpolated, or after the cache has been read, the skeleton tree is updated according to the skeleton gesture matrix.
Step S808, drawing and presentation.
The game character is skinned and drawn according to the skeleton tree.
In this embodiment, once the cache system is added, steps S805 and S806 can be skipped entirely whenever the cache is available, which greatly improves efficiency.
The cache system cannot cache all state information, but any state information that is stable and reproducible can be cached. In practical games, most walking, running, and jumping actions recur stably. With the animation cache in use, the animation system shows a performance improvement of more than 45%.
In the related art, caching techniques generally cache skeleton data directly. Under animation fusion the space of possible skeleton data values is very large: caching everything causes the cached data to explode, while limiting the cache size makes cache hits unreliable and ultimately renders the cached data useless; moreover, such techniques cannot optimize the update cost of the animation tree or graph. This embodiment obtains the final skeleton morphology result directly from the user's input operation and the animation playing time, so the animation tree or animation graph overhead is skipped entirely without affecting logic. With the animation stability evaluation function introduced, the cached data remains highly effective and cache explosion does not occur; the running efficiency of the animation system is improved without affecting animation production, especially when many animations are on screen at once.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than the one shown here.
The embodiment of the invention also provides a device for processing animation information. The device of this embodiment may be used to execute the method for processing animation information of the embodiments of the present invention.
Fig. 9 is a schematic diagram of a processing apparatus for animation information according to an embodiment of the present invention. As shown in Fig. 9, the processing apparatus 900 includes: an acquiring unit 10, a first processing unit 20, and a second processing unit 30.
The acquiring unit 10 is configured to acquire target animation information associated with a virtual object at a first moment, where the target animation information includes information of a target animation obtained by fusing a plurality of first animations, and the target animation is used to generate a target gesture corresponding to a target action performed by the virtual object in the virtual scene.
The first processing unit 20 is configured to determine, according to the target animation information, that the target animation is in a stable state, and to establish a target mapping relationship cache of the first sub-target animation information and the target gesture in the target animation information.
The second processing unit 30 is configured to determine that second sub-target animation information associated with the virtual object at a second moment corresponds to the first sub-target animation information, and to control the gesture of the action performed by the virtual object at the second moment to be the target gesture, where the second moment is a moment after the first moment.
Optionally, the first sub-target animation information includes at least one of: the method comprises the steps of target operation information, current playing progress and a plurality of first animations, wherein the target operation is used for controlling a virtual object to execute a target action, and the current playing progress is determined through the playing time and the playing speed of the target animations.
Optionally, the first processing unit 20 includes: and the determining module is used for determining the first sub-target animation as a target key of the target key value pair and determining the target gesture as a target value of the target key value pair.
Optionally, the apparatus further comprises: and the determining unit is used for determining the fusion proportion of each first animation used for fusing the target animation before establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information, and keeping the target animation in a stable state within a target time period, wherein the target animation information comprises each first animation.
Optionally, the apparatus further comprises one of: the first buffer unit is used for buffering the skeleton matrix of each animation frame of the target animation according to the target frame rate when the target mapping relation buffer of the first sub-target animation information and the target gesture in the target animation information is established; the second caching unit is used for determining that the difference between the target gesture and the gesture cached last time is larger than a first target threshold value when the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is established, and caching the target gesture; and the third caching unit is used for determining a target storage position associated with the current playing progress of the target animation from a plurality of storage positions and caching the target gesture of the virtual object at the current playing progress to the target storage position when the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information is established, wherein each storage position is used for caching the gesture of the virtual object at the playing progress associated with each storage position, and the interval between animation frames corresponding to the gesture allowed to be stored in each two adjacent storage positions is larger than a second target threshold value.
Optionally, the apparatus further comprises one of: the first discarding unit is used for determining that the difference value between the current playing progress and the first playing progress is within a third target threshold value, and discarding the target gesture of the virtual object in the current playing progress, wherein the first playing progress is the playing progress of the target animation corresponding to the gesture stored last time by the target storage position; and the second discarding unit is used for determining that the difference value between the current playing progress and the second playing progress is within a fourth target threshold value, discarding the target gesture of the virtual object in the current playing progress, wherein the second playing progress is the minimum playing progress of the target animation corresponding to the gesture allowed to be stored in the next storage position adjacent to the target storage position.
Optionally, the target animation information includes at least one of: information for controlling a virtual object to perform a target operation of a target action; identification information of a plurality of first animations; a blend ratio for each first animation; the length of the target animation; the current playing progress of the target animation; the play rate of the target animation.
Optionally, the first processing unit 20 includes: a fourth caching module, configured to determine according to the target animation information that the target animation is in a stable state, determine that the first sub-target animation information and the target gesture are not cached at the target cache location, and establish, at the target cache location, the target mapping relationship cache of the first sub-target animation information and the target gesture in the target animation information.
Optionally, the apparatus further comprises: a deleting unit, configured to determine, after the target mapping relationship cache of the first sub-target animation information and the target gesture has been established, that the capacity occupied by the data stored at the target cache location exceeds the target capacity, and to delete the first sub-target animation information and the target gesture when the target gesture has gone unused beyond the target time.
In this embodiment, the acquiring unit 10 acquires target animation information associated with the virtual object at a first moment, where the target animation information includes information of a target animation obtained by fusing a plurality of first animations and the target animation is used to generate a target gesture corresponding to a target action executed by the virtual object in the virtual scene; the first processing unit 20 determines from the target animation information that the target animation is in a stable state and establishes a target mapping relationship cache of the first sub-target animation information and the target gesture; and the second processing unit 30 determines that second sub-target animation information associated with the virtual object at a second moment corresponds to the first sub-target animation information and controls the gesture of the action performed by the virtual object at the second moment to be the target gesture, where the second moment is a moment after the first moment. That is, when the target animation information shows the situation to be reproducible, the current gesture of the virtual object is cached, and when the same animation information appears later the target gesture is fetched directly without recalculation. This reduces the calculation cost, achieves the technical effect of improving the efficiency of processing animation information, and thereby solves the technical problem of low processing efficiency of animation information.
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented on a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices; optionally, they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by the computing devices; in some cases the steps shown or described may be performed in an order different from that given here; alternatively, they may be fabricated separately as individual integrated-circuit modules, or multiple modules or steps among them may be fabricated as a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (11)
1. A method for processing animation information, comprising:
acquiring target animation information associated with a virtual object at a first moment, wherein the target animation information comprises information of target animations obtained by fusing a plurality of first animations, and the target animations are used for generating target gestures corresponding to target actions executed by the virtual object in a virtual scene;
determining that the target animation is in a stable state according to the target animation information, and establishing a target mapping relation cache of first sub-target animation information and the target gesture in the target animation information;
determining second sub-target animation information associated with the virtual object at a second moment and corresponding to the first sub-target animation information, and controlling the gesture of the action executed by the virtual object at the second moment to be the target gesture, wherein the second moment is a moment after the first moment;
Wherein the determining that the target animation is in a stable state according to the target animation information comprises:
determining a fusion proportion of each first animation used for fusing the target animation, keeping unchanged in a target time period, and determining that the target animation is in the stable state, wherein the target animation information comprises each first animation;
when the target mapping relation cache of the first sub-target animation information in the target animation information and the target gesture is established, the method further comprises: and determining a target storage position associated with the current playing progress of the target animation from a plurality of storage positions, and caching the target gesture of the virtual object at the current playing progress to the target storage positions, wherein each storage position is used for caching the gesture of the virtual object at the playing progress associated with each storage position, and the distance between animation frames corresponding to the gesture allowed to be stored in each two adjacent storage positions is larger than a second target threshold value.
2. The method of claim 1, wherein the first sub-target animation information comprises at least one of: the method comprises the steps of obtaining information of target operation, current playing progress and a plurality of first animations, wherein the target operation is used for controlling the virtual object to execute the target action, and the current playing progress is determined by the playing time and the playing speed of the target animations.
3. The method of claim 1, wherein the step of establishing a target mapping cache of the first sub-target animation information and the target pose in the target animation information comprises:
and determining the first sub-target animation as a target key of a target key value pair, and determining the target gesture as a target value of the target key value pair.
4. The method of claim 1, wherein in establishing a target mapping cache of the first sub-target animation information and the target pose in the target animation information, the method further comprises one of:
caching a skeleton matrix of each animation frame of the target animation according to a target frame rate;
and determining that the difference between the target gesture and the gesture cached last time is larger than a first target threshold value, and caching the target gesture.
5. The method of claim 4, further comprising one of:
determining that the difference value between the current playing progress and a first playing progress is within a third target threshold, and discarding the target gesture of the virtual object at the current playing progress, wherein the first playing progress is the playing progress of the target animation corresponding to the gesture last stored at the target storage position;
and determining that the difference value between the current playing progress and a second playing progress is within a fourth target threshold, and discarding the target gesture of the virtual object at the current playing progress, wherein the second playing progress is the minimum playing progress of the target animation corresponding to the gestures permitted to be stored in the next storage position adjacent to the target storage position.
6. The method of claim 1, wherein the target animation information comprises at least one of:
information for controlling the virtual object to perform a target operation of the target action;
identification information of the plurality of first animations;
a blend ratio of each of the first animations;
the length of the target animation;
the current playing progress of the target animation;
and the playing speed of the target animation.
7. The method according to any one of claims 1 to 6, wherein the step of determining that the target animation is in a stable state according to the target animation information and establishing a target mapping relationship buffer of the first sub-target animation information and the target gesture in the target animation information includes:
and determining that the target animation is in a stable state according to the target animation information, determining that the first sub-target animation information and the target gesture are not cached in a target cache position, and establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information cached in the target cache position.
8. The method of claim 7, wherein after the step of establishing a target mapping cache of the first sub-target animation information and the target pose in the target animation information, the method further comprises:
and determining that the capacity occupied by the data stored in the target cache position exceeds the target capacity, and deleting the first sub-target animation information and the target gesture when the target gesture exceeds the target time.
9. An animation information processing device, comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring target animation information associated with a virtual object at a first moment, the target animation information comprises information of target animations obtained by fusing a plurality of first animations, and the target animations are used for generating target gestures corresponding to target actions executed by the virtual object in a virtual scene;
the first processing unit is used for determining that the target animation is in a stable state according to the target animation information, and establishing a target mapping relation cache of first sub-target animation information and the target gesture in the target animation information;
a second processing unit, configured to determine that second sub-target animation information associated with the virtual object at a second time corresponds to the first sub-target animation information, and control a gesture of an action performed by the virtual object at the second time to be the target gesture, where the second time is a time after the first time;
Wherein the determining that the target animation is in a stable state according to the target animation information comprises:
determining a fusion proportion of each first animation used for fusing the target animation, keeping unchanged in a target time period, and determining that the target animation is in the stable state, wherein the target animation information comprises each first animation;
the processing device of the animation information is further configured to: when establishing the target mapping relation cache of the first sub-target animation information and the target gesture in the target animation information, determine a target storage position associated with the current playing progress of the target animation from a plurality of storage positions, and cache the target gesture of the virtual object at the current playing progress to the target storage position, wherein each storage position is used for caching the gesture of the virtual object at the playing progress associated with that storage position, and the interval between animation frames corresponding to the gestures allowed to be stored in every two adjacent storage positions is larger than a second target threshold.
10. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1 to 8 when run.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of the claims 1 to 8.