CN118710779A - Animation playing method, device, medium, electronic equipment and program product - Google Patents
Animation playing method, device, medium, electronic equipment and program product
- Publication number: CN118710779A (application CN202410814627.4A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Abstract
The present disclosure relates to an animation playing method, apparatus, medium, electronic device and program product, and relates to the field of computer technology. The method comprises: determining target animation data, wherein the target animation data comprises a plurality of key frames; obtaining, according to the target animation data, a motion trajectory formed by the target position information corresponding to the key frames; rendering a plurality of animation frames according to the motion trajectory and the animation object, wherein the animation frames describe the motion of the animation object along the motion trajectory; and displaying the animation frames. Because the motion trajectory of the animation object is indicated by the key frames, the corresponding animation effect can be rendered in real time. Moreover, since the animation frames are rendered and generated from the target animation data, a user can dynamically adjust the target position information in the key frames as required to modify the motion trajectory of the animation object and obtain an animation effect that meets the requirement.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an animation playing method, an animation playing apparatus, a computer-readable medium, an electronic device, and a program product.
Background
In the related art, animation special effects are generally produced in advance by animation producers according to requirements as a complete piece of animation resources, and when a scene triggers animation playback, the pre-produced animation resources are played in the scene. However, because scenes are complex and changeable, pre-produced animation resources cannot meet the requirements of real-time scenes.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides an animation playing method, including:
determining target animation data, wherein the target animation data comprises a plurality of key frames, and each key frame records target position information of an animation object at that key frame;
obtaining, according to the target animation data, a motion trajectory formed by the target position information corresponding to the key frames;
rendering a plurality of animation frames according to the motion trajectory and the animation object, wherein the animation frames describe the motion of the animation object along the motion trajectory; and
displaying the animation frames.
In a second aspect, the present disclosure provides an animation playing device, including:
a determining module configured to determine target animation data, wherein the target animation data comprises a plurality of key frames, and each key frame records target position information of an animation object at that key frame;
an obtaining module configured to obtain, according to the target animation data, a motion trajectory formed by the target position information corresponding to the key frames;
a rendering module configured to render a plurality of animation frames according to the motion trajectory and the animation object, wherein the animation frames describe the motion of the animation object along the motion trajectory; and
a display module configured to display the animation frames.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
a processing device configured to execute the computer program in the storage device to carry out the steps of the method of the first aspect.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
Based on the above technical solution, target animation data is determined, wherein the target animation data comprises a plurality of key frames and each key frame records target position information of an animation object at that key frame; a motion trajectory formed by the target position information corresponding to the key frames is obtained according to the target animation data; a plurality of animation frames describing the motion of the animation object along the motion trajectory are rendered according to the motion trajectory and the animation object; and the animation frames are displayed. Because the motion trajectory of the animation object can be indicated by the key frames, the corresponding animation effect is obtained through real-time rendering. Moreover, since the animation frames are rendered and generated from the target animation data, a user can dynamically adjust the target position information in the key frames as required to modify the motion trajectory of the animation object and obtain an animation effect that meets the requirement.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
fig. 1 is a flow chart illustrating an animation playback method according to some embodiments.
Fig. 2 is a detailed flow chart of step 120 shown in fig. 1.
Fig. 3 is a schematic diagram of a motion profile shown according to some embodiments.
Fig. 4 is a schematic diagram of a motion profile shown according to further embodiments.
FIG. 5 is a flow chart illustrating an animation playback according to some embodiments.
FIG. 6 is a schematic diagram of an animation shown according to some embodiments.
FIG. 7 is a schematic diagram of an animation shown according to further embodiments.
Fig. 8 is a detailed flow chart of step 130 shown in fig. 1.
FIG. 9 is a schematic diagram of an animation effect shown according to some embodiments.
Fig. 10 is a detailed flow chart of step 850 shown in fig. 8.
FIG. 11 is a flow diagram illustrating rendering an animation frame, according to some embodiments.
Fig. 12 is a flow chart illustrating the acquisition of target animation data, according to some embodiments.
FIG. 13 is a schematic diagram of an animation editing scene shown according to some embodiments.
Fig. 14 is a schematic diagram illustrating obtaining target animation data, according to some embodiments.
FIG. 15 is a schematic diagram of an animation playback system, shown according to some embodiments.
Fig. 16 is a schematic diagram showing the structure of an animation playing device according to some embodiments.
Fig. 17 is a schematic diagram of a structure of an electronic device shown according to some embodiments.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Fig. 1 is a flow chart illustrating an animation playing method according to some embodiments. The method may be executed by an electronic device, and in particular by an animation playing apparatus, where the apparatus may be implemented by software and/or hardware and configured in the electronic device. As shown in fig. 1, the method may include the following steps.
In step 110, target animation data is determined, wherein the target animation data comprises a plurality of key frames for recording target position information of the animation object at the key frames.
Here, the target animation data is all information and resources for managing an animation. In an embodiment of the present disclosure, the target animation data comprises a plurality of key frames, each key frame for recording target position information (position) of the animation object at the key frame. For example, if the time point of one key frame is 00:02 and the target position information corresponding to the key frame is (x, y, z), the position of the animation object at the time point of 00:02 is (x, y, z).
It should be noted that the target position information may be one coordinate point in the three-dimensional space or one coordinate point in the two-dimensional space, depending on the virtual scene played by the animation. If the virtual scene for playing the animation is a three-dimensional space, the corresponding target position information is a three-dimensional coordinate, and if the virtual scene for playing the animation is a two-dimensional space, the corresponding target position information is a two-dimensional coordinate. That is, the animation playing method provided by the embodiment of the present disclosure may be used for playing an animation in a three-dimensional space or playing an animation in a two-dimensional space.
Wherein the keyframe (Keyframe) is a special frame that is used to record the state of the animated object at a point in time, which may include the position, rotation, scaling, color, transparency, etc. of the animated object. An animated object (Animated Object) refers to any game element or three-dimensional model that requires animation.
It should be understood that the target animation data may be animation data edited in advance by an animation editor, and the target animation data is acquired from a corresponding storage area when the play of the animation is triggered.
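As a purely illustrative sketch (not part of the disclosure), the target animation data described above might be modeled as a list of key frames, each carrying a time point and the target position information; all field names below are assumptions.

```typescript
// Hypothetical shape of the target animation data; field names are illustrative only.
interface Keyframe {
  time: number;                        // time point of the key frame, in seconds (e.g. 2 for 00:02)
  position: [number, number, number];  // target position information of the animation object at this key frame
}

interface TargetAnimationData {
  keyframes: Keyframe[];               // the multiple frames of key frames described above
}
```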
In step 120, a motion trail formed by the target position information corresponding to the multi-frame key frame is obtained according to the target animation data.
Here, since each frame of key frame in the target animation data records the target position information of the animation object at the key frame, correspondingly, the position information of the intermediate frame between every two key frames can be calculated between the multi-frame key frames through an interpolation algorithm, so as to obtain the motion trail formed by the target position information corresponding to the multi-frame key frames.
That is, the target position information corresponding to the multi-frame key frame and the position information of the intermediate frame obtained by interpolation are connected, so that a motion track corresponding to the animation object can be formed, and the motion of the animation object along the time axis is represented.
It should be noted that in the embodiments of the present disclosure, the interpolation algorithm used may be determined according to the requirements. The interpolation algorithm may be, for example, a linear interpolation algorithm, a spline interpolation algorithm, a bezier curve interpolation algorithm, or the like.
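As an illustration of the interpolation described above, here is a minimal sketch using linear interpolation (one of the algorithms mentioned) over the Keyframe shape sketched earlier; a spline or Bezier interpolation could be substituted, and the function name is an assumption.

```typescript
// Sample the motion trail at time t by linearly interpolating between the bracketing key frames.
function sampleTrajectory(keyframes: Keyframe[], t: number): [number, number, number] {
  if (t <= keyframes[0].time) return keyframes[0].position;
  const last = keyframes[keyframes.length - 1];
  if (t >= last.time) return last.position;

  for (let i = 0; i < keyframes.length - 1; i++) {
    const a = keyframes[i];
    const b = keyframes[i + 1];
    if (t >= a.time && t <= b.time) {
      const u = (t - a.time) / (b.time - a.time); // fraction of the way from key frame a to key frame b
      return [
        a.position[0] + (b.position[0] - a.position[0]) * u,
        a.position[1] + (b.position[1] - a.position[1]) * u,
        a.position[2] + (b.position[2] - a.position[2]) * u,
      ];
    }
  }
  return last.position;
}
```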
In step 130, a plurality of frames of animation frames are rendered according to the motion trail and the animation objects.
Here, the animation object may be an object requiring dynamic special effects, and the user may configure the corresponding animation object according to the service requirement. Illustratively, in a live scene, the animated object may be a virtual gift that is delivered. That is, after the user sends out the virtual gift, the virtual gift may be controlled to display a corresponding animation effect.
The multi-frame animation frame can be obtained by applying the motion trail to the animation object to control the animation object to move along the motion trail.
Wherein, the multi-frame animation frame is used for describing the animation object to move along the motion trail. That is, in the user's vision, through successive multi-frame animation frames, the animation object moves along a motion trajectory formed by multi-frame key frames.
It should be noted that, in the process of rendering and generating the animation frame, for each frame of animation frame, the corresponding target position information of the animation object at the frame of animation frame can be determined through the motion track, and then the current state of the animation object at the frame of animation frame is updated to ensure that the animation object moves smoothly along the motion track. And further, according to the current state of the animation object, performing rendering operation to obtain a corresponding animation frame.
In step 140, a multi-frame animation frame is displayed.
Here, after the rendering generates the animation frame, the animation frame is displayed to exhibit an animation effect that the animation object moves along the motion trajectory.
It should be noted that each animation frame can be displayed as soon as it is rendered; that is, once rendering generates a frame, that frame can be displayed immediately, ensuring the real-time performance of the animation.
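Putting steps 110 to 140 together, a per-frame loop might look like the sketch below, reusing the sampleTrajectory helper sketched earlier; renderFrame stands in for whatever rendering backend is actually used, the fixed time step is a simplification, and none of these names come from the disclosure.

```typescript
// Hypothetical playback loop: sample the motion trail for each frame, update the object, render and display.
function playAnimation(
  data: TargetAnimationData,
  animationObject: { position: [number, number, number] },
  renderFrame: () => void,
  frameRate = 60,
): void {
  const duration = data.keyframes[data.keyframes.length - 1].time;
  let t = 0;
  const step = () => {
    animationObject.position = sampleTrajectory(data.keyframes, t); // current state from the motion trail
    renderFrame();                                                  // render this frame and display it immediately
    t += 1 / frameRate;
    if (t <= duration) requestAnimationFrame(step);
  };
  step();
}
```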
In summary, target animation data is determined, wherein the target animation data comprises a plurality of key frames and each key frame records target position information of an animation object at that key frame; a motion trajectory formed by the target position information corresponding to the key frames is obtained according to the target animation data; a plurality of animation frames describing the motion of the animation object along the motion trajectory are rendered according to the motion trajectory and the animation object; and the animation frames are displayed. Because the motion trajectory of the animation object is indicated by the key frames, the corresponding animation effect is obtained through real-time rendering. Moreover, since the animation frames are rendered and generated from the target animation data, a user can dynamically adjust the target position information in the key frames as required to modify the motion trajectory of the animation object and obtain an animation effect that meets the requirement.
In some implementations, the target animation data further includes indication information, the indication information indicating that the relative position between the target position information corresponding to at least one key frame and an element node in the virtual scene is a target relative position.
Wherein the element node in the virtual scene may be an element in the virtual scene. For example, if the virtual scene is a game scene, the element node may be an element of a game character, character avatar, or the like in the game scene. For another example, if the virtual scene is a page (e.g., a live room page), the element node may be an element of a head portrait, graphic, text, etc. in the page.
And under the condition that the target animation data comprises indication information, at least one frame of key frames in the multi-frame key frames in the target animation data is bound with element nodes in the virtual scene, and the relative position between the at least one frame of key frames and the element nodes is the target relative position. When the position of the element node in the virtual scene changes, the target position information of at least one frame of key frame bound with the element node needs to be correspondingly adjusted so as to keep the relative position between the element node and the at least one frame of key frame as the target relative position.
It should be appreciated that a user may indicate, via the animation editor, that at least one frame of keyframes needs to be bound to an element node, and determine the target relative position by adjusting the relative position between the element node and the bound keyframe in the animation editor. When the animation editor outputs the target animation data, the target relative position between at least one frame of key frame and the bound element node is described through the indication information.
Illustratively, each element node in the virtual scene may have a corresponding unique identifier, and the unique identifier corresponding to the element node bound to the at least one frame of keyframes may be stored in one field of the target animation data.
It should be noted that in the embodiments of the present disclosure, different key frames in the target animation data may be respectively bound to different element nodes. For example, if there are multiple head portraits in a virtual scene, each head portrait may be bound to a different keyframe, respectively, if it is desired to achieve an animation effect in which the animation object rotates around the multiple head portraits, respectively.
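As one possible (assumed) encoding, the indication information could be carried as extra fields on a key frame, storing the unique identifier of the bound element node and the target relative position:

```typescript
// Hypothetical extension of Keyframe for key frames bound to an element node in the virtual scene.
interface BoundKeyframe extends Keyframe {
  boundNodeId: string;                               // unique identifier of the bound element node
  targetRelativePosition: [number, number, number];  // target relative position between key frame and node
}
```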
Fig. 2 is a detailed flow chart of step 120 shown in fig. 1. Accordingly, as shown in fig. 2, in some implementations that may be implemented, step 120 may include the following steps.
In step 121, node position information of the element node in the virtual scene is acquired in response to the target animation data including the instruction information.
Here, if the target animation data includes the indication information, it indicates that, among the key frames in the target animation data, there is at least one key frame bound to an element node in the virtual scene, and that the relative position between that key frame and the element node is the target relative position. Since the target relative position does not change, when the position of the element node in the virtual scene changes, the target position information of the key frame(s) bound to the element node also changes, and accordingly, the motion trajectory of the animation object changes as well.
Therefore, in the case where the target animation data includes the instruction information, the node position information of the element node in the virtual scene can be acquired. The node position information is coordinate information of the element node in the virtual scene. It should be noted that the node location information of the element node in the virtual scene may be dynamically changed.
Illustratively, the node position information corresponding to the element node can be found according to the unique identifier corresponding to the element node bound with at least one frame of key frame.
In step 122, the target position information corresponding to at least one frame of key frame is adjusted according to the node position information and the target relative position, so as to obtain adjusted target position information.
Here, since the relative position between the control node corresponding to the at least one frame of key frame bound to the element node and the element node in the virtual scene is kept as the target relative position, the target position information corresponding to the at least one frame of key frame can be adjusted to obtain adjusted target position information under the condition that the node position information of the element node and the target relative position are known.
The adjusted target position information is used for enabling the relative position of the adjusted target position information and the element node to be the target relative position.
That is, since the relative position between the control node corresponding to at least one frame key frame bound to the element node and the element node in the virtual scene is maintained as the target relative position, if the node position information of the element node in the virtual scene is changed while the target relative position is maintained, the target position information of the control node may be adjusted correspondingly so that the relative position between the control node and the element node is always maintained as the target relative position.
In step 123, a motion trajectory is obtained based on the adjusted target position information.
Here, obtaining the adjusted target position information may be understood as obtaining new target animation data, and then generating a motion track corresponding to the animation object based on each frame key frame in the new target animation data, and further generating a multi-frame animation frame through the motion track.
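Steps 121 to 123 might then reduce to the following sketch: look up the bound node's position, re-derive each bound key frame's position so that the target relative position is preserved, and rebuild the trajectory from the result. The getNodePosition callback is a placeholder for however node position information is actually queried.

```typescript
// Hypothetical adjustment: bound key frames keep the target relative position to their element node.
function adjustKeyframes(
  keyframes: Keyframe[],
  getNodePosition: (nodeId: string) => [number, number, number],
): Keyframe[] {
  return keyframes.map((kf) => {
    const bound = kf as BoundKeyframe;
    if (!bound.boundNodeId) return kf;                // key frames without a bound node keep their position
    const node = getNodePosition(bound.boundNodeId);  // current node position information in the virtual scene
    const rel = bound.targetRelativePosition;
    return {
      ...kf,
      position: [node[0] + rel[0], node[1] + rel[1], node[2] + rel[2]], // adjusted target position information
    };
  });
}
```

The adjusted key frames can then be fed to the same trajectory sampling used for playback.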
Fig. 3 is a schematic diagram of a motion profile shown according to some embodiments. As shown in fig. 3, the motion trail of the animation object 308 is formed by the first control node 301, the second control node 302, the third control node 303, the fourth control node 304, the fifth control node 305, and the sixth control node 306, and the second control node 302, the third control node 303, the fourth control node 304, the fifth control node 305, and the sixth control node 306 are associated with the element node 307 (i.e., the relative positions between the second control node 302, the third control node 303, the fourth control node 304, the fifth control node 305, the sixth control node 306, and the element node 307 are as shown in fig. 3).
Fig. 4 is a schematic diagram of a motion profile shown according to further embodiments. As shown in fig. 4, when the position of the element node 307 changes (from the position of fig. 3 to the position of fig. 4), since the relative positions of the second control node 302, the third control node 303, the fourth control node 304, the fifth control node 305, the sixth control node 306, and the element node 307 remain at the target relative positions, it is necessary to adjust the position information of the second control node 302, the third control node 303, the fourth control node 304, the fifth control node 305, and the sixth control node 306 from fig. 3 to fig. 4, and the position information of the first control node 301 remains unchanged. Accordingly, the motion trajectory of the moving object 308 is changed (from the motion trajectory of fig. 3 to the motion trajectory shown in fig. 4).
Therefore, through steps 121 to 123, the motion trajectory of the animation object can be changed according to the real-time positions of the element nodes in the virtual scene, so that real-time dynamic trajectory changes can be achieved without the user manually editing the target position information of the key frames in the target animation data. For example, in a virtual live room, different users send virtual gifts, and because the avatars of different users are located at different positions in the live room page, the motion trajectory of the animation object can be changed according to the real-time positions of the avatars of different users, achieving an animation effect in which the animation object rotates around a user's avatar. That is, when the animation is played, the motion trajectory of the animation object can be dynamically adjusted through the node position information of the element node, which provides greater flexibility.
FIG. 5 is a flow chart illustrating animation playback according to some embodiments. As shown in fig. 5, in response to receiving an animation playing instruction, the target animation data is loaded, and it is judged whether the target animation data contains a key frame associated with an element node. If no such key frame exists, the animation frames are played based on the loaded target animation data. If a key frame associated with an element node exists, the node position information corresponding to the element node is obtained, the target position information of the associated key frame is adjusted according to the obtained node position information to generate new target animation data, and the animation frames are played based on the new target animation data. While the animation frames are being played, whether the element node moves is dynamically detected; if it moves, the target animation data is reloaded and new node position information is obtained.
In some embodiments, the virtual scene may include a live room page and the element node may include an avatar of the target account in the live room page.
The target account is a first account that sends the virtual gift or a second account that receives the virtual gift. That is, the element node may be the avatar, in the live room page, corresponding to the first account that sends the virtual gift or to the second account that receives the virtual gift. Accordingly, the animation object may be the virtual gift, although the animation object may also be another animation element, such as a three-dimensional model.
Through this embodiment, the motion trajectory of the animation object can be associated with the avatar of the first account that sends the virtual gift or the avatar of the second account that receives the virtual gift, so that the same trajectory dynamically yields different concrete trajectories relative to different avatars, and the animation object moves and rotates around different avatars.
FIG. 6 is a schematic diagram of an animation shown according to some embodiments. As shown in FIG. 6, in the live room page, element node 601 is in a first position and animation object 602 moves and rotates around element node 601. FIG. 7 is a schematic diagram of an animation shown according to further embodiments. As shown in FIG. 7, element node 601 moves from the first position shown in FIG. 6 to the second position shown in FIG. 7, and accordingly, the motion profile of animated object 602 changes.
In the embodiment of the disclosure, a motion track of an animation object can be obtained according to node position information corresponding to element nodes in a virtual scene and target animation data, and then a multi-frame animation frame is rendered and generated based on the motion track of the animation object in response to an animation playing instruction, and the multi-frame animation frame is displayed, wherein the multi-frame animation frame is used for describing the motion of the animation object along the motion track.
For example, node position information corresponding to an element node in the virtual scene may be obtained, where the element node is an element associated with at least one frame of key frame in the target animation data in the virtual scene, further, according to the node position information, target position information of at least one frame of key frame associated with the element node in the target animation data is adjusted, adjusted target animation data is obtained, then, a motion track corresponding to an animation object is obtained based on the adjusted target animation data, a multi-frame animation frame is obtained by rendering based on the motion track and the animation object, and the multi-frame animation frame is displayed.
In some implementation manners, element nodes can be obtained from the virtual scene, and multiple frames of animation frames are rendered according to the motion track, the element nodes and the animation objects, so that the multiple frames of animation frames are displayed through an animation playing layer overlapped on the upper layer of the virtual scene.
Here, obtaining the element node from the virtual scene may mean obtaining a real-time image corresponding to the element node, and then rendering the animation frames based on the motion trajectory, the element node, and the animation object.
The animation playing layer may be a video playing layer superimposed on the upper layer of the virtual scene, through which animation frames are displayed, and actually, a layer of animation effects including multiple frames of animation frames is superimposed on the virtual scene.
As shown in fig. 6, the animation playing layer is overlapped on the upper layer of the live broadcasting room page, so that a real-time image of the element node 601 in the live broadcasting room page can be obtained, an animation frame is obtained through the motion track, the real-time image of the element node 601 and the animation object 602, and then the animation frame is overlapped and displayed on the upper layer of the live broadcasting room page, so that the animation can be played on the live broadcasting room page, and the animation can interact with the element node in the live broadcasting room page.
In some implementations, the target animation data may further include rotation parameters for controlling rotation of the animation object, which may include a desired upward direction of the animation object, an initial heading direction of the animation object, and a desired heading direction of the animation object.
Here, the rotation parameter is a parameter for rotation control of the animation object, which determines a rotation angle of the animation object when moving along the motion trajectory.
The desired upward direction (desiredUpInWorld) of the animation object refers to the desired alignment direction of the animation object in the vertical direction. The desired upward direction is used to control the animation object to have its "up" direction consistent with the desired direction as it rotates or changes direction, helping to maintain the correct pose of the animation object in space. In general, the desired upward direction may be the direction of gravity.
The initial orientation direction (Initial Direction) of the animation object refers to the actual orientation of the animation object prior to playback of the animation. The initial orientation direction is understood to be the initial orientation of the animated object during the art production of the animated object. As shown in FIG. 3, the initial direction of orientation of the animated object 308 is off-screen.
The desired orientation direction of the animation object (initialDesiredDirection) refers to a particular direction in which the animation object is expected to face before the animation starts. For example, as shown in FIG. 3, the initial orientation direction of the animation object 308 is out of the screen, and the desired orientation direction may be into the screen, i.e., it is desired to turn the animation object 308 from facing out of the screen to facing into the screen.
It should be noted that the initial orientation direction of the animation object is the starting point of the animation frame sequence, and the desired orientation direction of the animation object is the target direction at the beginning of the animation frame sequence.
For example, the user may add an automatic rotation component to the animation object in the animation editor and configure, through that component, the desired upward direction, the initial orientation direction, and the desired orientation direction of the animation object, thereby instructing the animation object to rotate according to these parameters while moving along the motion trajectory.
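The rotation parameters might be attached to the target animation data roughly as follows; the axis conventions in the comments only echo the examples above and are not prescribed by the disclosure.

```typescript
// Hypothetical rotation parameters carried alongside the key frames.
interface RotationParams {
  desiredUpInWorld: [number, number, number];  // e.g. [0, 1, 0]: keep the object's "up" aligned with world up
  initialDirection: [number, number, number];  // e.g. [0, 0, 1]: authored orientation, facing out of the screen
  desiredDirection: [number, number, number];  // e.g. [0, 0, -1]: desired orientation, facing into the screen
}
```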
Fig. 8 is a detailed flow chart of step 130 shown in fig. 1. As shown in fig. 8, step 130 may include the following steps.
In step 810, in the case where the currently rendered animation frame is the first animation frame, first position information of the animation object at the first animation frame is obtained according to the motion trail.
Here, the first animation frame is an animation frame of the first frame corresponding to the target animation data. That is, the first animation frame is the first frame animation frame in the sequence of multi-frame animation frames.
When the first animation frame is rendered, first position information of the animation object at the time of the first animation frame can be determined through the motion trail. The motion trail reflects the position information of the animation object corresponding to different time points, and the first position information of the animation object at the time point corresponding to the first animation frame can be obtained through the motion trail.
In step 820, a first frame of animation frame is rendered based on the first location information and the animation object.
Here, the animation object may be rendered through the first position information to obtain a first frame of animation frame. Wherein, the animation effect corresponding to the first frame of animation frame can be understood as that the animation object appears in the spatial position corresponding to the first position information.
In step 830, in the case where the currently rendered animation frame is the second animation frame, second position information of the animation object at the second animation frame is obtained according to the motion trail.
Here, the second animation frame is an animation frame other than the animation frame of the first frame in the target animation data. That is, the second animation frame is another animation frame in the sequence of multi-frame animation frames other than the first frame animation frame.
When the second animation frame is rendered, second position information of the animation object at the time of the second animation frame can be determined through the motion trail. The motion trail reflects the position information of the animation object corresponding to different time points, and the second position information of the animation object at the time point corresponding to the second animation frame can be obtained through the motion trail.
In step 840, the current motion direction of the animation object is obtained according to the second position information corresponding to the second animation frame and the third position information corresponding to the previous animation frame.
Here, the previous animation frame refers to the frame immediately preceding the second animation frame; for example, the frame preceding the second frame in the sequence is the first animation frame. The third position information corresponding to the previous animation frame may be obtained through the motion trajectory when that frame is rendered.
Illustratively, the current direction of motion of the animated object may be obtained from a difference between the second position information and the third position information.
The current motion direction (forwardInWorld) of the animation object is a direction vector, and represents the change of the motion direction of the animation object relative to the previous frame of animation frame, so that the advancing direction of the animation object is represented.
In step 850, a target rotation angle is obtained based on the current direction of motion, the desired upward direction, the initial heading direction, and the desired heading direction.
Here, the target rotation angle may be understood as a rotation quaternion (Quaternion). Wherein the target rotation angle is used to enable the animated object to move toward a desired direction of orientation and to remain in a desired upward direction.
For example, in character animation, a character may be required to face a particular direction (desired direction) while maintaining vertical alignment of its head or upper body (desired upward direction) to achieve natural and realistic motion, and the target rotation angle may then be such that the character faces the desired direction and maintains the desired upward direction during movement in the current direction of movement.
By applying to the animation object a target rotation angle calculated from the current motion direction, the desired upward direction, the initial orientation direction, and the desired orientation direction, the animation object can be made to move toward the desired orientation direction while maintaining the desired upward direction.
In step 860, a second animation frame is rendered according to the second position information, the target rotation angle, and the animation object.
Here, after the second position information and the target rotation angle are obtained, the second position information and the target rotation angle are applied to the animation object, and the corresponding second animation frame is rendered and generated. The animation effect corresponding to the second frame of animation frame can be understood as that the animation object appears at the spatial position corresponding to the second position information and rotates according to the target rotation angle.
In some embodiments, the target rotation angle of the second animation frame and the target rotation angle corresponding to the previous animation frame may be interpolated according to the time interval between two adjacent animation frames to obtain a plurality of rotation angles, and then the animation frames are rendered based on the second position information, the plurality of rotation angles and the animation objects.
The time interval between two adjacent animation frames may be determined according to the frame rate of the animation. By interpolating between the target rotation angle of the second animation frame and the target rotation angle of the previous animation frame over this time interval, a plurality of rotation angles can be obtained. These interpolated rotation angles are then applied to the animation object, so that the rotation of the animation object between frames is more natural and smooth.
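A minimal sketch of this inter-frame smoothing, assuming a three.js-style Quaternion with a slerp method; the helper name and the way alpha is derived are assumptions.

```typescript
import { Quaternion } from "three";

// Blend between the previous frame's target rotation angle and the current one.
// alpha in [0, 1] reflects how far the render time has advanced between the two adjacent animation frames.
function smoothedRotation(previous: Quaternion, current: Quaternion, alpha: number): Quaternion {
  return previous.clone().slerp(current, alpha); // spherical interpolation keeps the rotation natural between frames
}
```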
Therefore, through steps 810 to 860, the animation object can move toward the desired orientation direction and maintain the desired upward direction while moving along the motion trajectory, so that the posture of the animation object is more realistic and natural, producing an animation with a better visual effect.
It should be noted that if the animation object does not rotate dynamically while moving along the motion trajectory, the animation effect lacks a sense of dynamism. For example, when a person runs normally, the front of the person always faces the direction of movement. In related animation production processes, a developer often has to manually set the rotation direction of the animation object at different time points so that the animation object faces forward as the motion trajectory changes. However, this manner of producing animations is relatively cumbersome and dramatically increases animation costs.
In the animation playing method provided by the embodiments of the present disclosure, during the motion of the animation object, the target rotation angle of the animation object in each animation frame is calculated from the position of the animation object in that frame, the specified desired upward direction, the initial orientation direction, and the desired orientation direction, and the calculated target rotation angle is applied to the animation object. The rotation angle is thus adjusted continuously as the motion trajectory changes, so that the animation object always faces forward while maintaining the desired upward direction.
FIG. 9 is a schematic diagram of an animation effect shown according to some embodiments. As shown in fig. 9, the animation object 901 can always maintain a posture consistent with the real world during the rotation around the element node 902, so that the animation effect is more natural and real.
Fig. 10 is a detailed flow chart of step 850 shown in fig. 8. As shown in fig. 10, step 850 may include the following steps.
In step 851, an initial rotation angle is obtained based on the current direction of motion and the desired upward direction.
Here, the initial rotation angle is used to orient the animation object toward the current motion direction while maintaining the desired upward direction.
In some embodiments, a first rotation quaternion for rotating the default orientation vector to the current motion direction may be determined from the default orientation vector in the world coordinate system and the current motion direction; a right side vector may be obtained from the cross product of the current motion direction and the desired upward direction; a new desired upward direction may be obtained from the cross product of the right side vector and the current motion direction; a rotated upward vector may be obtained by applying the first rotation quaternion to the default upward vector in the world coordinate system; a second rotation quaternion may be obtained from the rotated upward vector and the new desired upward direction; and the initial rotation angle may be obtained from the first rotation quaternion and the second rotation quaternion.
The world coordinate system may refer to the coordinate system of the virtual scene, and the default orientation vector in the world coordinate system is a vector representing the default facing direction in that coordinate system; in general, the default orientation vector in three-dimensional space may be (0, 0, 1). The first rotation quaternion is the rotation quaternion required to rotate the default orientation vector to the current motion direction.
Illustratively, the first rotation quaternion may be calculated by a statement such as const rotateForwardToDesiredForward = new Quaternion().setFromUnitVectors(UpVector.FORWARD, forwardInWorld), where rotateForwardToDesiredForward is the first rotation quaternion, UpVector.FORWARD is the default orientation vector, and forwardInWorld is the current motion direction.
By calculating the cross product of the current direction of motion and the desired upward direction, a right side vector (rightInWorld) is obtained, which is actually a vector perpendicular to the current direction of motion and the desired upward direction.
The new desired upward direction, obtained by the cross product of the right vector and the current direction of motion, corresponds to a vertical vector in the plane of the current direction of motion.
The default upward vector in the world coordinate system refers to a vector in the world coordinate system representing an upward direction, and in general, the default upward vector in the three-dimensional space may be represented as (0, 1, 0). By applying the first rotation quaternion to the default upward vector in the world coordinate system, the default upward vector in the world coordinate system may be rotated to obtain a rotated upward vector.
The second rotation quaternion is the rotation quaternion required to rotate from the rotated up vector to the new desired up direction.
And further, calculating to obtain an initial rotation angle through the first rotation quaternion and the second rotation quaternion. Wherein the initial rotation angle indicates that the animated object is capable of rotating in a given current direction of motion and in a desired upward direction.
For example, the initial rotation angle may be obtained from a product between the first rotation quaternion and the second rotation quaternion. The initial rotation angle can also be understood as a rotation quaternion.
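Under the same three.js-style assumption, the chain of operations in step 851 might be sketched as below; the default vectors follow the conventions stated above (engine conventions vary), the inputs are assumed to be unit vectors, and the function name is invented.

```typescript
import { Quaternion, Vector3 } from "three";

const DEFAULT_FORWARD = new Vector3(0, 0, 1); // default orientation vector in the world coordinate system (assumed)
const DEFAULT_UP = new Vector3(0, 1, 0);      // default upward vector in the world coordinate system

// Initial rotation angle: align the object with the current motion direction while keeping the desired upward direction.
function initialRotation(forwardInWorld: Vector3, desiredUpInWorld: Vector3): Quaternion {
  // First rotation quaternion: default orientation vector -> current motion direction.
  const rotateForwardToDesiredForward = new Quaternion().setFromUnitVectors(DEFAULT_FORWARD, forwardInWorld);

  // Right side vector and new desired upward direction, both perpendicular to the motion direction.
  const rightInWorld = new Vector3().crossVectors(forwardInWorld, desiredUpInWorld).normalize();
  const newDesiredUp = new Vector3().crossVectors(rightInWorld, forwardInWorld).normalize();

  // Rotated upward vector: apply the first rotation quaternion to the default upward vector.
  const rotatedUp = DEFAULT_UP.clone().applyQuaternion(rotateForwardToDesiredForward);

  // Second rotation quaternion: rotated upward vector -> new desired upward direction.
  const rotateUpToDesiredUp = new Quaternion().setFromUnitVectors(rotatedUp, newDesiredUp);

  // Initial rotation angle as the product of the two rotation quaternions.
  return rotateUpToDesiredUp.clone().multiply(rotateForwardToDesiredForward);
}
```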
On this basis, the initial rotation angle of the animation object can be accurately calculated, so that the posture of the animation object during motion is accurately controlled and a natural, realistic animation effect is achieved.
In step 852, an initial rotational offset is obtained based on the initial heading direction and the desired heading direction.
Here, the initial rotational offset is a rotation angle used to rotate the initial orientation direction to the desired orientation direction. For example, assuming the initial orientation direction of the animation object is facing out of the screen and the desired orientation direction is facing into the screen, the initial rotational offset is used to rotate the animation object from facing out of the screen to facing into the screen.
It should be noted that, for each frame of animation frame, the corresponding initial rotational offset is unchanged, and can be calculated by the initial orientation direction and the desired orientation direction.
In step 853, a target rotation angle is obtained from the initial rotation angle and the initial rotation offset.
Here, the initial rotation angle and the initial rotation offset amount may be superimposed to obtain the target rotation angle.
It should be noted that, by superimposing the initial rotational offset amounts, the animation object can be caused to move toward the specified desired direction.
Therefore, through the steps 851 to 853, the target rotation angle of the animation object can be accurately calculated, so that the animation object can move towards the expected direction and keep the expected upward direction, and a more real and natural animation effect is realized.
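Continuing the sketch above, steps 852 and 853 might compute the fixed initial rotational offset once and superimpose it on the per-frame initial rotation angle; again this is a non-authoritative sketch under the same assumptions.

```typescript
import { Quaternion, Vector3 } from "three";

// Initial rotational offset: rotation from the initial orientation direction to the desired orientation direction.
// It is the same for every animation frame, so it can be computed once before playback.
function initialRotationOffset(initialDirection: Vector3, desiredDirection: Vector3): Quaternion {
  return new Quaternion().setFromUnitVectors(initialDirection, desiredDirection);
}

// Target rotation angle: superimpose the per-frame initial rotation (sketched earlier) and the fixed offset.
function targetRotation(
  forwardInWorld: Vector3,
  desiredUpInWorld: Vector3,
  initialDirection: Vector3,
  desiredDirection: Vector3,
): Quaternion {
  const rotation = initialRotation(forwardInWorld, desiredUpInWorld);       // from the previous sketch
  const offset = initialRotationOffset(initialDirection, desiredDirection);
  return rotation.clone().multiply(offset); // offset applied first, then the motion-dependent rotation
}
```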
FIG. 11 is a flow diagram illustrating rendering an animation frame, according to some embodiments. As shown in fig. 11, at the time of playing an animation, whether or not rotation is turned on is detected by detecting whether or not the target animation data instructs rotation of the animation object, and in the case where rotation is not turned on, animation frame rendering and display are performed directly through the above-described embodiments. And further judging whether the rendered current animation frame is a first frame animation frame or not under the condition that the rotation is started, and recording the original rotation angle of the animation object and recording the second position information of the current frame as the third position information of the previous frame of the next frame under the condition that the rendered current animation frame is determined to be the first frame animation frame. The original rotation angle of the animation object refers to a rotation angle of the animation object in an original state, for example, the original rotation angle may be 0. By recording the original rotation angle of the animation object, the animation object can be reset by the original rotation angle of the animation object after the animation play is finished, so as to prepare for the next animation play.
If the current frame is not the first animation frame, the second position information of the current frame and the third position information of the previous frame are obtained, and the third position information is subtracted from the second position information to obtain the current motion direction. The initial rotation angle is calculated from the current motion direction and the desired upward direction, the initial rotational offset is calculated from the initial orientation direction and the desired orientation direction, and the initial rotation angle and the initial rotational offset are superimposed to obtain the target rotation angle. Then, the target rotation angle of the previous frame and the target rotation angle of the current frame are interpolated, the interpolation result is assigned to the animation object, and the current animation frame is rendered. After that, the second position information of the current frame is recorded as the third position information to be used by the next frame, and it is judged whether the animation is finished; if not, rendering of animation frames continues, and if so, the entire animation playing flow ends.
Fig. 12 is a flow chart illustrating the acquisition of target animation data, according to some embodiments. As shown in fig. 12, in some embodiments that may be implemented, the target animation data may be obtained by the following steps.
In step 1210, raw animation data is acquired.
Here, the original animation data includes a plurality of frames of key frames for recording initial position information of the animation object at the key frames. For example, the original animation data may be acquired from a motion editor or a storage area of the animation data.
It should be noted that the original animation data has the same structure as the target animation data: when the target position information of the key frames in a piece of target animation data needs to be adjusted, that data serves as the original animation data. The original animation data may also be the animation data of an animation that needs to be edited again; for example, when the motion trajectory of a completed animation needs to be modified, the animation data corresponding to that animation is the original animation data.
In the disclosed embodiment, the original animation data may be any animation data carrying multi-frame key frames. For example, the original animation data may be animation data edited by a Unity engine.
In step 1220, the control node corresponding to the initial position information of the key frame and the motion trail of the animation object composed of the control node are displayed in the animation editing scene according to the original animation data.
Here, the animation editing scene may be a scene in an animation editor, that is, a scene in which a control node and a motion trail are displayed by the animation editor. In the animation editing scene, where to display the control node corresponding to the key frame depends on the initial position information corresponding to the key frame, and when the position of the displayed control node changes, the initial position information of the animation object recorded by the key frame at the key frame also changes.
After the original animation data is obtained, control nodes corresponding to the key frames and the motion trail formed by the control nodes can be displayed in a world coordinate system corresponding to the animation editing scene, so that a user can intuitively adjust the motion trail of the animation object according to the displayed motion trail and the control nodes.
Each frame of key frame corresponds to a control node, and the position information of the control node in the animation editing scene is corresponding initial position information. The motion trail formed by the control nodes may refer to a line formed by the control nodes corresponding to the multi-frame key frames. Illustratively, the line may be a spline, a Bezier curve, or the like.
It should be understood that the motion trail displayed through the control nodes may be obtained by interpolating the initial position information corresponding to the multiple frames of key frames.
Because each key frame in the original animation data records the initial position information of the animation object at that key frame, the position information of the intermediate frames between every two key frames can be calculated through an interpolation algorithm, so as to obtain the motion trail formed by the initial position information corresponding to the multiple frames of key frames.
That is, connecting the initial position information corresponding to the multiple frames of key frames with the interpolated position information of the intermediate frames forms the motion track corresponding to the animation object, representing the motion of the animation object along the time axis.
It should be noted that in the embodiments of the present disclosure, the interpolation algorithm used may be determined according to the requirements. The interpolation algorithm may be, for example, a linear interpolation algorithm, a spline interpolation algorithm, a bezier curve interpolation algorithm, or the like.
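By way of a non-limiting sketch of how such interpolation might be implemented (the `Keyframe` type and the `sampleTrack` function below are illustrative assumptions, not part of the disclosure), linear interpolation between adjacent key frames could look like this:

```typescript
// Illustrative types; field names are assumptions, not taken from the disclosure.
interface Vec3 { x: number; y: number; z: number; }
interface Keyframe { time: number; position: Vec3; }

// Interpolate the animation object's position at an arbitrary time from the
// initial position information recorded in the key frames (linear version).
function sampleTrack(keyframes: Keyframe[], time: number): Vec3 {
  const frames = [...keyframes].sort((a, b) => a.time - b.time);
  if (time <= frames[0].time) return frames[0].position;
  const last = frames[frames.length - 1];
  if (time >= last.time) return last.position;
  for (let i = 0; i < frames.length - 1; i++) {
    const a = frames[i];
    const b = frames[i + 1];
    if (time >= a.time && time <= b.time) {
      const t = (time - a.time) / (b.time - a.time); // normalized progress between the two key frames
      return {
        x: a.position.x + (b.position.x - a.position.x) * t,
        y: a.position.y + (b.position.y - a.position.y) * t,
        z: a.position.z + (b.position.z - a.position.z) * t,
      };
    }
  }
  return last.position;
}
```

A spline or Bezier interpolation algorithm would replace the linear blend with the corresponding basis functions while keeping the same sampling interface.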
FIG. 13 is a schematic diagram of an animation editing scene shown according to some embodiments. As shown in fig. 13, in the animation editing scene 1300, a first key frame 1311, a second key frame 1312, a third key frame 1313, a fourth key frame 1314, a fifth key frame 1315, and a sixth key frame 1316 are displayed through a time axis 1310. Also displayed in the animation editing scene 1300 are an animation object 1307, a first control node 1301, a second control node 1302, a third control node 1303, a fourth control node 1304, a fifth control node 1305, a sixth control node 1306, and a motion trajectory 1317 formed by these six control nodes. The first control node 1301 corresponds to the first key frame 1311, the second control node 1302 corresponds to the second key frame 1312, the third control node 1303 corresponds to the third key frame 1313, the fourth control node 1304 corresponds to the fourth key frame 1314, the fifth control node 1305 corresponds to the fifth key frame 1315, and the sixth control node 1306 corresponds to the sixth key frame 1316.
Note that in the animation editing scene 1300, a corresponding virtual scene 1309 may also be displayed, together with a virtual node 1308 to which an element node in the virtual scene 1309 is mapped in the animation editing scene 1300.
In step 1230, initial position information of a key frame corresponding to the control node is adjusted in response to the first adjustment operation for the control node, to obtain target animation data.
Here, the first adjustment operation for the control node is used to adjust the initial position information corresponding to the control node. By adjusting the control node, the position information of the control node changes from the initial position information to the adjusted target position information.
The first adjustment operation may be a drag operation for the control node, for example. The user can adjust the corresponding initial position information by dragging the control node to be adjusted so as to adjust the motion trail of the animation object.
For example, the user may adjust the initial position information corresponding to the first key frame 1311 by dragging the first control node 1301 shown in fig. 13.
It should be understood that if the motion trajectory is a Bezier curve, each control node is controlled by two Bezier-curve operation points (handles), and the user can adjust the corresponding control node by dragging these operation points.
It should be noted that, after the original animation data is adjusted through the first adjustment operation described above, the target animation data may be obtained. After the target animation data is obtained, the target animation data may be stored to overwrite the original animation data to perform animation playback through the target animation data.
Therefore, the original animation data is obtained; the control nodes corresponding to the initial position information of the key frames and the motion trail of the animation object formed by these control nodes are displayed in the animation editing scene according to the original animation data; and, in response to the first adjustment operation for a control node, the initial position information of the key frame corresponding to that control node is adjusted to obtain the target animation data. In this way, the motion trail of the animation object can be viewed intuitively and adjusted by adjusting the control nodes, so that the animation data is edited intuitively and quickly and the animation production efficiency is improved. Based on this, the original animation data can be rendered into a motion trail formed by control nodes, and the motion trail of the animation object can be adjusted by dragging the control nodes of the motion trail in the animation editing scene, so that the animation effect is adjusted intuitively, clearly, and conveniently, the animation production efficiency is greatly improved, and no large learning cost is required of animation production personnel.
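As a hedged sketch of the first adjustment operation (the data shapes and the `applyControlNodeDrag` name are assumptions for illustration only), dragging a control node can be reduced to writing the node's new position in the animation editing scene back into the associated key frame:

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface Keyframe { time: number; position: Vec3; }
interface ControlNode { keyframeIndex: number; scenePosition: Vec3; }

// First adjustment operation: the dragged control node's new position in the
// animation editing scene becomes the key frame's adjusted target position.
function applyControlNodeDrag(
  data: { keyframes: Keyframe[] },
  node: ControlNode,
  newScenePosition: Vec3,
): void {
  node.scenePosition = { ...newScenePosition };
  data.keyframes[node.keyframeIndex].position = { ...newScenePosition };
  // The motion trail would then be re-interpolated and re-rendered
  // from the updated key frames.
}
```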
In some implementations, the key frame and the animation object may be displayed in an animation editing scene, and the time point corresponding to the key frame is adjusted in response to a second adjustment operation for the key frame, so as to obtain the target animation data.
Here, as shown in fig. 13, in the animation editing scene 1300, a first key frame 1311, a second key frame 1312, a third key frame 1313, a fourth key frame 1314, a fifth key frame 1315, and a sixth key frame 1316 may be displayed through a time axis 1310. And, in the animation editing scene 1300, the animation object 1307 is displayed to show the user the position of the animation object 1307 in the animation editing scene.
And the second adjustment operation is used for adjusting the time point corresponding to the key frame. For example, the second adjustment operation may be a drag operation for a key frame, and the user may adjust a time point of the key frame to be adjusted by dragging the key frame to be adjusted. For example, the user may adjust the point in time corresponding to the first keyframe 1311 by dragging the first keyframe 1311 shown in fig. 13.
It should be noted that when the time point of a key frame is adjusted through the second adjustment operation, only the time point corresponding to the key frame changes; the initial position information corresponding to the key frame does not change.
It should be noted that, after the original animation data is adjusted through the second adjustment operation described above, the target animation data may be obtained. After the target animation data is obtained, the target animation data may be stored to overwrite the original animation data to perform animation playback through the target animation data.
It should be noted that, in the process of editing an animation, the user may adjust not only the time point corresponding to a key frame but also the initial position information corresponding to the key frame to obtain the target animation data. That is, the operation of adjusting the time point of a key frame and the operation of adjusting a control node may be performed separately or together in one animation editing process.
Therefore, through the second adjustment operation, the motion trail of the animation object can be visually checked, and the animation effect can be adjusted by adjusting the time point of the key frame.
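A minimal sketch of the second adjustment operation follows (names assumed); only the key frame's time point on the time axis is rewritten, while its position information is left untouched:

```typescript
interface Keyframe { time: number; position: { x: number; y: number; z: number }; }

// Second adjustment operation: move a key frame along the time axis. The
// initial position information recorded by the key frame is not changed.
function applyKeyframeTimeDrag(keyframes: Keyframe[], index: number, newTime: number): void {
  keyframes[index].time = newTime;
  keyframes.sort((a, b) => a.time - b.time); // keep the key frames ordered on the time axis
}
```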
Fig. 14 is a schematic diagram illustrating obtaining target animation data, according to some embodiments. As shown in fig. 14, the original animation data may be acquired, the key frames and the motion trail included in the original animation data are rendered, and the key frames and/or the control nodes are adjusted. Whether the initial position information of a control node has been modified is then determined: if not, the motion trail and the control nodes remain unchanged; if modified, the original animation data is updated.
In some implementations, the raw animation data may further include indication information for indicating a relative position between initial position information corresponding to at least one frame of the key frame and an element node in the virtual scene as a target relative position.
Here, regarding the indication information included in the original animation data, the conceptual meaning of which is identical to the indication information in the target animation data described in the above embodiment, reference may be made to the related description of the above embodiment, and the description thereof will not be repeated.
Correspondingly, a virtual node to which the element node is mapped in the animation editing scene may be displayed in the animation editing scene, and, in response to a third adjustment operation for the virtual node, the initial position information corresponding to the at least one frame of key frame associated with the virtual node is adjusted according to the adjusted position information of the virtual node and the target relative position, so as to obtain the target animation data.
Here, the virtual node in which the element node is mapped in the animation editing scene may refer to a node in which the element node in the virtual scene is mapped in the animation editing scene. As shown in fig. 13, a virtual scene 1309 may be displayed in the animation editing scene 1300 as well as a corresponding virtual node 1308.
In some embodiments, when the virtual node to which the element node is mapped is displayed in the animation editing scene, the at least one frame of key frame associated with the virtual node may be highlighted. For example, the at least one frame of key frame associated with the virtual node may be shown in a different color, so that an animator can quickly identify the key frames bound to the virtual node.
It should be noted that in an animation editing scene, the display position of the virtual node may be related to the position information of the element node in the virtual scene.
Accordingly, in response to the third adjustment operation for the virtual node, the initial position information corresponding to the at least one frame of key frame associated with the virtual node can be adjusted according to the adjusted position information of the virtual node and the target relative position, so that the target animation data is obtained.
The third adjustment operation for the virtual node may be a drag operation for the virtual node. The user may adjust the position of the virtual node by dragging the virtual node in the animation editing scene.
The indication information actually indicates that the control node corresponding to the at least one frame of key frame is bound to the virtual node, and that the relative position between the control node and the virtual node after binding is always kept as the target relative position. When the position information of the virtual node changes, the initial position information corresponding to the at least one frame of key frame associated with the virtual node can be adjusted, so that the adjusted relative position between the control node and the virtual node remains the target relative position.
For example, as shown in fig. 13, the second control node 1302, the third control node 1303, the fourth control node 1304, the fifth control node 1305, and the sixth control node 1306 are all bound to the virtual node 1308, that is, the relative positions between the second control node 1302, the third control node 1303, the fourth control node 1304, the fifth control node 1305, and the sixth control node 1306 and the virtual node 1308 remain at the target relative positions. When the virtual node 1308 is dragged, the positions of the second control node 1302, the third control node 1303, the fourth control node 1304, the fifth control node 1305 and the sixth control node 1306 are changed correspondingly, so that the initial position information of the key frames corresponding to the second control node 1302, the third control node 1303, the fourth control node 1304, the fifth control node 1305 and the sixth control node 1306 is changed.
Thus, according to the embodiment, the initial position information of the key frame associated with the virtual node can be adjusted by adjusting the virtual node, so that the motion trail of the animation object can be quickly adjusted.
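A possible sketch of this behaviour, assuming each bound key frame stores the target relative position as an offset from the virtual node (the `offsetFromNode` field and the function name are illustrative, not from the disclosure):

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface BoundKeyframe { time: number; position: Vec3; offsetFromNode: Vec3; }

// Third adjustment operation: when the virtual node is dragged, every bound
// key frame is repositioned so that its relative position to the virtual node
// stays equal to the recorded target relative position (the offset).
function applyVirtualNodeDrag(boundKeyframes: BoundKeyframe[], nodePosition: Vec3): void {
  for (const kf of boundKeyframes) {
    kf.position = {
      x: nodePosition.x + kf.offsetFromNode.x,
      y: nodePosition.y + kf.offsetFromNode.y,
      z: nodePosition.z + kf.offsetFromNode.z,
    };
  }
}
```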
It should be noted that the original animation data described in the above embodiment may be obtained by editing by a user through an animation editor.
In some implementations, in response to an animation editing instruction, multiple frames of key frames may be created in an animation editing scene, corresponding initial position information may be configured for each key frame, a control node corresponding to the created key frame and a motion trail formed by the control node may be displayed in the animation editing scene, and then original animation data may be obtained based on the multiple frames of key frames.
Here, the animation editing instruction is used to instruct to create multi-frame key frames in the animation editing scene, and configure corresponding initial position information for each key frame. In the animation editing scene of the animation editor, a user can configure the time point of a key frame of the required animation and the initial position information corresponding to the key frame through an animation editing instruction so as to edit the motion trail forming an animation object, thereby realizing the corresponding animation effect.
As shown in fig. 13, multiple frames of key frames may be created through a time axis 1310 in an animation editing scene 1300, and corresponding initial position information is configured for each key frame to indicate a motion trail corresponding to an animation object.
It should be noted that, when configuring the initial position information corresponding to each key frame, for each configured key frame, the control node corresponding to the created key frame and the motion track formed by the control nodes may be displayed in the animation editing scene, so that the user can intuitively see the motion track of the animation object being drawn.
The raw animation data may then be obtained based on the key frames. It should be appreciated that the key frame has corresponding initial position information recorded to indicate the position of the animated object at the time corresponding to the key frame.
Therefore, through the embodiment, when the user creates the animation data, the user can visually see the motion trail of the created animation object, thereby helping the user edit the animation effect which meets the requirements.
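As an illustrative sketch (the `RawAnimationData` shape and `addKeyframe` helper are assumptions), creating original animation data from animation editing instructions can amount to appending key frames with their configured time points and initial positions:

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface Keyframe { time: number; position: Vec3; }
interface RawAnimationData { keyframes: Keyframe[]; }

// Create a key frame at the configured time point with its configured initial
// position information, keeping the key frames ordered along the time axis.
function addKeyframe(data: RawAnimationData, time: number, position: Vec3): RawAnimationData {
  const keyframes = [...data.keyframes, { time, position }].sort((a, b) => a.time - b.time);
  return { keyframes };
}

// Example: building up the original animation data key frame by key frame.
let raw: RawAnimationData = { keyframes: [] };
raw = addKeyframe(raw, 0.0, { x: 0, y: 0, z: 0 });
raw = addKeyframe(raw, 1.0, { x: 2, y: 1, z: 0 });
```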
In some implementations, the raw animation data further includes indication information for indicating a relative position between initial position information corresponding to the at least one frame of key frame and an element node in the virtual scene as a target relative position.
Accordingly, a virtual node can be created in the animation editing scene, at least one frame of key frames in the multi-frame key frames is associated with the virtual node in response to the node association operation, indication information is obtained, and original animation data is obtained according to the multi-frame key frames and the indication information.
Here, the virtual nodes are nodes in which element nodes in the virtual scene are mapped in the animation editing scene. The user can indicate the virtual node to correspond to a certain element node in the virtual scene through the unique identifier.
The node association operation is used for indicating that at least one frame key frame is associated with the virtual node. That is, at least one of the multi-frame key frames is indicated to bind with the virtual node. It should be noted that, after the at least one frame of key frame is associated with the virtual node, the relative position between the control node corresponding to the at least one frame of key frame and the virtual node is maintained as the relative position when the control node is associated with the virtual node. As shown in fig. 13, if the second control node 1302, the third control node 1303, the fourth control node 1304, the fifth control node 1305, the sixth control node 1306, and the virtual node 1308 are bound, the relative positions among the second control node 1302, the third control node 1303, the fourth control node 1304, the fifth control node 1305, the sixth control node 1306, and the virtual node 1308 are all maintained as the target relative positions shown in fig. 13.
Of course, it should be noted that after the control node is bound to the virtual node, the target relative position between the control node and the virtual node may be modified by a first adjustment operation to the control node.
Thus, through this embodiment, the user indicates in the animation editor that at least one frame of key frame is bound to the virtual node, so that when the animation is played, the motion trail of the animation object changes with the real-time position of the element node corresponding to the virtual node, thereby realizing the animation effect of multiple different motion trails through one piece of animation data.
In some implementations, the target animation data further includes rotation parameters for controlling rotation of the animation object, the rotation parameters including a desired upward direction of the animation object, an initial heading direction of the animation object, and a desired heading direction of the animation object.
Here, the rotation parameter is a parameter for rotation control of the animation object, which determines a rotation angle of the animation object when moving along the motion trajectory.
It should be understood that, the detailed description of the desired upward direction of the animation object, the initial orientation direction of the animation object, and the desired orientation direction of the animation object may refer to the relevant descriptions of the above embodiments, and will not be repeated herein.
Accordingly, the desired upward direction, the initial direction of orientation, and the desired direction of orientation corresponding to the animation object may be configured in the animation editing scene in response to the rotation setting operation, and the original animation data may be obtained based on the multi-frame key frame, the desired upward direction, the initial direction of orientation, and the desired direction of orientation.
Here, the rotation setting operation is used to configure rotation parameters (including a desired upward direction, an initial orientation direction, and a desired orientation direction) of the animation object to indicate that when the animation is played, a more natural and realistic animation effect is obtained by performing the rotation operation on the animation object through the rotation parameters.
For example, an auto-rotate component may be added to an animation object by an animation editor, and a desired upward direction, an initial orientation direction, and a desired orientation direction corresponding to the animation object are configured in the auto-rotate component.
Then, the original animation data is obtained based on the key frame, the desired upward direction, the initial orientation direction, and the desired orientation direction.
Therefore, through this implementation, the rotation setting operation allows the rotation angle of the animation object to follow the movement direction of the motion track at any time during movement, without requiring the user to manually adjust the orientation of the animation object at each time point, which greatly improves the efficiency of animation editing.
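By way of assumption only, the rotation parameters configured through the rotation setting operation (for example via an auto-rotate component) could be carried alongside the key frames as three direction vectors; the field names below are illustrative:

```typescript
interface Vec3 { x: number; y: number; z: number; }

// Rotation parameters configured through the rotation setting operation.
interface RotationParams {
  desiredUp: Vec3;      // desired upward direction of the animation object
  initialFacing: Vec3;  // initial orientation direction of the animation object
  desiredFacing: Vec3;  // desired orientation direction of the animation object
}

// Example configuration: keep the object upright, start facing +Z, end facing +X.
const rotationParams: RotationParams = {
  desiredUp: { x: 0, y: 1, z: 0 },
  initialFacing: { x: 0, y: 0, z: 1 },
  desiredFacing: { x: 1, y: 0, z: 0 },
};
```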
FIG. 15 is a schematic diagram of an animation playback system, shown according to some embodiments. As shown in fig. 15, when the animation Editor is started, animation data (AnimationData) is obtained from the CombineAnimation component, and a data copy of the animation data (AnimationDataTemp) is created. The animation editor instantiates a new animation player (Engine) and passes the data copy into the player instance of the animation player. When the Timeline is dragged, the time change is passed to the animation player instance, triggering reRender (re-rendering) to update the scene animation.
Each modification of the AnimationDataTemp data is passed to TrackSystem (the track presentation system), which generates the motion track of the animation object in the animation editing scene. When a control node rendered by TrackSystem is operated, AnimationDataTemp is synchronously modified so that the motion trail of the animation object is updated.
Note that effects generally refer to various visual and auditory elements added to an animation to enhance its attractiveness and expressiveness. These effects may include particle effects (e.g., flame, smoke), light and shadow effects, motion blur, color adjustment, and the like. The term animationJson refers to animation data stored using the JSON (JavaScript Object Notation) format; JSON may be used to describe key frame data, animation curves, timeline information, and so on. CombinedAnimation System (combined animation system) is a system for managing and controlling a plurality of animation elements or animation layers. RotationAnimation (rotational animation) refers to an animation module dedicated to controlling the rotation of an animation object. In the animation editor, a Store refers to a place where animation data is stored or saved, and may be a database or a file system for saving animation items, resources, presets, and the like. NodeList refers to a list comprising a plurality of nodes; in animation and graphical programming, nodes may represent objects, bones, controllers, and the like in a scene, and a NodeList is used to manage and access these nodes. CurveArea refers to the area in the animation editor used to edit and view animation curves. EditArea refers to the main working area in the animation editor used for editing an animation; in this area, the animator can view previews of the animation, adjust the timeline, edit key frames, and perform other animation-related editing operations.
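As a purely hypothetical example of such animationJson data (the field names and structure below are assumptions; the actual format is not specified in this disclosure), it could combine key frames, binding information, and rotation parameters in one object:

```typescript
// Hypothetical shape of animationJson; field names are assumptions.
const animationJson = {
  keyframes: [
    { time: 0.0, position: { x: 0, y: 0, z: 0 } },
    { time: 0.5, position: { x: 2, y: 1, z: 0 }, boundNodeId: "avatar-01" },
    { time: 1.0, position: { x: 4, y: 0, z: 0 }, boundNodeId: "avatar-01" },
  ],
  rotation: {
    desiredUp: { x: 0, y: 1, z: 0 },
    initialFacing: { x: 0, y: 0, z: 1 },
    desiredFacing: { x: 1, y: 0, z: 0 },
  },
  interpolation: "bezier",
};
```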
Fig. 16 is a schematic diagram showing the structure of an animation playing device according to some embodiments. As shown in fig. 16, an embodiment of the present disclosure provides an animation playing device 1600, the animation playing device 1600 may include:
a determining module 1601 configured to determine target animation data, wherein the target animation data comprises a plurality of key frames for recording target position information of an animation object at the key frames;
the obtaining module 1602 is configured to obtain a motion trail formed by target position information corresponding to the multi-frame key frame according to the target animation data;
A rendering module 1603, configured to render and obtain a multi-frame animation frame according to the motion trail and the animation object, wherein the multi-frame animation frame is used for describing the motion of the animation object along the motion trail;
A display module 1604 configured to display the multi-frame animated frames.
Optionally, the target animation data further includes indication information for indicating that a relative position between the target position information corresponding to at least one frame of key frame and the element node in the virtual scene is a target relative position;
the obtaining module 1602 includes:
An acquisition unit configured to acquire node position information of the element node in the virtual scene in response to the target animation data including the instruction information;
the first obtaining unit is configured to adjust target position information corresponding to the at least one frame of key frame according to the node position information and the target relative position, and obtain adjusted target position information, wherein the adjusted target position information is used for enabling the relative position of the adjusted target position information and the element node to be the target relative position;
And a second obtaining unit configured to obtain the motion trajectory based on the adjusted target position information.
Optionally, the rendering module 1603 is specifically configured to:
acquiring the element node from the virtual scene;
rendering to obtain the multi-frame animation frame according to the motion trail, the element nodes and the animation objects;
the display module 1604 is specifically configured to:
and displaying the multi-frame animation frames through an animation playing layer overlapped on the upper layer of the virtual scene.
Optionally, the virtual scene includes a live room page, and the element node includes an avatar of a target account in the live room page, where the target account is a first account for sending out a virtual gift or a second account for receiving the virtual gift.
Optionally, the target animation data further includes a rotation parameter for controlling the animation object to rotate, the rotation parameter including a desired upward direction of the animation object, an initial orientation direction of the animation object, and a desired orientation direction of the animation object;
The rendering module 1603 includes:
A first determining unit configured to obtain, according to the motion trail, first position information of the animation object at a first animation frame in a case where the currently rendered animation frame is the first animation frame, where the first animation frame is an animation frame of a first frame corresponding to the target animation data;
A first rendering unit configured to render the first frame of animation frame according to the first position information and the animation object;
A second determining unit configured to obtain second position information of the animation object at a second animation frame according to the motion trail in a case where the currently rendered animation frame is the second animation frame, wherein the second animation frame is an animation frame other than the animation frame of the first frame in the target animation data;
A third determining unit configured to obtain a current motion direction of the animation object according to the second position information corresponding to the second animation frame and the third position information corresponding to the previous animation frame;
A fourth determination unit configured to obtain a target rotation angle according to the current movement direction, the desired upward direction, the initial facing direction, and the desired facing direction, wherein the target rotation angle is used to enable the animated object to move toward the desired facing direction and to maintain the desired upward direction;
And a second rendering unit configured to render the second animation frame according to the second position information, the target rotation angle, and the animation object.
Optionally, the fourth determining unit includes:
A first angle obtaining unit configured to obtain an initial rotation angle according to the current movement direction and the desired upward direction;
a second angle obtaining unit configured to obtain an initial rotational offset from the initial facing direction and the desired facing direction;
a third angle obtaining unit configured to obtain the target rotation angle based on the initial rotation angle and the initial rotation offset.
Optionally, the first angle obtaining unit is specifically configured to:
determining a first rotation quaternion for rotating a default orientation vector to the current motion direction according to the default orientation vector and the current motion direction in a world coordinate system;
Obtaining a right vector according to the cross product of the current movement direction and the expected upward direction;
Obtaining a new expected upward direction according to the cross product of the right vector and the current movement direction;
Obtaining a rotated upward vector according to the first rotation quaternion and a default upward vector in a world coordinate system;
Obtaining a second rotation quaternion according to the rotated upward vector and the new expected upward direction;
And obtaining the initial rotation angle according to the first rotation quaternion and the second rotation quaternion.
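The steps performed by the first angle obtaining unit can be sketched as follows. This is a non-authoritative illustration: the default orientation and upward vectors, the quaternion composition order (the first rotation applied before the second), and the omission of degenerate cases (e.g., a motion direction parallel to the desired upward direction) are all assumptions made for the sake of the example.

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface Quat { x: number; y: number; z: number; w: number; }

const cross = (a: Vec3, b: Vec3): Vec3 => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const normalize = (v: Vec3): Vec3 => {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
};

// Quaternion rotating unit vector `from` onto unit vector `to`
// (degenerate cases such as opposite vectors are not handled in this sketch).
function fromToRotation(from: Vec3, to: Vec3): Quat {
  const axis = normalize(cross(from, to));
  const angle = Math.acos(Math.min(1, Math.max(-1, dot(from, to))));
  const s = Math.sin(angle / 2);
  return { x: axis.x * s, y: axis.y * s, z: axis.z * s, w: Math.cos(angle / 2) };
}

// Hamilton product: the rotation `a` applied after the rotation `b`.
function mul(a: Quat, b: Quat): Quat {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
  };
}

// Rotate a vector by a quaternion: v' = q * v * q^-1.
function rotate(q: Quat, v: Vec3): Vec3 {
  const qv: Vec3 = { x: q.x, y: q.y, z: q.z };
  const u = cross(qv, v);
  const t = cross(qv, u);
  return {
    x: v.x + 2 * (q.w * u.x + t.x),
    y: v.y + 2 * (q.w * u.y + t.y),
    z: v.z + 2 * (q.w * u.z + t.z),
  };
}

// Initial rotation angle from the current motion direction and the desired
// upward direction, following the steps listed above.
function initialRotation(motionDirection: Vec3, desiredUp: Vec3): Quat {
  const defaultForward: Vec3 = { x: 0, y: 0, z: 1 }; // assumed default orientation vector
  const defaultUp: Vec3 = { x: 0, y: 1, z: 0 };      // assumed default upward vector
  const dir = normalize(motionDirection);
  const q1 = fromToRotation(defaultForward, dir); // first rotation quaternion
  const right = normalize(cross(dir, desiredUp)); // right vector
  const newUp = normalize(cross(right, dir));     // new desired upward direction
  const rotatedUp = rotate(q1, defaultUp);        // rotated upward vector
  const q2 = fromToRotation(rotatedUp, newUp);    // second rotation quaternion
  return mul(q2, q1);                             // combined initial rotation
}
```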
Optionally, the second rendering unit is specifically configured to:
According to the time interval between two adjacent frames of animation frames, interpolating the target rotation angle of the second animation frame and the target rotation angle corresponding to the previous frame of animation frame to obtain a plurality of rotation angles;
And rendering the second animation frame based on the second position information, the plurality of rotation angles and the animation object.
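A hedged sketch of this interpolation, assuming the rotation angles are represented as quaternions and blended with spherical linear interpolation (neither the representation nor the blending method is mandated by the disclosure):

```typescript
interface Quat { x: number; y: number; z: number; w: number; }

// Spherical linear interpolation between two rotations; t in [0, 1].
function slerp(a: Quat, b: Quat, t: number): Quat {
  let d = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
  let bx = b.x, by = b.y, bz = b.z, bw = b.w;
  if (d < 0) { d = -d; bx = -bx; by = -by; bz = -bz; bw = -bw; } // take the shorter arc
  if (d > 0.9995) { // rotations nearly identical: fall back to a normalized linear blend
    const q = { x: a.x + t * (bx - a.x), y: a.y + t * (by - a.y), z: a.z + t * (bz - a.z), w: a.w + t * (bw - a.w) };
    const len = Math.hypot(q.x, q.y, q.z, q.w);
    return { x: q.x / len, y: q.y / len, z: q.z / len, w: q.w / len };
  }
  const theta = Math.acos(d);
  const sa = Math.sin((1 - t) * theta) / Math.sin(theta);
  const sb = Math.sin(t * theta) / Math.sin(theta);
  return { x: sa * a.x + sb * bx, y: sa * a.y + sb * by, z: sa * a.z + sb * bz, w: sa * a.w + sb * bw };
}

// Sample several intermediate rotation angles across the interval between the
// previous animation frame and the second animation frame; the sample count
// would be derived from the time interval between the two adjacent frames.
function sampleRotations(previous: Quat, current: Quat, count: number): Quat[] {
  return Array.from({ length: count }, (_, i) => slerp(previous, current, (i + 1) / count));
}
```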
Optionally, the determining module 1601 includes:
A data acquisition unit configured to acquire original animation data, wherein the original animation data includes a plurality of frames of key frames for recording initial position information of an animation object at the key frames;
A first display unit configured to display a control node corresponding to initial position information of the key frame and a motion trail of the animation object constituted by the control node in an animation editing scene according to the original animation data;
And the adjusting unit is configured to respond to a first adjusting operation for the control node, adjust the initial position information of the key frame corresponding to the control node and obtain target animation data.
The logic of the method executed by each functional module in the animation playing device 1600 may refer to the parts of the method related to the above embodiment, which are not described herein.
Referring next to fig. 17, a schematic diagram of an electronic device (e.g., a terminal device or server) 1700 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 17 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 17, the electronic apparatus 1700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1702 or a program loaded from a storage device 1708 into a Random Access Memory (RAM) 1703. In the RAM 1703, various programs and data necessary for the operation of the electronic device 1700 are also stored. The processing device 1701, the ROM 1702, and the RAM 1703 are connected to each other via a bus 1704. An input/output (I/O) interface 1705 is also connected to the bus 1704.
In general, the following devices may be connected to the I/O interface 1705: input devices 1706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage 1708 including, for example, a magnetic tape, a hard disk, or the like; and a communication device 1709. The communication means 1709 may allow the electronic device 1700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 17 shows an electronic device 1700 with various means, it is to be understood that it is not required to implement or possess all of the illustrated means; more or fewer means may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1709, or installed from the storage device 1708, or installed from the ROM 1702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing apparatus 1701.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the electronic device may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining target animation data, wherein the target animation data comprises multi-frame key frames, and the key frames are used for recording target position information of an animation object at the key frames; according to the target animation data, a motion track formed by target position information corresponding to the multi-frame key frame is obtained; rendering to obtain a multi-frame animation frame according to the motion trail and the animation object, wherein the multi-frame animation frame is used for describing the motion of the animation object along the motion trail; and displaying the multi-frame animation frame.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not constitute a limitation on the module itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in this disclosure is not limited to the specific combinations of the features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims. The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method and will not be elaborated upon here.
Claims (13)
1. An animation playing method, comprising:
Determining target animation data, wherein the target animation data comprises multi-frame key frames, and the key frames are used for recording target position information of an animation object at the key frames;
according to the target animation data, a motion track formed by target position information corresponding to the multi-frame key frame is obtained;
Rendering to obtain a multi-frame animation frame according to the motion trail and the animation object, wherein the multi-frame animation frame is used for describing the motion of the animation object along the motion trail;
And displaying the multi-frame animation frame.
2. The method according to claim 1, wherein the target animation data further comprises indication information for indicating that a relative position between target position information corresponding to at least one frame of key frame and an element node in the virtual scene is a target relative position;
The step of obtaining a motion trail formed by the target position information corresponding to the multi-frame key frame according to the target animation data comprises the following steps:
Responding to the target animation data comprising the indication information, and acquiring node position information of the element node in the virtual scene;
Adjusting target position information corresponding to the at least one frame of key frame according to the node position information and the target relative position to obtain adjusted target position information, wherein the adjusted target position information is used for enabling the relative position of the adjusted target position information and the element node to be the target relative position;
and obtaining the motion trail based on the adjusted target position information.
3. The method according to claim 2, wherein rendering the multi-frame animation frame according to the motion trail and the animation object comprises:
acquiring the element node from the virtual scene;
rendering to obtain the multi-frame animation frame according to the motion trail, the element nodes and the animation objects;
the displaying the multi-frame animation frame comprises the following steps:
and displaying the multi-frame animation frames through an animation playing layer overlapped on the upper layer of the virtual scene.
4. The method of claim 2, wherein the virtual scene comprises a live room page, and the element node comprises an avatar of a target account in the live room page, the target account being a first account to send out a virtual gift or a second account to accept a virtual gift.
5. The method of claim 1, wherein the target animation data further comprises rotation parameters for controlling the animation object to rotate, the rotation parameters comprising a desired upward direction of the animation object, an initial heading direction of the animation object, and a desired heading direction of the animation object;
rendering to obtain a multi-frame animation frame according to the motion trail and the animation object, wherein the multi-frame animation frame comprises the following steps:
Under the condition that the currently rendered animation frame is a first animation frame, obtaining first position information of the animation object at the first animation frame according to the motion track, wherein the first animation frame is an animation frame of a first frame corresponding to the target animation data;
rendering to obtain the first frame of animation frame according to the first position information and the animation object;
Obtaining second position information of the animation object at the second animation frame according to the motion track under the condition that the currently rendered animation frame is the second animation frame, wherein the second animation frame is an animation frame except for the animation frame of the first frame in the target animation data;
obtaining the current motion direction of the animation object according to the second position information corresponding to the second animation frame and the third position information corresponding to the previous animation frame;
Obtaining a target rotation angle according to the current movement direction, the desired upward direction, the initial heading direction and the desired heading direction, wherein the target rotation angle is used for enabling the animation object to move towards the desired heading direction and keeping the desired upward direction;
and rendering the second animation frame according to the second position information, the target rotation angle and the animation object.
6. The method of claim 5, wherein the obtaining a target rotation angle based on the current direction of motion, the desired upward direction, the initial heading direction, and the desired heading direction comprises:
obtaining an initial rotation angle according to the current movement direction and the expected upward direction;
obtaining an initial rotational offset according to the initial facing direction and the desired facing direction;
And obtaining the target rotation angle according to the initial rotation angle and the initial rotation offset.
7. The method of claim 6, wherein said obtaining an initial rotation angle based on said current direction of motion and said desired upward direction comprises:
determining a first rotation quaternion for rotating a default orientation vector to the current motion direction according to the default orientation vector and the current motion direction in a world coordinate system;
Obtaining a right vector according to the cross product of the current movement direction and the expected upward direction;
Obtaining a new expected upward direction according to the cross product of the right vector and the current movement direction;
Obtaining a rotated upward vector according to the first rotation quaternion and a default upward vector in a world coordinate system;
Obtaining a second rotation quaternion according to the rotated upward vector and the new expected upward direction;
And obtaining the initial rotation angle according to the first rotation quaternion and the second rotation quaternion.
8. The method of claim 5, wherein rendering the second animation frame based on the second position information, the target rotation angle, and the animation object comprises:
According to the time interval between two adjacent frames of animation frames, interpolating the target rotation angle of the second animation frame and the target rotation angle corresponding to the previous frame of animation frame to obtain a plurality of rotation angles;
Rendering the second animation frame based on the second position information, the plurality of rotation angles and the animation object.
9. The method according to any one of claims 1 to 8, wherein the determining target animation data includes:
Acquiring original animation data, wherein the original animation data comprises multi-frame key frames, and the key frames are used for recording initial position information of an animation object at the key frames;
Displaying control nodes corresponding to the initial position information of the key frames and the motion trail of the animation objects formed by the control nodes in an animation editing scene according to the original animation data;
And responding to a first adjustment operation for the control node, and adjusting initial position information of a key frame corresponding to the control node to obtain target animation data.
10. An animation playback apparatus, comprising:
a determining module configured to determine target animation data, wherein the target animation data comprises a plurality of frames of key frames, the key frames being used for recording target position information of an animation object at the key frames;
the obtaining module is configured to obtain a motion trail formed by target position information corresponding to the multi-frame key frames according to the target animation data;
The rendering module is configured to render and obtain multi-frame animation frames according to the motion trail and the animation objects, wherein the multi-frame animation frames are used for describing the motion of the animation objects along the motion trail;
and the display module is configured to display the multi-frame animation frame.
11. A computer readable medium on which a computer program is stored, characterized in that the computer program, when being executed by a processing device, carries out the steps of the method according to any one of claims 1 to 9.
12. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method of any one of claims 1 to 9.
13. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410814627.4A CN118710779A (en) | 2024-06-21 | 2024-06-21 | Animation playing method, device, medium, electronic equipment and program product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410814627.4A CN118710779A (en) | 2024-06-21 | 2024-06-21 | Animation playing method, device, medium, electronic equipment and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118710779A true CN118710779A (en) | 2024-09-27 |
Family
ID=92807212
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410814627.4A Pending CN118710779A (en) | 2024-06-21 | 2024-06-21 | Animation playing method, device, medium, electronic equipment and program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118710779A (en) |
- 2024-06-21 CN CN202410814627.4A patent/CN118710779A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11663785B2 (en) | Augmented and virtual reality | |
CA2669409C (en) | Method for scripting inter-scene transitions | |
US9240070B2 (en) | Methods and systems for viewing dynamic high-resolution 3D imagery over a network | |
US20130321396A1 (en) | Multi-input free viewpoint video processing pipeline | |
CN112053449A (en) | Augmented reality-based display method, device and storage medium | |
KR20070086037A (en) | Method for inter-scene transitions | |
WO2018102013A1 (en) | Methods, systems, and media for enhancing two-dimensional video content items with spherical video content | |
US11238657B2 (en) | Augmented video prototyping | |
JP7300563B2 (en) | Display method, device, and storage medium based on augmented reality | |
CN112672185B (en) | Augmented reality-based display method, device, equipment and storage medium | |
KR20130133319A (en) | Apparatus and method for authoring graphic user interface using 3d animations | |
CN112884908A (en) | Augmented reality-based display method, device, storage medium, and program product | |
CN102411791A (en) | Method and equipment for dynamic still image | |
US20230120437A1 (en) | Systems for generating dynamic panoramic video content | |
KR20210030384A (en) | 3D transition | |
US11763506B2 (en) | Generating animations in an augmented reality environment | |
US9773524B1 (en) | Video editing using mobile terminal and remote computer | |
CA3199128A1 (en) | Systems and methods for augmented reality video generation | |
CN111862273B (en) | Animation processing method, device, electronic equipment and storage medium | |
US9558578B1 (en) | Animation environment | |
CN109636917B (en) | Three-dimensional model generation method, device and hardware device | |
CN113614710A (en) | Device for presenting a presentation of data and associated method | |
CN118710779A (en) | Animation playing method, device, medium, electronic equipment and program product | |
CN118537455A (en) | Animation editing method, playing method, medium, electronic device, and program product | |
GB2535143A (en) | System and method for manipulating audio data in view of corresponding visual data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |