CN116934913A - Animation generation method, device, equipment and storage medium

Info

Publication number: CN116934913A
Application number: CN202310917382.3A
Authority: CN (China)
Prior art keywords: position information, frame image, current frame, rendered, bone
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 满溢芳 (Man Yifang), 王博 (Wang Bo)
Current Assignee: Netease Hangzhou Network Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures

Abstract

The application provides an animation generation method, apparatus, device, and storage medium, and relates to the field of computer technology. The method includes: acquiring a current frame image in an animation sequence to be rendered, where the current frame image includes at least one object to be rendered; acquiring, according to a frame identifier of the current frame image, initial position information of each bone node in a bone node set corresponding to the object to be rendered on the current frame image; determining offset position information of each bone node corresponding to the current frame image according to a pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image; and rendering the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image, and outputting the rendered animation sequence. This can improve the efficiency of generating an animation with a fluttering effect.

Description

Animation generation method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular, to an animation generation method, apparatus, device, and storage medium.
Background
In animation production, some parts of a virtual character or scene, such as the character's clothing, hair, or ribbons, need to exhibit a fluttering effect.
At present, an animation with a fluttering effect can be produced with a dynamics plug-in. This approach not only has to handle complex issues such as boundary conditions, but also requires tuning a large number of parameters that influence the fluttering effect.
Therefore, the conventional way of producing a fluttering effect reduces the efficiency of generating an animation with that effect.
Disclosure of Invention
In view of the above drawbacks of the related art, an object of the present application is to provide an animation generation method, apparatus, device, and storage medium that can improve the efficiency of generating an animation with a fluttering effect.
In order to achieve the above purpose, the technical solutions adopted by the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides an animation generation method, including:
acquiring a current frame image in an animation sequence to be rendered, where the current frame image includes at least one object to be rendered;
acquiring, according to a frame identifier of the current frame image, initial position information of each bone node included in a bone node set corresponding to the object to be rendered on the current frame image;
determining offset position information of each bone node corresponding to the current frame image according to a pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image, where the curve represents how the positional offset of a same bone node, corresponding to each frame image in the animation sequence, varies with the frame identifier;
and rendering the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image, and outputting a rendered animation sequence, where the rendered animation sequence includes the rendered image corresponding to the current frame image.
In a second aspect, an embodiment of the present application further provides an animation generating apparatus, including:
the first acquisition module is used for acquiring a current frame image in an animation sequence to be rendered, where the current frame image includes at least one object to be rendered;
the second acquisition module is used for acquiring, according to a frame identifier of the current frame image, initial position information of each bone node included in a bone node set corresponding to the object to be rendered on the current frame image;
the determining module is used for determining offset position information of each bone node corresponding to the current frame image according to a pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image, where the curve represents how the positional offset of a same bone node, corresponding to each frame image in the animation sequence, varies with the frame identifier;
the rendering module is used for rendering the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image, and outputting a rendered animation sequence, where the rendered animation sequence includes the rendered image corresponding to the current frame image.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium, and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the animation generation method of the first aspect described above.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the animation generation method of the first aspect described above.
The beneficial effects of the application are as follows:
the embodiments of the present application provide an animation generation method, apparatus, device, and storage medium. After the initial position information of each bone node in the bone node set corresponding to the object to be rendered on the current frame image is obtained, the offset position information of each bone node corresponding to the current frame image can be determined based on a pre-constructed curve and the frame identifier of the current frame image, and the animation sequence to be rendered is then rendered according to the initial position information and the offset position information of each bone node, so that the rendered object on each frame image of the rendered animation sequence exhibits a fluttering effect. The coordinates of each point on the curve indicate how the positional offset of a same bone node, corresponding to each frame image in the animation sequence, varies with the frame identifier, so the offset position information of each bone node of the object to be rendered on each frame image can be determined directly and quickly from simple curve parameters, and the position of each bone node on each frame image can in turn be modified quickly so that the rendered object presents the expected fluttering effect. That is, the rendered object on each frame image of the final rendered animation sequence not only has a smooth and natural fluttering effect, but the efficiency of generating the animation with the fluttering effect is also improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of an animation generation method according to an embodiment of the present application;
FIG. 2 is a flowchart of another animation generation method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating another animation generation method according to an embodiment of the present application;
FIG. 4 is a flowchart of another animation generation method according to an embodiment of the present application;
FIG. 5 is a flowchart of another animation generation method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an animation generating device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The animation generation method in one embodiment of the application can be operated on a local terminal device or a server. When the animation generation method is run on a server, the method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications may be run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture; the storage and running of the animation generation method are completed on the cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, while the device that performs information processing is the cloud game server in the cloud. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as the game picture, and returns the data to the client device through the network; finally, the data is decoded by the client device and the game picture is output.
In an alternative embodiment, taking a game as an example, the local terminal device stores the game program and is used to present the game picture. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The manner in which the local terminal device provides the graphical user interface to the player may vary; for example, the interface may be rendered on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game picture, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In a possible implementation manner, the embodiment of the application provides an animation generation method, and a graphical user interface is provided through a terminal device, wherein the terminal device can be the aforementioned local terminal device or the aforementioned client device in the cloud interaction system.
The animation generation method according to the present application is illustrated in the following description with reference to the accompanying drawings. The execution subject of the method may be the server mentioned above, and the graphical user interface for displaying the game may be rendered on the terminal device that exchanges data with the server. The game may specifically be a game played in matches, such as a multiplayer game or a battle royale game, or a game of another type; the present application is not limited thereto.
The animation generation method according to the present application is exemplified below. Fig. 1 is a schematic flowchart of an animation generation method according to an embodiment of the present application. As shown in Fig. 1, the method may include:
s101, acquiring a current frame image in an animation sequence to be rendered.
The current frame image includes at least one object to be rendered, and the object to be rendered may be clothing, hair, an accessory, a ribbon, or the like on a virtual character. For example, the objects to be rendered mounted on the virtual character in the current frame image are an accessory and a ribbon. It should be noted that if there are multiple objects to be rendered, each object can be processed according to the animation generation method provided by the present application; the following examples assume an image containing a single object to be rendered.
It will be appreciated that a game scene may be displayed on the graphical user interface of the terminal device, the game scene including a virtual character carrying an object to be rendered (e.g., hair). For example, if a fluttering effect needs to be rendered on the hair of the virtual character in the game scene, the animation frame sequence to be rendered may be obtained from the associated storage device according to a preset animation sequence length and the current game scene information, and the image currently being processed in the animation sequence to be rendered is referred to as the current frame image. In another example, if a fluttering effect needs to be rendered on the hair of the virtual character in the game scene, an image corresponding to the current game scene information is obtained from the storage device and used as the current frame image.
S102, acquiring, according to the frame identifier of the current frame image, initial position information of each bone node included in the bone node set corresponding to the object to be rendered on the current frame image.
It can be understood that bone binding is a step in three-dimensional animation production: a bone system is built for an already-produced three-dimensional model of a character, an animal, or the like. The bone system is generally divided into two parts, the character body bones and the bones of mounted objects. The character body bones are generally used to build the character body and can drive the virtual character model to perform body motions, i.e., the character animation corresponding to the character body bones. The mounted-object bones are generally used to build an object bone chain attached to the character body bones; the object bone chain is the bone chain of a game object mounted on the character body bones, and a connector bone node in the character body bones is connected to the first bone node in the object bone chain. Based on this, the storage device may pre-store, for the frame identifier of each image, the bone node set associated with the character body bones and the bone node set associated with the mounted-object bone chain, and may also pre-store the initial position information of each bone node, where the initial position information may be represented by a bone space matrix that includes the position parameter, direction parameter, angle parameter, and other dimensional parameters of the corresponding bone node.
As an example, since the correspondence between frame identifiers and bone nodes is pre-stored in the storage device, the bone node set corresponding to the object to be rendered on the current frame image can be read from the storage device based on this correspondence and the frame identifier of the current frame image. The bone node set corresponding to the object to be rendered includes the bone node set associated with the mounted-object bone chain and the connector bone node from the bone node set associated with the character body bones; the connector bone node may also be referred to as the root node of the mounted-object bone chain. After the bone node set corresponding to the object to be rendered is determined, the initial position information of each bone node, i.e., its bone space matrix, can be read from the storage device according to the identifier of each bone node in the set.
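As an illustrative sketch only, the per-frame lookup described above could be organized as follows in Python. The names (BoneNode, BoneStore, frame_to_bone_ids, and so on) are hypothetical and not taken from the patent; the 4x4 matrix stands in for the bone space matrix carrying position, direction, and angle parameters.

```python
from __future__ import annotations

from dataclasses import dataclass

import numpy as np


@dataclass
class BoneNode:
    bone_id: str
    parent_id: str | None      # the connector (root) node has no parent
    local_matrix: np.ndarray   # 4x4 bone space matrix (position/direction/angle)


class BoneStore:
    """Hypothetical storage device: pre-stored bone node sets and matrices."""

    def __init__(self) -> None:
        # frame identifier -> identifiers of the bone node set for that frame
        self.frame_to_bone_ids: dict[int, list[str]] = {}
        # bone node identifier -> pre-stored node data (initial position info)
        self.bones: dict[str, BoneNode] = {}

    def bone_set_for_frame(self, frame_id: int) -> list[BoneNode]:
        """S102: read the bone node set for one frame, then fetch each
        node's pre-stored initial position information (bone space matrix)."""
        return [self.bones[b] for b in self.frame_to_bone_ids[frame_id]]
```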
S103, determining offset position information of each bone node corresponding to the current frame image according to the pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image.
The curve represents how the positional offset of a same bone node, corresponding to each frame image in the animation sequence, varies with the frame identifier. It can be understood that the first coordinate of the curve may represent a time point and the second coordinate the offset corresponding to that time point, where the first coordinate may be the abscissa and the second coordinate the ordinate. Each frame image of the obtained animation sequence contains the same bone node (for example, a first bone node), this same bone node has corresponding initial position information on each frame image, and the order of each frame image in the animation sequence to be rendered can be represented by the frame identifier.
The following examples are described by taking the first coordinate as the abscissa and the second coordinate as the ordinate.
As an example, the points on the curve can be described in terms of the same bone node across the frame images. The curve may be divided into a number of points equal to the number of images in the animation sequence to be rendered, with the same bone node corresponding to those points on the curve. Taking the first bone node as the example: if the current frame image is the first frame image in the animation sequence to be rendered, the first bone node of the object to be rendered on the current frame image corresponds to the point at the start of the curve, i.e., the offset position information of the first bone node is related to the ordinate of the point at the curve's start; if the current frame image is the last frame image in the animation sequence to be rendered, the first bone node corresponds to the point at the end of the curve, i.e., its offset position information is related to the ordinate of the point at the curve's end.
In another example, the points on the curve can be described in terms of the bone nodes of the current frame image. If the current frame image is the first frame image in the animation sequence to be rendered, each bone node of the object to be rendered on that image corresponds to the point at the start of the curve, i.e., the offset position information of each such bone node is related to the ordinate of the point at the curve's start; if the current frame image is the last frame image, each bone node corresponds to the point at the end of the curve, i.e., the offset position information of each such bone node is related to the ordinate of the point at the curve's end.
Based on the above description, the initial offset of each bone node of the object to be rendered on the current frame image can be determined from the pre-constructed curve and the frame identifier of the current frame image, and the offset position information of each bone node corresponding to the current frame image is then determined from this initial offset and the initial position information of each bone node.
S104, rendering the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image, and outputting the rendered animation sequence.
The rendered animation sequence includes the rendered image corresponding to the current frame image. It can be understood that the rendered object on each frame image of the rendered animation sequence can present a natural fluttering effect, such as naturally waving hair.
Illustratively, each bone node of the object to be rendered on the current frame image is associated with a set of data (initial position information and offset position information). Taking the first bone node as an example, its initial position information is corrected based on its offset position information to determine its target position information. In this way, the target position information of each bone node of the object to be rendered on each frame image of the animation sequence to be rendered can finally be obtained; the animation sequence to be rendered is then rendered according to this target position information, and the rendered animation sequence can be output and recorded in a pre-configured buffer.
In another example, the animation sequence to be rendered is rendered according to the target position information of each bone node of the object to be rendered on each frame image, and the rendered animation sequence is directly displayed on the graphical user interface of the terminal device.
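Putting S101 to S104 together, one possible per-frame flow is sketched below. It reuses the hypothetical BoneStore above; sample_curve, reference_matrix, offset_position, and superpose are illustrative stand-ins sketched alongside the later figures, and draw is assumed to hand the target matrices to the renderer.

```python
def render_animation(frames, store, curve):
    """Sketch of S101-S104 for every frame of the sequence to be rendered."""
    rendered = []
    for frame in frames:                                  # S101: current frame image
        bones = store.bone_set_for_frame(frame.frame_id)  # S102: initial positions
        offset = sample_curve(curve, frame.frame_id, len(frames))  # S103 (Fig. 2)
        targets = {}
        for bone in bones:
            ref = reference_matrix(bone, store)           # Fig. 3: walk the bone chain
            off_m = offset_position(ref, offset)          # offset position information
            targets[bone.bone_id] = superpose(bone.local_matrix, off_m)  # Fig. 5
        rendered.append(draw(frame, targets))             # assumed renderer call
    return rendered                                       # rendered animation sequence
```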
In summary, in the animation generation method provided by the present application, after the initial position information of each bone node in the bone node set corresponding to the object to be rendered on the current frame image is obtained, the offset position information of each bone node corresponding to the current frame image can be determined based on the pre-constructed curve and the frame identifier of the current frame image, and the animation sequence to be rendered is then rendered according to the initial position information and the offset position information of each bone node, so that the rendered object on each frame image of the rendered animation sequence exhibits a fluttering effect. The coordinates of each point on the curve indicate how the positional offset of a same bone node, corresponding to each frame image in the animation sequence, varies with the frame identifier, so the offset position information of each bone node of the object to be rendered on each frame image can be determined directly and quickly from simple curve parameters, and the position of each bone node on each frame image can in turn be modified quickly so that the rendered object presents the expected fluttering effect. That is, the rendered object on each frame image of the final rendered animation sequence not only has a smooth and natural fluttering effect, but the efficiency of generating the animation with the fluttering effect is also improved.
Fig. 2 is a flowchart of another animation generation method according to an embodiment of the present application. Optionally, as shown in Fig. 2, determining the offset position information of each bone node corresponding to the current frame image according to the pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image includes:
S201, based on the correspondence between the first coordinates on the curve and the frame identifier of the current frame image, determining the initial offset of each bone node corresponding to the current frame image according to the offset corresponding to each first coordinate on the curve.
S202, determining the offset position information of each bone node corresponding to the current frame image according to the initial offset and the initial position information of each bone node corresponding to the current frame image.
From the above description, the frame identifier of the current frame image can be used to indicate the order of the current frame image in the animation sequence to be rendered. The constructed curve can be divided into a number of points according to the number of images in the animation sequence to be rendered, i.e., each point corresponds to a frame identifier, and each point on the curve has a first coordinate and a second coordinate, the second coordinate being the offset corresponding to the first coordinate. After the frame identifier of the current frame image is determined, the target point corresponding to that frame identifier can be obtained from the correspondence between the points on the curve and the frame identifiers, and the ordinate of the target point is then used as the initial offset of each bone node of the object to be rendered on the current frame image. It can be seen that the initial offsets of all bone nodes of the object to be rendered on the same frame image are identical, and the initial offsets of the same bone node on different frame images correspond to the second coordinates of the successive points on the curve, i.e., the same bone node moves along the curve across frames.
Alternatively, the first coordinate may be the abscissa and the second coordinate the ordinate; it should be noted that the present application is not limited thereto.
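A minimal sketch of this frame-identifier-to-curve lookup follows, assuming frame identifiers are 1-based sequence indices and the curve is pre-divided into (first coordinate, offset) pairs; choosing the nearest pre-divided point is an illustrative detail, not something the patent prescribes.

```python
import numpy as np


def sample_curve(curve_points: np.ndarray, frame_id: int, num_frames: int) -> float:
    """S201: map a frame identifier to its target point on the curve and
    return that point's second coordinate (ordinate) as the initial offset.

    curve_points is an (N, 2) array of (first coordinate, offset) pairs.
    """
    # The first frame maps to the curve's start, the last frame to its end.
    t = (frame_id - 1) / max(num_frames - 1, 1)   # normalized order in [0, 1]
    idx = round(t * (len(curve_points) - 1))      # nearest pre-divided point
    return float(curve_points[idx, 1])            # ordinate = initial offset
```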
Each bone node of the object to be rendered on the current frame image is associated with a set of data (initial offset and initial position information). The reference position information of each bone node is determined from the bone space matrix corresponding to the initial position information of each bone node of the object to be rendered on the current frame image; the offset position information is then calculated from the reference position information of each bone node and the initial offset, yielding the offset position information of each bone node of the object to be rendered on the current frame image. In the same way, the offset position information of each bone node corresponding to every frame image in the animation sequence to be rendered can be obtained.
It can be seen that the present application makes an object (e.g., hair) move along a curve by means of the parameters on the curve, thereby achieving a fluttering effect, so that bone movements can be adjusted more flexibly and easily, simplifying the process of producing an animation with a fluttering effect.
Fig. 3 is a flowchart of another animation generation method according to an embodiment of the present application. Optionally, as shown in Fig. 3, determining the offset position information of each bone node corresponding to the current frame image according to the initial offset and the initial position information of each bone node corresponding to the current frame image includes:
S301, determining initial position information of the associated bone nodes preceding the currently to-be-processed bone node according to the positional relationship between the bone nodes corresponding to the current frame image and their initial position information.
The currently to-be-processed bone node is any bone node among the bone nodes, such as the first bone node, the second bone node, or the last bone node in the bone node set associated with the mounted-object bone chain, or a bone node at another position on the chain; the present application is not limited thereto.
For example, if the currently to-be-processed bone node is the first bone node, the initial position information of the associated bone node preceding the first bone node may be determined according to the positional relationship between the bone nodes in the bone node set of the object to be rendered on the current frame image. It can be understood that the associated bone node preceding the first bone node is the above-mentioned connector bone node, and if the currently to-be-processed bone node is the second bone node, the determined associated bone nodes preceding it are the connector bone node and the first bone node.
S302, determining the reference position information of the currently to-be-processed bone node according to the initial position information of the associated bone nodes and the initial position information of the currently to-be-processed bone node, and replacing the initial position information of the currently to-be-processed bone node with the reference position information.
The initial position information may be represented by a bone space matrix. If the currently to-be-processed bone node is the first bone node, its associated bone node is the connector bone node: the bone space matrix of the connector bone node is multiplied by the bone space matrix of the first bone node, and the product is used as the reference position information of the first bone node.
Meanwhile, the initial position information of the first bone node is replaced with the corresponding reference position information. If the currently to-be-processed bone node is the second bone node, its associated bone nodes are the connector bone node and the first bone node: the bone space matrix of the connector bone node, the bone space matrix corresponding to the reference position information of the first bone node, and the bone space matrix of the second bone node are multiplied, the product is used as the reference position information of the second bone node, and the initial position information of the second bone node is likewise replaced with the corresponding reference position information.
S303, determining the offset position information of the currently to-be-processed bone node corresponding to the current frame image according to the reference position information of the currently to-be-processed bone node and the initial offset.
Continuing with the first bone node as the example: the reference position information and the initial offset of the first bone node are input into the function for calculating offset position information, and the offset position information of the first bone node is obtained. In the same way, the offset position information of each bone node corresponding to the current frame image can be obtained.
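The matrix accumulation of S301-S302 and the offset computation of S303 might look like the sketch below. The patent leaves the "function for calculating offset position information" abstract; applying the initial offset as a translation is purely an illustrative choice.

```python
import numpy as np


def reference_matrix(bone, store) -> np.ndarray:
    """S301-S302: walk from the currently to-be-processed bone node back to
    the connector node, accumulating the bone space matrices of the
    associated bone nodes along the mounted-object bone chain."""
    m = bone.local_matrix
    parent_id = bone.parent_id
    while parent_id is not None:
        parent = store.bones[parent_id]
        m = parent.local_matrix @ m      # parent matrix times child matrix
        parent_id = parent.parent_id
    return m                             # reference position information


def offset_position(ref: np.ndarray, initial_offset: float) -> np.ndarray:
    """S303: combine the reference position information with the initial
    offset; here the offset becomes a translation, as an illustration."""
    offs = np.eye(4)
    offs[:3, 3] = initial_offset         # offset applied on all three axes
    return ref @ offs                    # offset position info in matrix form
```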
Optionally, determining the offset position information of the currently to-be-processed bone node corresponding to the current frame image according to its reference position information and the initial offset includes: determining the offset position information of the currently to-be-processed bone node corresponding to the current frame image according to its reference position information, the initial offset, and an influence parameter, where the influence parameter includes at least one of the following: a game scene parameter, a delay parameter corresponding to the bone node, and a noise parameter corresponding to the bone node.
The reference position information, the initial offset, and the influence parameter of the currently to-be-processed bone node can be input into the function for calculating offset position information, yielding the offset position information of the currently to-be-processed bone node.
This can be implemented by superposing the initial offset of the currently to-be-processed bone node with the influence parameter to obtain a superposition result, multiplying the superposition result by the bone space matrix corresponding to the reference position information of the currently to-be-processed bone node, and taking the product as the offset position information; it can be understood that the offset position information can also be expressed in matrix form.
In one example, the influence parameter is a game scene parameter, which can be obtained from the state information of the current game scene, such as wind speed or gravitational acceleration; such game scene parameters influence the fluttering effect of the object to be rendered. The initial offset of the currently to-be-processed bone node is superposed with the current game scene parameter, and the offset position information of the currently to-be-processed bone node is then calculated from the superposition result and the reference position information according to the corresponding function. It can be seen that when the offset position information of the currently to-be-processed bone node is determined, not only the reference position information and the initial offset but also game scene parameters are introduced, so that the fluttering effect of the rendered object in the final target animation sequence better matches the game scene and looks more realistic.
In another example, the influence parameters include the delay parameter and the noise parameter corresponding to the bone node, which are generally set, according to actual requirements, for the bone nodes other than the first bone node in the bone node set associated with the mounted-object bone chain; the delay parameter and the noise parameter corresponding to each bone node identifier are pre-stored in the storage device. Taking the second bone node as the currently to-be-processed bone node and the noise parameter as the influence parameter: the noise parameter corresponding to the second bone node is read from the storage device according to its identifier, the initial offset of the second bone node is superposed with the noise parameter, and the offset position information of the second bone node is calculated from the superposition result and the reference position information according to the corresponding function. It can be seen that when the offset position information of each bone node of the object to be rendered on the current frame image is determined, not only the reference position information and the initial offset but also noise parameters and/or delay parameters are introduced, so that the style of the final target animation sequence can be controlled; for example, the rendered object in the target animation sequence can flutter in an anime (二次元) style.
Optionally, the influence parameters include the delay parameter and the noise parameter corresponding to the bone node as well as the game scene parameters, so that the fluttering effect of the rendered object in the final target animation sequence better matches the game scene, looks more realistic, and lets the rendered object flutter in the desired style.
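A sketch of this superposition with influence parameters, under the simple assumption that the scene, delay, and noise parameters combine additively with the initial offset before the result is multiplied into the reference matrix; the additive form and the scalar parameters are illustrative.

```python
import numpy as np


def offset_with_influence(ref: np.ndarray, initial_offset: float,
                          scene: float = 0.0, delay: float = 0.0,
                          noise: float = 0.0) -> np.ndarray:
    """Superpose the initial offset with the influence parameters (game
    scene parameter such as wind speed, per-node delay, per-node noise),
    then multiply the result into the reference bone space matrix."""
    combined = initial_offset + scene + delay + noise   # superposition result
    offs = np.eye(4)
    offs[:3, 3] = combined                              # expressed as translation
    return ref @ offs                                   # offset position information
```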
Fig. 4 is a flowchart of another animation generation method according to an embodiment of the present application. Optionally, as shown in Fig. 4, determining the offset position information of each bone node corresponding to the current frame image according to the pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image includes:
S401, determining the curve corresponding to the object to be rendered according to a pre-constructed mapping relationship between curves and objects.
S402, determining the offset position information of each bone node corresponding to the current frame image according to the curve corresponding to the object to be rendered, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image.
The pre-constructed curves may be of multiple types, and each type of curve has a mapping relationship with an object; for example, the curve corresponding to the object to be rendered can be determined from the mapping between curves and object identifiers together with the identifier of the object to be rendered.
Taking hair as the object, for example, when the curve corresponding to hair is constructed, the shape, length, amplitude, and other properties of the curve can be defined according to the expected hair-fluttering effect: if the expected effect is a strong sense of rhythm and stability, a curve with a larger amplitude and a more uniform waveform can be constructed; if the expected effect is real and natural, a curve whose waveform approximates realistic motion can be constructed. That is, the finally generated animation with a fluttering effect is highly controllable.
After the curve corresponding to the object to be rendered is determined, the initial offset of each bone node of the object to be rendered on the current frame image is determined from the curve and the frame identifier of the current frame image, and the initial offset and the initial position information of each bone node are then input into the function for calculating offset position information, yielding the offset position information of each bone node corresponding to the current frame image.
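A sketch of the object-to-curve mapping of S401 follows; the two curve shapes merely echo the "strong rhythm" versus "real and natural" styles described above, and every identifier here is hypothetical.

```python
import numpy as np


def build_curve(kind: str, n: int = 256) -> np.ndarray:
    """Pre-construct a curve as (first coordinate, offset) pairs."""
    x = np.linspace(0.0, 1.0, n)
    if kind == "rhythmic":   # larger amplitude, uniform waveform
        y = 0.3 * np.sin(4 * np.pi * x)
    else:                    # "natural": gentler, slightly irregular waveform
        y = 0.1 * np.sin(2 * np.pi * x) + 0.03 * np.sin(5 * np.pi * x)
    return np.stack([x, y], axis=1)


# Mapping relationship between object identifiers and pre-constructed curves.
CURVE_FOR_OBJECT = {
    "hair": build_curve("natural"),
    "ribbon": build_curve("rhythmic"),
}


def curve_for(object_id: str) -> np.ndarray:
    """S401: determine the curve corresponding to the object to be rendered."""
    return CURVE_FOR_OBJECT[object_id]
```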
Fig. 5 is a flowchart of another animation generation method according to an embodiment of the present application. Optionally, as shown in Fig. 5, rendering the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image includes:
S501, superposing the initial position information and the offset position information of the same bone node corresponding to the current frame image to obtain the target position information of each bone node corresponding to the current frame image.
S502, rendering the animation sequence to be rendered according to the target position information of each bone node corresponding to the current frame image and outputting the rendered animation sequence.
Illustratively, each bone node on the mounted-object bone chain of the object to be rendered in the current frame image is associated with a set of data that includes the initial position information and the offset position information. The set of data associated with each bone node is superposed, i.e., the position parameters in the bone space matrix corresponding to the initial position information are added to the position parameters in the bone space matrix corresponding to the offset position information, to obtain the target position information of each bone node, i.e., the target bone space matrix. That is, each bone node corresponding to each frame image in the animation sequence to be rendered is associated with a target bone space matrix, and the object to be processed on each frame image can be rendered based on the target bone space matrices associated with the bone nodes of that frame image to obtain the rendered animation sequence. The rendered animation sequence can be recorded in a pre-configured buffer, or directly displayed on the graphical user interface of the terminal device, so that an object (such as hair) with a smooth and natural fluttering effect is shown on the graphical user interface.
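S501's addition of position parameters might be sketched as follows; keeping the rotation part of the initial matrix unchanged is an illustrative simplification.

```python
import numpy as np


def superpose(initial: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """S501: add the position parameters of the initial and offset bone
    space matrices to obtain the target bone space matrix."""
    target = initial.copy()
    target[:3, 3] = initial[:3, 3] + offset[:3, 3]   # add position parameters
    return target                                     # target position information
```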
Fig. 6 is a schematic structural diagram of an animation generating device according to an embodiment of the present application. As shown in fig. 6, the apparatus includes:
a first obtaining module 601, configured to obtain a current frame image in an animation sequence to be rendered, where the current frame image includes at least one object to be rendered;
a second obtaining module 602, configured to obtain, according to the frame identifier of the current frame image, initial position information of each bone node included in the bone node set corresponding to the object to be rendered on the current frame image;
a determining module 603, configured to determine offset position information of each bone node corresponding to the current frame image according to a pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image, where the curve represents how the positional offset of a same bone node, corresponding to each frame image in the animation sequence, varies with the frame identifier;
a rendering module 604, configured to render the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image, and output a rendered animation sequence, where the rendered animation sequence includes the rendered image corresponding to the current frame image.
Optionally, the determining module 603 is specifically configured to determine, based on the correspondence between the first coordinates on the curve and the frame identifier of the current frame image, the initial offset of each bone node corresponding to the current frame image according to the offset corresponding to each first coordinate on the curve; and determine the offset position information of each bone node corresponding to the current frame image according to the initial offset and the initial position information of each bone node corresponding to the current frame image.
Optionally, the determining module 603 is further specifically configured to determine, according to the positional relationship between the bone nodes corresponding to the current frame image and their initial position information, initial position information of the associated bone nodes preceding the currently to-be-processed bone node, where the currently to-be-processed bone node is any bone node among the bone nodes; determine the reference position information of the currently to-be-processed bone node according to the initial position information of the associated bone nodes and the initial position information of the currently to-be-processed bone node, and replace the initial position information of the currently to-be-processed bone node with the reference position information; and determine the offset position information of the currently to-be-processed bone node corresponding to the current frame image according to the reference position information and the initial offset of the currently to-be-processed bone node.
Optionally, the determining module 603 is further specifically configured to determine the offset position information of the currently to-be-processed bone node corresponding to the current frame image according to the reference position information of the currently to-be-processed bone node, the initial offset, and an influence parameter, where the influence parameter includes at least one of the following: a game scene parameter, a delay parameter corresponding to the bone node, and a noise parameter corresponding to the bone node.
Optionally, the first obtaining module 601 is further configured to obtain, according to the identifier of the currently to-be-processed bone node, the delay parameter and the noise parameter corresponding to the currently to-be-processed bone node.
Optionally, the first obtaining module 601 is further configured to obtain game scene parameters according to the state information of the current game scene, where the game scene parameters indicate parameters affecting the fluttering effect.
Optionally, the determining module 603 is further specifically configured to determine the curve corresponding to the object to be rendered according to a pre-constructed mapping relationship between curves and objects; and determine the offset position information of each bone node corresponding to the current frame image according to the curve corresponding to the object to be rendered, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image.
Optionally, the rendering module 604 is specifically configured to superpose the initial position information and the offset position information of the same bone node corresponding to the current frame image to obtain the target position information of each bone node corresponding to the current frame image; and render the animation sequence to be rendered according to the target position information of each bone node corresponding to the current frame image.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), one or more field-programmable gate arrays (FPGA), or the like. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 7, the electronic device may include: a processor 701, a storage medium 702, and a bus 703. The storage medium 702 stores machine-readable instructions executable by the processor 701; when the electronic device is operating, the processor 701 and the storage medium 702 communicate over the bus 703, and the processor 701 executes the machine-readable instructions to perform the following steps:
in one possible embodiment, the processor 701, when executing the animation generation method, is specifically configured to: acquiring a current frame image in an animation sequence to be rendered, wherein the current frame image comprises at least one object to be rendered; acquiring initial position information of each bone node included in a bone node set corresponding to an object to be rendered on a current frame image according to the frame identification of the current frame image; determining offset position information of each skeleton node corresponding to the current frame image according to a pre-constructed curve, initial position information of each skeleton node corresponding to the current frame image and frame identification of the current frame image, wherein the curve is used for representing the offset of the position of the same skeleton node corresponding to each frame image in an animation sequence along with the change of the frame identification; and rendering the animation sequence to be rendered according to the initial position information and the offset position information of each skeleton node corresponding to the current frame image, and outputting a rendered animation sequence, wherein the rendered animation sequence comprises the image corresponding to the current frame image after rendering.
In one possible embodiment, the processor 701, when executing the animation generation method, is specifically configured to: based on the corresponding relation between the first coordinates on the curve and the frame identification of the current frame image, determining the initial offset of each skeleton node corresponding to the current frame image according to the offset corresponding to each first coordinate on the curve; and determining the offset position information of each bone node corresponding to the current frame image according to the initial offset and the initial position information of each bone node corresponding to the current frame image.
In one possible embodiment, the processor 701, when executing the animation generation method, is specifically configured to: according to the position relation and initial position information among the bone nodes corresponding to the current frame image, determining initial position information of the associated bone nodes before the current bone node to be processed, wherein the current bone node to be processed is any bone node among the bone nodes; determining reference position information of the bone node to be processed currently according to the initial position information of the associated bone node and the initial position information of the bone node to be processed currently, and replacing the initial position information of the bone node to be processed currently with the reference position information; and determining the offset position information of the currently-to-be-processed skeleton node corresponding to the current frame image according to the reference position information and the initial offset of the currently-to-be-processed skeleton node.
In one possible embodiment, the processor 701, when executing the animation generation method, is specifically configured to: determining the offset position information of the current to-be-processed bone node corresponding to the current frame image according to the reference position information of the current to-be-processed bone node, the initial offset and the influence parameter, wherein the influence parameter comprises at least one of the following components: game scene parameters, delay parameters corresponding to skeleton nodes, and noise parameters corresponding to skeleton nodes.
In one possible embodiment, the processor 701, when executing the animation generation method, is specifically configured to: and acquiring delay parameters and noise parameters corresponding to the current bone nodes to be processed according to the identification of the current bone nodes to be processed.
In one possible embodiment, when executing the animation generation method, the processor 701 is specifically configured to: acquire the game scene parameter according to state information of the current game scene, where the game scene parameter indicates a parameter that affects the waving effect.
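As a sketch of deriving the game scene parameter from scene state, one might normalise a wind reading into a scaling factor; the field names (wind_speed, indoors) and the normalisation are assumptions for illustration:

```python
# Turn the current scene state into a single scene parameter in [0, 1].
def game_scene_parameter(scene_state):
    if scene_state.get("indoors", False):
        return 0.0                    # no wind indoors: effect suppressed
    wind = scene_state.get("wind_speed", 0.0)
    return min(1.0, wind / 10.0)      # normalise wind strength into [0, 1]

print(game_scene_parameter({"wind_speed": 6.0, "indoors": False}))
```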
In one possible embodiment, when executing the animation generation method, the processor 701 is specifically configured to: determine the curve corresponding to the object to be rendered according to a pre-constructed mapping relationship between curves and objects; and determine the offset position information of each bone node corresponding to the current frame image according to the curve corresponding to the object to be rendered, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image.
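The mapping relationship can be sketched as a simple registry; the object names, curve data, and fallback behaviour below are assumed for illustration:

```python
# Registry mapping objects to be rendered to their pre-constructed curves.
DEFAULT_CURVE = [0.0]  # flat curve: no offset for unmapped objects
CURVE_BY_OBJECT = {
    "cloak": [0.0, 0.10, 0.0, -0.10, 0.0],  # large, slow sway
    "hair":  [0.0, 0.04, 0.0, -0.04, 0.0],  # small, quick flutter
}

def curve_for(object_id):
    return CURVE_BY_OBJECT.get(object_id, DEFAULT_CURVE)

print(curve_for("cloak"))
```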
In one possible embodiment, when executing the animation generation method, the processor 701 is specifically configured to: superpose the initial position information and the offset position information of the same bone node corresponding to the current frame image to obtain target position information of each bone node corresponding to the current frame image; and render the animation sequence to be rendered according to the target position information of each bone node corresponding to the current frame image, and output the rendered animation sequence.
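The superposition step amounts to a per-bone component-wise addition before the frame is handed to the renderer, as in this sketch (data values assumed):

```python
# Target position = initial position + offset position, per bone node.
def target_positions(initial, offsets):
    # initial, offsets: {bone_id: (x, y, z)} for the current frame image.
    return {bone: tuple(p + o for p, o in zip(initial[bone], offsets[bone]))
            for bone in initial}

initial = {"root": (0.0, 0.0, 0.0), "tip": (0.0, 1.0, 0.0)}
offsets = {"root": (0.0, 0.0, 0.0), "tip": (0.08, 0.0, 0.0)}
print(target_positions(initial, offsets))  # fed to the renderer per frame
```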
Optionally, the present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the animation generation method described above. The possible embodiments set out above for the processor 701 (acquiring the current frame image and the initial position information of each bone node, determining the initial offset from the first coordinates on the curve, deriving reference position information from the associated bone node, applying the influence parameters, selecting the curve mapped to the object to be rendered, and superposing initial and offset position information to obtain the target position information used for rendering) apply equally to the computer program when executed by the processor.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division into units is merely a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Further, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
It should also be noted that like reference numerals and letters denote like items in the figures; once an item has been defined in one figure, it need not be further defined or explained in subsequent figures. The above description covers only preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and variations to the present application. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall fall within its protection scope.

Claims (11)

1. A method of animation generation, the method comprising:
acquiring a current frame image in an animation sequence to be rendered, wherein the current frame image comprises at least one object to be rendered;
acquiring, according to the frame identifier of the current frame image, initial position information of each bone node included in a bone node set corresponding to the object to be rendered on the current frame image;
determining offset position information of each bone node corresponding to the current frame image according to a pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image, wherein the curve represents how the positional offset of a same bone node varies with the frame identifier across the frame images of the animation sequence;
and rendering the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image, and outputting a rendered animation sequence, wherein the rendered animation sequence comprises a rendered image corresponding to the current frame image.
2. The method according to claim 1, wherein determining the offset position information of each bone node corresponding to the current frame image according to the pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image comprises:
based on the correspondence between first coordinates on the curve and the frame identifier of the current frame image, determining the initial offset of each bone node corresponding to the current frame image from the offsets corresponding to the first coordinates on the curve;
and determining the offset position information of each bone node corresponding to the current frame image according to the initial offset and the initial position information of each bone node corresponding to the current frame image.
3. The method according to claim 2, wherein determining the offset position information of each bone node corresponding to the current frame image according to the initial offset and the initial position information of each bone node corresponding to the current frame image comprises:
determining, according to the positional relationship among the bone nodes corresponding to the current frame image and their initial position information, initial position information of an associated bone node preceding a bone node currently to be processed, wherein the bone node currently to be processed is any one of the bone nodes;
determining reference position information of the bone node currently to be processed according to the initial position information of the associated bone node and the initial position information of the bone node currently to be processed, and replacing the initial position information of the bone node currently to be processed with the reference position information;
and determining the offset position information, corresponding to the current frame image, of the bone node currently to be processed according to its reference position information and the initial offset.
4. The method according to claim 3, wherein determining the offset position information of the bone node currently to be processed corresponding to the current frame image according to the reference position information of the bone node currently to be processed and the initial offset comprises:
determining the offset position information of the bone node currently to be processed corresponding to the current frame image according to the reference position information of the bone node currently to be processed, the initial offset, and an influence parameter, wherein the influence parameter comprises at least one of: a game scene parameter, a delay parameter corresponding to the bone node, and a noise parameter corresponding to the bone node.
5. The method according to claim 4, wherein before determining the offset position information of the bone node currently to be processed corresponding to the current frame image according to the reference position information of the bone node currently to be processed, the initial offset, and the influence parameter, the method further comprises:
acquiring the delay parameter and the noise parameter corresponding to the bone node currently to be processed according to the identifier of the bone node currently to be processed.
6. The method according to claim 4, wherein before determining the offset position information of the bone node currently to be processed corresponding to the current frame image according to the reference position information of the bone node currently to be processed, the initial offset, and the influence parameter, the method further comprises:
acquiring the game scene parameter according to state information of a current game scene, wherein the game scene parameter indicates a parameter that affects the waving effect.
7. The method according to any one of claims 1 to 6, wherein determining the offset position information of each bone node corresponding to the current frame image according to the pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image comprises:
determining a curve corresponding to the object to be rendered according to a pre-constructed mapping relationship between curves and objects;
and determining the offset position information of each bone node corresponding to the current frame image according to the curve corresponding to the object to be rendered, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image.
8. The method according to any one of claims 1 to 6, wherein rendering the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image and outputting the rendered animation sequence comprises:
superposing the initial position information and the offset position information of the same bone node corresponding to the current frame image to obtain target position information of each bone node corresponding to the current frame image;
and rendering the animation sequence to be rendered according to the target position information of each bone node corresponding to the current frame image, and outputting the rendered animation sequence.
9. An animation generation device, the device comprising:
a first acquisition module, configured to acquire a current frame image in an animation sequence to be rendered, wherein the current frame image comprises at least one object to be rendered;
a second acquisition module, configured to acquire, according to the frame identifier of the current frame image, initial position information of each bone node included in a bone node set corresponding to the object to be rendered on the current frame image;
a determining module, configured to determine offset position information of each bone node corresponding to the current frame image according to a pre-constructed curve, the initial position information of each bone node corresponding to the current frame image, and the frame identifier of the current frame image, wherein the curve represents how the positional offset of a same bone node varies with the frame identifier across the frame images of the animation sequence;
and a rendering module, configured to render the animation sequence to be rendered according to the initial position information and the offset position information of each bone node corresponding to the current frame image, and output a rendered animation sequence, wherein the rendered animation sequence comprises a rendered image corresponding to the current frame image.
10. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the animation generation method of any of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, performs the steps of the animation generation method according to any of claims 1-8.