WO2023015921A1 - Animation data processing method, non-volatile storage medium and electronic device - Google Patents

Animation data processing method, non-volatile storage medium and electronic device

Info

Publication number
WO2023015921A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
model
character
animation
description information
Prior art date
Application number
PCT/CN2022/085465
Other languages
English (en)
French (fr)
Inventor
吴雪平
唐子豪
关子敬
Original Assignee
网易(杭州)网络有限公司
Priority date
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司 filed Critical 网易(杭州)网络有限公司
Publication of WO2023015921A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/822 Strategy games; Role-playing games
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F 2300/807 Role playing or strategy games

Definitions

  • the present disclosure relates to the field of computers, in particular, to an animation data processing method, a non-volatile storage medium and an electronic device.
  • the skeletal animation used in the game scenes provided in the related art usually has the following problems: for example, when actions transition, the skeleton of the virtual character model has to fuse the poses presented by multiple frames of character animation.
  • Embodiments of the present disclosure provide an animation data processing method, a non-volatile storage medium, and an electronic device, so as to at least solve the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
  • a method for processing animation data including:
  • acquire the target motion description information of the target virtual character model, wherein the target motion description information records the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton; input the target motion description information into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, wherein the target neural network model is a model obtained by machine learning training using the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture; and drive the target virtual character model to perform corresponding actions according to the target animation data.
  • the key node bone is a terminal bone in the complete skeleton.
  • obtaining the target motion description information of the target avatar model includes: obtaining the basic motion description information of the base avatar model, wherein the base avatar model and the target avatar model are character models of the same type; determining the corresponding relationship between the base avatar model and the target avatar model; and adjusting the basic motion description information according to the corresponding relationship to obtain the target motion description information of the target avatar model.
  • character models of the same type indicate that the base avatar model and the target avatar model belong to the same biological classification.
  • determining the corresponding relationship between the base avatar model and the target avatar model includes: determining the proportional relationship between the base avatar model and the target avatar model according to the base model size of the base avatar model and the target model size of the target avatar model.
  • adjusting the basic motion description information according to the corresponding relationship to obtain the target motion description information of the target avatar model includes: adjusting the bone end position information in the basic motion description information according to the proportional relationship to obtain the bone end position information in the target motion description information.
  • obtaining the basic motion description information of the basic avatar model includes: obtaining original animation data; and determining the basic motion description information of the basic avatar model from the original animation data according to the calculation method of the motion description information corresponding to the basic avatar model.
  • an animation data processing device including:
  • the acquiring module is configured to acquire the target motion description information of the target virtual character model, wherein the target motion description information records the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton; the processing module is configured to input the target motion description information into the target neural network model corresponding to the target virtual character model, so as to obtain the target animation data of the target virtual character model, wherein the target neural network model is a model obtained by machine learning training using the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture; and the driving module is configured to drive the target virtual character model to perform corresponding actions according to the target animation data.
  • the key node bone is a terminal bone in the complete skeleton.
  • the acquisition module is configured to acquire the basic motion description information of the basic avatar model, wherein the basic avatar model and the target avatar model are character models of the same type; determine the corresponding relationship between the basic avatar model and the target avatar model; and adjust the basic motion description information according to the corresponding relationship to obtain the target motion description information of the target avatar model.
  • character models of the same type indicate that the base avatar model and the target avatar model belong to the same biological classification.
  • the acquisition module is configured to determine the proportional relationship between the base avatar model and the target avatar model according to the base model size of the base avatar model and the target model size of the target avatar model.
  • the acquiring module is configured to adjust the bone end position information in the basic motion description information according to the proportional relationship, so as to acquire the bone end position information in the target motion description information.
  • the acquisition module is configured to acquire original animation data; determine the basic motion description information of the basic virtual character model from the original animation data according to the calculation method of the motion description information corresponding to the basic virtual character model.
  • a non-volatile storage medium is provided, in which a computer program is stored, wherein the computer program is configured to execute the animation data processing method in any one of the above when running.
  • a processor is provided, and the processor is configured to run a program, wherein the program is configured to execute the animation data processing method in any one of the above when running.
  • an electronic device including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the animation data processing method in any one of the above.
  • In the embodiments of the present disclosure, the target motion description information of the target virtual character model is acquired, where the target motion description information records the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton. The target motion description information is input into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine learning training using the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture. The target virtual character model is then driven to perform corresponding actions according to the target animation data. In this way, the position information of the key node bones recorded in the target motion description information is used to restore the actions performed by the target virtual character model: the target motion description information is input into the target neural network model for prediction, and the target animation data of the target virtual character model is obtained, thereby driving the target virtual character model to perform the corresponding actions. This achieves the technical effects of effectively reducing the loading time of skeletal animation and reducing the memory it occupies, and further solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
  • FIG. 1 is a block diagram of the hardware structure of a mobile terminal to which an animation data processing method according to an embodiment of the present disclosure is applied;
  • FIG. 2 is a flowchart of an animation data processing method according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of generating motion description information according to an optional embodiment of the present disclosure;
  • FIG. 4 is a flowchart of acquiring target animation data of a virtual character model based on a neural network model according to an optional embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of predicting the complete pose of a virtual character model based on a neural network model according to an optional embodiment of the present disclosure;
  • FIG. 6 is a structural block diagram of an animation data processing device according to an embodiment of the present disclosure.
  • Skeletal animation: a kind of model animation (model animation includes vertex animation and skeletal animation); skeletal animation usually includes two parts: bone data and skin data.
  • the interconnected bones make up the skeleton structure, and the animation is generated by changing the orientation and position of the bones.
  • Skinned mesh: refers to attaching (binding) the vertices of a mesh to bones, where each vertex can be controlled by multiple bones, so that vertices at the joints are pulled by parent and child bones at the same time and change position so as to eliminate gaps. Skinning is defined by the weights with which each vertex is influenced by each bone.
  • Neural network: in the field of machine learning and cognitive science, a mathematical or computational model that imitates the structure and function of biological neural networks (such as the central nervous system of animals, in particular the brain), and is used to estimate or approximate functions.
  • IK: inverse kinematics, i.e., solving the positions of the bones in a chain from the positions of the chain's ends.
  • Animation fusion refers to a processing method that enables multi-frame animation clips to have an effect on the final pose of the virtual character model. More precisely, multiple input poses are combined to produce the final pose of the skeleton.
  • Animation Retargeting It is a function that allows animation to be reused between virtual character models that share the same skeleton resource but have greatly different proportions. Retargeting prevents an animated skeleton from losing scale or deforming unnecessarily when using animations from differently shaped avatar models.
  • an embodiment of a method for processing animation data is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system, such as on a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown or described herein.
  • the animation data processing method in one of the embodiments of the present disclosure can run on a terminal device or a server.
  • the terminal device may be a local terminal device.
  • the animation data processing method runs on the server, the method can be implemented and executed based on a cloud interactive system, wherein the cloud interactive system includes a server and a client device.
  • cloud games refer to game methods based on cloud computing.
  • the client device can be a display device with a data transmission function close to the user side, such as a mobile terminal, a TV, a computer, or a palmtop computer; however, the terminal device that performs the information processing is the cloud game server in the cloud.
  • When playing a game, the player operates the client device to send operation commands to the cloud game server; the cloud game server runs the game according to the operation commands, encodes and compresses the game screen and other data, and returns them to the client device through the network, and the client device decodes and outputs the game screen.
  • the terminal device may be a local terminal device.
  • the local terminal device stores game programs and is used to present game screens.
  • the local terminal device is used to interact with the player through the graphical user interface, that is, the conventional electronic device downloads and installs the game program and runs it.
  • the local terminal device may provide the graphical user interface to the player in various manners, for example, rendering and displaying it on the display screen of the terminal, or providing it to the player through holographic projection.
  • the local terminal device may include a display screen and a processor, the display screen is used to present a graphical user interface, the graphical user interface includes a game screen, and the processor is used to run the game, generate a graphical user interface, and control the graphical user interface displayed on the display.
  • an embodiment of the present disclosure provides a method for processing animation data, which provides a graphical user interface through a terminal device, where the terminal device may be the aforementioned local terminal device or the aforementioned client device in the cloud interactive system.
  • the local terminal device on which the method runs may be a mobile terminal such as a smart phone (for example, an Android phone or an iOS phone), a tablet computer, a handheld computer, a mobile Internet device (MID), a PAD, a game console, or other terminal equipment.
  • FIG. 1 is a block diagram of the hardware structure of a mobile terminal to which an animation data processing method according to an embodiment of the present disclosure is applied. As shown in FIG. 1, the mobile terminal may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microprocessor (MCU), a programmable logic device (FPGA), a neural network processor (NPU), a tensor processor (TPU), or an artificial intelligence (AI) processor) and a memory 104 for storing data.
  • the aforementioned mobile terminal may further include a transmission device 106 for communication functions, an input-output device 108, and a display device 110.
  • the structure shown in FIG. 1 is only for illustration, and it does not limit the structure of the above mobile terminal.
  • the mobile terminal may also include more or fewer components than those shown in FIG. 1 , or have a different configuration from that shown in FIG. 1 .
  • the memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the animation data processing method in the embodiments of the present disclosure; the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, that is, implements the above-mentioned animation data processing method.
  • the memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include a memory that is remotely located relative to the processor 102, and these remote memories may be connected to the mobile terminal through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Transmission device 106 is used to receive or transmit data via a network.
  • the specific example of the above network may include a wireless network provided by the communication provider of the mobile terminal.
  • the transmission device 106 includes a network interface controller (NIC for short), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet in a wireless manner.
  • the input to the input and output device 108 may come from multiple human interface devices (Human Interface Device, HID for short), for example: a keyboard and mouse, a gamepad, or other special game controllers (such as a steering wheel, fishing rod, dance mat, or remote control).
  • some human interface devices can also provide output functions, such as: force feedback and vibration of gamepads, audio output of controllers, etc.
  • the display device 110 may be, for example, a head-up display (HUD), a touch-screen liquid crystal display (LCD), and a touch display (also referred to as a "touch screen” or “touch display”).
  • the liquid crystal display may enable a user to interact with a user interface of the mobile terminal.
  • the above-mentioned mobile terminal has a graphical user interface (GUI), and the user can perform human-computer interaction with the GUI through finger contacts and/or gestures on the touch-sensitive surface. The human-computer interaction functions here optionally include the following interactions: creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving emails, call interfaces, playing digital video, playing digital music and/or web browsing, etc. The executable instructions for performing the above human-computer interaction functions are configured/stored in one or more processor-executable computer program products or readable storage media.
  • FIG. 2 is a flowchart of an animation data processing method according to an embodiment of the present disclosure. As shown in FIG. 2, the method includes the following steps:
  • Step S20: acquire the target motion description information of the target avatar model, wherein the target motion description information records the position information of the key node bones of the character skeleton of the target avatar model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton.
  • the aforementioned target avatar model can be a virtual human model, a virtual animal model, and the like.
  • the above target motion description information is used to record the position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation.
  • the key node bones are part of the complete bones (that is, the whole body bones) of the character skeleton.
  • the target control touched by the game player can be determined in response to the touch operation performed on the graphical user interface of the mobile terminal, and a control instruction corresponding to the target control can be generated thereby.
  • the mobile terminal controls the target avatar model to perform a corresponding action according to the generated control instruction, so as to acquire the corresponding target motion description information when it detects that the target avatar model performs the corresponding action.
  • for example, the mobile terminal controls the target avatar model to perform a corresponding jump action according to a generated jump instruction, so as to acquire the corresponding target motion description information when it detects that the target avatar model performs the jump action.
  • the target motion description information will be input into the target neural network model corresponding to the target avatar model to obtain target animation data of the target avatar model, and then drive the target avatar model to perform corresponding actions according to the target animation data.
  • the above-mentioned key node bone is a terminal bone in the complete skeleton.
  • the target motion description information can be a series of key points on the character skeleton of the virtual character model, which record the position data of the skeleton ends (for a human skeleton, the ends are usually the left and right wrists, the left and right ankles, the hips, and the head). In this way, the animation data is separated from any specific skeleton, so that different stylized actions can be generated for different skeletons. Since the target motion description information mainly records a small amount of skeleton-end position data, it greatly reduces the storage space occupied by skeletal animation data and reduces the amount of animation data that needs to be loaded.
  • the animation data is greatly simplified, and because it is separated from the specific skeleton information, the animation data is universal and can be applied to the character skeletons of other similar virtual character models.
  • since the target animation data is stored frame by frame, and each frame records the position data of the skeleton ends of the target avatar model in the current character pose, the corresponding skeleton-end position data can be obtained from the target animation data according to the time information of the current playback progress, as sketched below.
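  • As a concrete illustration of this frame-by-frame storage, the following minimal sketch shows one possible layout of motion description information and a lookup by playback time; the bone names, the MotionFrame structure, and the sample_frame helper are assumptions for illustration and are not specified by the present disclosure.

```python
import bisect
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Assumed set of key node (terminal) bones for a humanoid skeleton, matching
# the examples above: left/right wrists, left/right ankles, hips and head.
KEY_BONES = ("left_wrist", "right_wrist", "left_ankle", "right_ankle", "hips", "head")

@dataclass
class MotionFrame:
    """One frame of motion description information: positions of the key
    node bones only, not of the complete skeleton."""
    time: float                                       # playback time in seconds
    positions: Dict[str, Tuple[float, float, float]]  # bone name -> (x, y, z)

def sample_frame(frames: List[MotionFrame], playback_time: float) -> MotionFrame:
    """Fetch the skeleton-end position data for the current playback progress:
    the latest frame whose timestamp does not exceed the playback time."""
    times = [f.time for f in frames]
    idx = max(bisect.bisect_right(times, playback_time) - 1, 0)
    return frames[idx]
```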
  • Step S21: input the target motion description information into the target neural network model corresponding to the target avatar model to obtain the target animation data of the target avatar model, wherein the target neural network model is a model obtained by training with the skeletal animation training data corresponding to the target avatar model, and the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture.
  • the above-mentioned target neural network model is a model obtained by machine learning training using skeletal animation training data corresponding to the target virtual character model.
  • the above target neural network model can be obtained through training in advance.
  • the character skeletons of different virtual character models correspond to different target neural network models.
  • the target animation data includes: multi-frame target character animation, each frame of the multi-frame target character animation records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture.
  • a small amount of position data of the end of the skeleton recorded by the target motion description information can restore the actions performed by the target avatar model.
  • the target neural network model can be used to predict the complete pose of each frame of animation.
  • by inputting the skeleton-end position data after pose matching processing into the target neural network model corresponding to the character skeleton of the target avatar model, all the bone positions of the target avatar model can be predicted, and the full pose in each frame of animation can be restored.
  • since the neural network model corresponding to the character skeleton of the target avatar model is only used to predict the positions of all bones of that target avatar model, prediction for different character skeletons requires training respective neural network models.
  • Accurate prediction by the target neural network model relies on a large amount of animation training data as input. Using methods such as motion capture, a large amount of animation training data can be generated for the target virtual character model, that is, original animation data that records all bone positions. For each neural network model, the more animation data used for training and the richer its types, the better the training effect; a sketch of such a per-skeleton model follows.
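  • The disclosure does not fix a network architecture; purely as a hedged sketch, a small multilayer perceptron that maps the key-bone positions of one frame to the positions of the complete skeleton is one plausible realization of the per-skeleton model described above. The layer sizes, bone counts, and training loop below are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_KEY_BONES = 6     # terminal bones recorded in the motion description information
NUM_FULL_BONES = 52   # assumed size of the complete skeleton; varies per character

# One network per character skeleton, as described above.
model = nn.Sequential(
    nn.Linear(NUM_KEY_BONES * 3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, NUM_FULL_BONES * 3),
)

def train_step(optimizer, key_positions, full_positions):
    """One supervised step: key_positions (batch, NUM_KEY_BONES*3) come from
    the motion description information, full_positions (batch, NUM_FULL_BONES*3)
    from the original animation data captured for this character skeleton."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(key_positions), full_positions)
    loss.backward()
    optimizer.step()
    return loss.item()

def predict_full_pose(key_positions: torch.Tensor) -> torch.Tensor:
    """Restore the complete pose of one frame from pose-matched end positions."""
    with torch.no_grad():
        return model(key_positions.unsqueeze(0)).squeeze(0).view(NUM_FULL_BONES, 3)

# Example wiring (data loading omitted):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = train_step(optimizer, key_batch, full_batch)
```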
  • Step S22: drive the target avatar model to perform corresponding actions according to the target animation data.
  • Through the above steps, the target motion description information of the target avatar model can be acquired, where the target motion description information records the position information of the key node bones of the character skeleton of the target avatar model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton. By inputting the target motion description information into the target neural network model corresponding to the target avatar model, the target animation data of the target avatar model is obtained, where the target neural network model is a model obtained by machine learning training using the skeletal animation training data corresponding to the target avatar model, and the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture. The target avatar model is then driven to perform corresponding actions according to the target animation data. In this way, the position information of the key node bones recorded in the target motion description information is used to restore the actions performed by the target avatar model: the information is input into the target neural network model for prediction, the target animation data is obtained, and the target avatar model is driven to perform the corresponding actions. This achieves the technical effects of effectively reducing the loading time of skeletal animation and reducing the memory it occupies, and further solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
  • the above step S20 of acquiring the target motion description information of the target avatar model may include the following execution steps:
  • Step S200: acquire the basic motion description information of the basic avatar model, wherein the basic avatar model and the target avatar model are character models of the same type;
  • the aforementioned character models of the same type indicate that the base avatar model and the target avatar model belong to the same biological classification.
  • the base avatar model and the target avatar model belong to the same character classification, for example: the base avatar model is a virtual adult model, and the target avatar model is a virtual child model.
  • the base avatar model and the target avatar model belong to the same animal category, for example: the base avatar model is a virtual cheetah model, and the target avatar model is a virtual hunting dog model.
  • the original animation data can be obtained first, and then the basic motion description information of the basic avatar model can be determined from the original animation data according to the calculation method of the motion description information corresponding to the basic avatar model. Specifically, first, the original animation data can be collected by means such as motion capture; second, the bone joint positions are obtained from the collected original animation data; and then, the skeleton-end position data is calculated from the bone joint positions. It should be noted that the above motion description information can be applied to the character skeletons of similar character models.
  • FIG. 3 is a schematic diagram of generating motion description information according to an optional embodiment of the present disclosure. The right side shows the original animation data of the virtual character model collected by means such as motion capture; the bone joint positions are obtained from the collected original animation data, and the skeleton-end position data displayed on the left (that is, the key points contained in the motion description information) is calculated from the bone joint positions.
  • the key points are obtained through a preset calculation method, which specifies the bones and joints involved in the calculation as well as the calculation procedure; the same character skeleton uses the same set of calculation methods to generate key points. For example, virtual human models share one set of calculation methods and virtual reptile models share another set, and the positions of the foot key points are obtained from the foot joints of the human skeleton through a preset calculation method, as in the example below.
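  • The preset calculation method itself is not disclosed; as a hedged example only, a foot key point could be derived from the ankle and toe joints by a fixed weighted average, with each skeleton type sharing its own set of such formulas.

```python
import numpy as np

def foot_key_point(ankle: np.ndarray, toe: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Hypothetical preset calculation: derive the foot key point from two
    specified joints. A weighted average is one simple stand-in formula."""
    return weight * ankle + (1.0 - weight) * toe

# Usage with made-up joint positions:
left_foot = foot_key_point(np.array([0.1, 0.1, 0.0]), np.array([0.1, 0.25, 0.0]))
```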
  • Step S201: determine the corresponding relationship between the basic avatar model and the target avatar model.
  • animation effects can be presented in different styles on different character skeletons of the same type of virtual character model. Since the neural network model is trained on part of the original animation data of the avatar model, its output is affected by the training data, which helps maintain the action style of that avatar model. For example, for the same running animation, the action style of a virtual adult character model will differ from that of a virtual child character model, so as to respectively reflect the movement characteristics of an adult and of a child.
  • the proportional relationship between the base avatar model and the target avatar model can be determined according to the base model size of the base avatar model and the target model size of the target avatar model.
  • Step S202: adjust the basic motion description information according to the corresponding relationship to obtain the target motion description information of the target avatar model.
  • the bone end position information in the basic motion description information can be adjusted according to the proportional relationship to obtain the bone end position information in the target motion description information.
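  • As a minimal sketch of this adjustment, assuming a single uniform size ratio between the two models (per-limb ratios would work analogously), the bone-end positions can be rescaled as follows; the function name and data layout are illustrative.

```python
def retarget_key_positions(base_positions, scale):
    """base_positions: dict of bone name -> (x, y, z) from the base model's
    motion description information; scale: target model size / base model size."""
    return {bone: tuple(scale * c for c in pos) for bone, pos in base_positions.items()}

# e.g. a virtual child model at 60% of the virtual adult model's size:
# child_positions = retarget_key_positions(adult_positions, 0.6)
```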
  • all bone positions of a specified avatar model can be obtained through neural network model prediction, forming the complete pose of the specified avatar model in each frame of animation. If the specified avatar model has additional constraints and restrictions in a specific game scene (for example, the left foot of the specified avatar model needs to step on the virtual ground in the game scene), then the specific pose of the specified avatar model needs to be corrected according to those constraints and restrictions.
  • the above-mentioned constraints and restrictions generally act on the skeleton-end position data; therefore, applying them is equivalent to re-determining the skeleton-end position data. After the skeleton-end position data is re-determined, an IK solve can be performed using the re-determined data to correct and re-determine the positions of all bones, as sketched below.
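  • The disclosure does not name a particular IK solver; as a simplified stand-in, the following sketch re-solves a single two-bone chain (e.g. hip, knee, foot) so that the foot reaches a constrained target such as the virtual ground, preserving both bone lengths.

```python
import numpy as np

def two_bone_ik(hip, knee, foot, target):
    """Analytic two-bone IK via the law of cosines. Returns corrected knee and
    foot positions; assumes the target is within reach of the chain."""
    l1 = np.linalg.norm(knee - hip)   # upper bone length
    l2 = np.linalg.norm(foot - knee)  # lower bone length
    to_target = target - hip
    dist = np.linalg.norm(to_target)
    if dist < 1e-6:
        return knee, foot             # degenerate target; leave pose unchanged
    dir_t = to_target / dist
    d = min(dist, l1 + l2 - 1e-6)     # clamp to the reachable range
    # Distance from the hip to the knee's projection on the hip->target axis.
    a = (l1 * l1 - l2 * l2 + d * d) / (2.0 * d)
    h = np.sqrt(max(l1 * l1 - a * a, 0.0))
    # Bend the knee in the plane suggested by its previous position.
    pole = knee - (hip + a * dir_t)
    pole -= np.dot(pole, dir_t) * dir_t
    norm = np.linalg.norm(pole)
    bend = pole / norm if norm > 1e-6 else np.array([0.0, 0.0, 1.0])
    return hip + a * dir_t + h * bend, hip + d * dir_t
```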
  • FIG. 4 is a flowchart of acquiring target animation data of a virtual character model based on a neural network model according to an optional embodiment of the present disclosure. As shown in FIG. 4, the process may include the following processing steps:
  • Step S402: use methods such as motion capture to generate a large amount of animation training data for the target avatar model, that is, original animation data recording all bone positions.
  • Step S404: since the target neural network model corresponding to the character skeleton of the target avatar model is only used to predict the positions of all bones of that target avatar model, prediction for different character skeletons requires training respective neural network models; accurate prediction relies on a large amount of animation training data as input.
  • Step S406: collect the basic animation data of the basic avatar model by means such as motion capture.
  • Step S408: acquire the motion description information generation algorithm.
  • Step S410: using the motion description information generation algorithm, first obtain the skeleton joint positions from the collected basic animation data, and then calculate the skeleton-end position data from the joint positions, that is, the basic motion description information.
  • Step S412: since the body proportions of the character skeletons of different avatar models differ, in order to apply the skeleton-end position data to different character skeletons of the same type of avatar model, pose matching needs to be performed on the skeleton-end position data according to the body proportion information to obtain the target motion description information.
  • Step S414: the position information of the key node bones recorded in the target motion description information is used to restore the actions performed by the target avatar model; by inputting the target motion description information into the target neural network model for prediction, the target animation data of the target avatar model can be obtained.
  • Step S416: judge whether the target avatar model has additional constraints and restrictive conditions in the specific game scene; if yes, go to step S418; if not, continue to execute step S420.
  • Step S418: correct the specific posture of the target avatar model according to the constraints and restrictions.
  • the above-mentioned constraints and restrictions generally act on the position data of the end of the skeleton, therefore, it is equivalent to re-determining the position data of the end of the skeleton.
  • the IK solution can be performed using the re-determined position data of the end of the skeleton to correct and re-determine the position of all bones.
  • Step S420: finally obtain the target animation data of the target avatar model, so as to drive the target avatar model to perform corresponding actions according to the target animation data.
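  • Tying the steps together, the following sketch expresses the S410-S420 pipeline with each stage passed in as a callable (the earlier sketches are possible implementations of the individual stages); the stage names are assumptions.

```python
def animate_target(sample, retarget, predict, correct, playback_time):
    """End-to-end sketch of steps S410-S420 for one frame."""
    key_frame = sample(playback_time)  # S410: sample motion description information
    matched = retarget(key_frame)      # S412: pose matching by body proportion
    full_pose = predict(matched)       # S414: network predicts all bone positions
    return correct(full_pose)          # S416-S418: constraint correction (e.g. IK)
```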
  • FIG. 5 is a schematic diagram of predicting the complete posture of a virtual character model based on a neural network model according to an optional embodiment of the present disclosure. As shown in FIG. 5, first, the skeleton-end position data (i.e., motion description information) of a particular frame is obtained by sampling the original animation data according to the time information of the current animation playback progress. Second, pose matching processing is performed on the sampled skeleton-end position data against the skeletons of virtual character model A and virtual character model B respectively, so as to obtain the matched end position data corresponding to virtual character model A and the matched end position data corresponding to virtual character model B.
  • first, the position information of the character skeleton ends of the target virtual character model in multiple consecutive frames of character animation can be obtained from the target animation data; second, animation fusion processing is performed on the obtained skeleton-end position information to obtain a fusion result; then, the target neural network model is used to analyze the fusion result so as to predict the fused animation data of the target virtual character model. That is, when the various animation poses of the virtual character model in multiple consecutive frames of animation need to be fused, the fusion calculation can be performed only on the skeleton-end position data, which speeds up fusion. When the virtual character model is in a complex motion state, it may be necessary to fuse the poses of up to a dozen frames of animation; by performing the fusion calculation only on the skeleton-end position data, as sketched below, the animation fusion speed can be greatly improved.
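  • A hedged sketch of this fast fusion path: only the small set of end-bone positions is blended across the input poses, after which the network reconstructs the full skeleton from the blended result. The weighted-average blend below is an assumption, as the disclosure does not specify the fusion formula.

```python
import numpy as np

def fuse_end_positions(poses, weights):
    """poses: list of dicts bone name -> np.ndarray(3), one per input animation;
    weights: blend weights of the same length, assumed to sum to 1."""
    bones = poses[0].keys()
    return {
        bone: sum(w * pose[bone] for w, pose in zip(weights, poses))
        for bone in bones
    }
```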
  • the method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present disclosure, in essence or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and contains several instructions to enable a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present disclosure.
  • an animation data processing device is also provided, which is used to implement the above embodiments and preferred implementation modes, and those that have been explained will not be repeated here.
  • the term "module” may be a combination of software and/or hardware that realizes a predetermined function.
  • although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
  • FIG. 6 is a structural block diagram of an animation data processing device according to one embodiment of the present disclosure.
  • the device includes: an acquisition module 10 configured to acquire the target motion description information of a target avatar model, wherein the target motion description information records the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton; a processing module 20 configured to input the target motion description information into the target neural network model corresponding to the target avatar model to obtain the target animation data of the target avatar model, wherein the target neural network model is a model obtained by machine learning training using the skeletal animation training data corresponding to the target avatar model, and the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture; and a driving module 30 configured to drive the target avatar model to perform corresponding actions according to the target animation data.
  • the key node bone is a terminal bone in the complete skeleton.
  • the acquisition module 10 is configured to acquire the basic motion description information of the basic avatar model, wherein the basic avatar model and the target avatar model are the same type of avatar model; determine the basic avatar model and the target avatar Correspondence of the models; adjust the basic motion description information according to the correspondence to obtain the target motion description information of the target avatar model.
  • character models of the same type indicate that the base avatar model and the target avatar model belong to the same biological classification.
  • the acquisition module 10 is configured to determine the proportional relationship between the base avatar model and the target avatar model according to the base model size of the base avatar model and the target model size of the target avatar model.
  • the obtaining module 10 is configured to adjust the bone end position information in the basic motion description information according to the proportional relationship, so as to obtain the bone end position information in the target motion description information.
  • the acquiring module 10 is configured to acquire original animation data; and determine the basic motion description information of the base avatar model from the original animation data according to the calculation method of motion description information corresponding to the base avatar model.
  • the above-mentioned modules can be implemented by software or hardware. For the latter, this can be achieved in, but is not limited to, the following manner: the above modules are all located in the same processor; or, the above modules are located in different processors in any combination.
  • Embodiments of the present disclosure also provide a non-volatile storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
  • the above-mentioned non-volatile storage medium may be configured to store a computer program for performing the following steps:
  • acquire the target motion description information of the target virtual character model, wherein the target motion description information records the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton;
  • input the target motion description information into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, wherein the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture;
  • drive the target virtual character model to perform corresponding actions according to the target animation data.
  • the key node bone is a terminal bone in the complete skeleton.
  • obtaining the target motion description information of the target avatar model includes: obtaining the basic motion description information of the base avatar model, wherein the base avatar model and the target avatar model are character models of the same type; determining the corresponding relationship between the base avatar model and the target avatar model; and adjusting the basic motion description information according to the corresponding relationship to obtain the target motion description information of the target avatar model.
  • character models of the same type indicate that the base avatar model and the target avatar model belong to the same biological classification.
  • determining the corresponding relationship between the base avatar model and the target avatar model includes: determining the proportional relationship between the base avatar model and the target avatar model according to the base model size of the base avatar model and the target model size of the target avatar model.
  • adjusting the basic motion description information according to the corresponding relationship to obtain the target motion description information of the target avatar model includes: adjusting the bone end position information in the basic motion description information according to the proportional relationship to obtain the bone end position information in the target motion description information.
  • obtaining the basic motion description information of the basic avatar model includes: obtaining original animation data; and determining the basic motion description information of the basic avatar model from the original animation data according to the calculation method of the motion description information corresponding to the basic avatar model.
  • the above-mentioned non-volatile storage medium may include, but is not limited to: a USB flash drive, a read-only memory (Read-Only Memory, ROM for short), a random access memory (Random Access Memory, RAM for short), a mobile hard disk, a magnetic disk, an optical disk, or other various media that can store computer programs.
  • In the embodiments of the present disclosure, the target motion description information of the target virtual character model is acquired, where the target motion description information records the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton. The target motion description information is input into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine learning training using the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture. The target virtual character model is then driven to perform corresponding actions according to the target animation data. In this way, the position information of the key node bones recorded in the target motion description information is used to restore the actions performed by the target virtual character model: the target motion description information is input into the target neural network model for prediction, and the target animation data of the target virtual character model is obtained, thereby driving the target virtual character model to perform the corresponding actions. This achieves the technical effects of effectively reducing the loading time of skeletal animation and reducing the memory it occupies, and further solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
  • Embodiments of the present disclosure also provide an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • the above-mentioned processor may be configured to execute the following steps through a computer program:
  • acquire the target motion description information of the target virtual character model, wherein the target motion description information records the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton;
  • input the target motion description information into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, wherein the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture;
  • drive the target virtual character model to perform corresponding actions according to the target animation data.
  • the key node bone is a terminal bone in the complete skeleton.
  • obtaining the target motion description information of the target avatar model includes: obtaining the basic motion description information of the base avatar model, wherein the base avatar model and the target avatar model are character models of the same type; determining the corresponding relationship between the base avatar model and the target avatar model; and adjusting the basic motion description information according to the corresponding relationship to obtain the target motion description information of the target avatar model.
  • character models of the same type indicate that the base avatar model and the target avatar model belong to the same biological classification.
  • determining the corresponding relationship between the base avatar model and the target avatar model includes: determining the proportional relationship between the base avatar model and the target avatar model according to the base model size of the base avatar model and the target model size of the target avatar model.
  • adjusting the basic motion description information according to the corresponding relationship to obtain the target motion description information of the target avatar model includes: adjusting the bone end position information in the basic motion description information according to the proportional relationship to obtain the bone end position information in the target motion description information.
  • obtaining the basic motion description information of the basic avatar model includes: obtaining original animation data; and determining the basic motion description information of the basic avatar model from the original animation data according to the calculation method of the motion description information corresponding to the basic avatar model.
  • In the embodiments of the present disclosure, the target motion description information of the target virtual character model is acquired, where the target motion description information records the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are part of the bones in the complete skeleton of the character skeleton. The target motion description information is input into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine learning training using the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multiple frames of target character animation, each frame of which records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture. The target virtual character model is then driven to perform corresponding actions according to the target animation data. In this way, the position information of the key node bones recorded in the target motion description information is used to restore the actions performed by the target virtual character model: the target motion description information is input into the target neural network model for prediction, and the target animation data of the target virtual character model is obtained, thereby driving the target virtual character model to perform the corresponding actions. This achieves the technical effects of effectively reducing the loading time of skeletal animation and reducing the memory it occupies, and further solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
  • the disclosed technical content can be realized in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is merely a logical function division; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between units or modules may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present disclosure, in essence or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, a network device, or the like) execute all or some of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: U disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), mobile hard disk, magnetic disk or optical disc, etc., which can store program codes. .

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure discloses an animation data processing method, a non-volatile storage medium, and an electronic device. The method includes: acquiring target motion description information of a target virtual character model, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton; inputting the target motion description information into a target neural network model corresponding to the target virtual character model to obtain target animation data of the target virtual character model; and driving the target virtual character model to perform corresponding actions according to the target animation data. The present disclosure solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.

Description

Animation data processing method, non-volatile storage medium, and electronic device
Cross-Reference to Related Applications
The present disclosure claims priority to Chinese patent application No. 202110920138.3, filed on August 11, 2021 and entitled "Animation Data Processing Method, Non-volatile Storage Medium, and Electronic Device", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computers, and in particular to an animation data processing method, a non-volatile storage medium, and an electronic device.
Background
At present, the skeletal animation used in game scenes provided in the related art usually has the following problems:
(1) Game character animations are numerous. Whether different virtual character models perform the same action or different actions, corresponding skeletal animation data describing the position information of all bones must be stored for each virtual character model, which occupies huge storage space. If animation resources are loaded using approaches such as Motion Matching, this not only takes a long loading time but also occupies too much memory.
(2) Compression of skeletal animation data degrades the precision of the end bones of the skeletal animation, while storing the data uncompressed occupies a large amount of storage space. After the skeletal animation data is compressed, errors exist in the relative positions of the bones. Since the position information of a bone is defined relative to its parent bone, the errors accumulate level by level, producing a large error at the ends of the skeleton.
(3) When a virtual character model transitions between actions, the poses presented by multiple frames of character animation are fused on the skeleton of the virtual character model. Especially for complex action transitions, the poses of dozens of character animations may participate in the fusion at the same time, which involves a large amount of computation.
(4) Skeletal animation data of different skeletal structures cannot be reused across the skeletons of different virtual character models. To migrate skeletal animation data from one virtual character model to another, the data must be retargeted to specify the mapping between the two skeletons, but the reuse effect is poor.
No effective solution to the above problems has yet been proposed.
It should be noted that the information disclosed in the above background section is only intended to enhance the understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art known to those of ordinary skill in the art.
Summary
Embodiments of the present disclosure provide an animation data processing method, a non-volatile storage medium, and an electronic device, so as to at least solve the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
According to one embodiment of the present disclosure, an animation data processing method is provided, including:
acquiring target motion description information of a target virtual character model, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton; inputting the target motion description information into a target neural network model corresponding to the target virtual character model to obtain target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture; and driving the target virtual character model to perform corresponding actions according to the target animation data.
Optionally, the key node bones are end bones of the complete skeleton.
Optionally, acquiring the target motion description information of the target virtual character model includes: acquiring basic motion description information of a basic virtual character model, where the basic virtual character model and the target virtual character model are character models of the same type; determining a correspondence between the basic virtual character model and the target virtual character model; and adjusting the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model.
Optionally, character models of the same type means that the basic virtual character model and the target virtual character model belong to the same biological classification.
Optionally, determining the correspondence between the basic virtual character model and the target virtual character model includes: determining a proportional relationship between the basic virtual character model and the target virtual character model according to the basic model size of the basic virtual character model and the target model size of the target virtual character model.
Optionally, adjusting the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model includes: adjusting bone-end position information in the basic motion description information according to the proportional relationship to obtain bone-end position information in the target motion description information.
Optionally, acquiring the basic motion description information of the basic virtual character model includes: acquiring original animation data; and determining the basic motion description information of the basic virtual character model from the original animation data according to a motion description information calculation rule corresponding to the basic virtual character model.
According to one embodiment of the present disclosure, an animation data processing device is also provided, including:
an acquisition module configured to acquire target motion description information of a target virtual character model, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton; a processing module configured to input the target motion description information into a target neural network model corresponding to the target virtual character model to obtain target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture; and a driving module configured to drive the target virtual character model to perform corresponding actions according to the target animation data.
Optionally, the key node bones are end bones of the complete skeleton.
Optionally, the acquisition module is configured to acquire basic motion description information of a basic virtual character model, where the basic virtual character model and the target virtual character model are character models of the same type; determine a correspondence between the basic virtual character model and the target virtual character model; and adjust the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model.
Optionally, character models of the same type means that the basic virtual character model and the target virtual character model belong to the same biological classification.
Optionally, the acquisition module is configured to determine a proportional relationship between the basic virtual character model and the target virtual character model according to the basic model size of the basic virtual character model and the target model size of the target virtual character model.
Optionally, the acquisition module is configured to adjust bone-end position information in the basic motion description information according to the proportional relationship to obtain bone-end position information in the target motion description information.
Optionally, the acquisition module is configured to acquire original animation data, and determine the basic motion description information of the basic virtual character model from the original animation data according to a motion description information calculation rule corresponding to the basic virtual character model.
According to one embodiment of the present disclosure, a non-volatile storage medium is also provided, in which a computer program is stored, where the computer program is configured to execute the animation data processing method of any one of the above when run.
According to one embodiment of the present disclosure, a processor is also provided, where the processor is configured to run a program, and the program is configured to execute the animation data processing method of any one of the above when run.
According to one embodiment of the present disclosure, an electronic device is also provided, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the animation data processing method of any one of the above.
In at least some embodiments of the present disclosure, target motion description information of a target virtual character model is acquired, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation and the key node bones are a subset of the complete skeleton of the character skeleton. The target motion description information is input into a target neural network model corresponding to the target virtual character model to obtain target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on skeletal animation training data corresponding to the target virtual character model and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture. The target virtual character model is then driven to perform corresponding actions according to the target animation data. In this way, the position information of the key node bones recorded in the target motion description information is used to restore the actions performed by the target virtual character model: by inputting the target motion description information into the target neural network model for prediction, the target animation data of the target virtual character model can be obtained, thereby driving the target virtual character model to perform the corresponding actions. This achieves the technical effects of effectively reducing the loading time of skeletal animation and reducing the memory occupied by skeletal animation, and thus solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The drawings described here are used to provide a further understanding of the present disclosure and constitute a part of this application. The exemplary embodiments of the present disclosure and their description are used to explain the present disclosure and do not constitute an improper limitation of the present disclosure. In the drawings:
Fig. 1 is a hardware structure block diagram of a mobile terminal for an animation data processing method according to one embodiment of the present disclosure;
Fig. 2 is a flowchart of an animation data processing method according to one embodiment of the present disclosure;
Fig. 3 is a schematic diagram of generating motion description information according to an optional embodiment of the present disclosure;
Fig. 4 is a flowchart of acquiring target animation data of a virtual character model based on a neural network model according to an optional embodiment of the present disclosure;
Fig. 5 is a schematic diagram of predicting the complete posture of a virtual character model based on a neural network model according to an optional embodiment of the present disclosure;
Fig. 6 is a structural block diagram of an animation data processing device according to one embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the solution of the present disclosure, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first", "second", etc. in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described here can be implemented in an order other than those illustrated or described here. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product, or device.
First, some of the nouns or terms that appear in the description of the embodiments of the present application are explained as follows:
(1) Skeletal Animation: a type of model animation (model animation includes vertex animation and skeletal animation). Skeletal animation usually contains two parts of data, bones and skin. Interconnected bones form a skeleton structure, and animation is generated by changing the orientation and position of the bones.
(2) Skinned Mesh: attaching (binding) the vertices of a mesh to the bones, where each vertex can be controlled by multiple bones, so that the vertices at a joint change position under the simultaneous pull of parent and child bones, eliminating gaps. The skin is jointly defined by each bone and the weight of each bone's influence on each vertex.
(3) Neural Network: in the fields of machine learning and cognitive science, a mathematical or computational model that imitates the structure and function of a biological neural network (for example, the central nervous system of an animal, especially the brain), used to estimate or approximate functions.
(4) Inverse Kinematics (IK): a method that first determines the position of a child bone and then derives, in reverse, the positions of the multiple levels of parent bones on its bone chain, thereby determining the entire bone chain. That is, the process of determining the state of the skeleton end and solving the state of the entire bone chain in reverse (see the sketch after this list).
(5) Animation Blending: a processing method that allows multiple frames of animation clips to contribute to the final pose of a virtual character model. More precisely, it combines multiple input poses to produce the final pose of the skeleton.
(6) Animation Retargeting: a feature that allows animation to be reused between virtual character models that share the same skeleton asset but differ greatly in proportions. Retargeting prevents the skeleton generating the animation from losing its proportions or producing unnecessary deformation when using animation from a virtual character model of a different shape.
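To make definition (4) concrete, here is a minimal sketch of an analytic two-bone IK solve in two dimensions. Everything in it (the function name `solve_two_bone_ik`, the planar setting, and the bend direction) is an illustrative assumption, not part of the disclosed method.

```python
import numpy as np

def solve_two_bone_ik(root, target, l1, l2):
    """Analytic 2-D two-bone IK: given a fixed root, bone lengths l1 and l2,
    and a desired end-effector position, return the hip and knee angles."""
    d = np.asarray(target, dtype=float) - np.asarray(root, dtype=float)
    # Clamp the distance to the reachable range so the triangle always closes.
    dist = np.clip(np.linalg.norm(d), 1e-6, l1 + l2)
    # Law of cosines gives the interior angle at the middle joint.
    cos_knee = (l1**2 + l2**2 - dist**2) / (2.0 * l1 * l2)
    knee = np.pi - np.arccos(np.clip(cos_knee, -1.0, 1.0))
    # First bone angle: direction to the target minus the interior offset.
    cos_off = (l1**2 + dist**2 - l2**2) / (2.0 * l1 * dist)
    hip = np.arctan2(d[1], d[0]) - np.arccos(np.clip(cos_off, -1.0, 1.0))
    return hip, knee
```

A production solver would operate on full 3-D bone chains and choose the bend plane explicitly; the sketch only shows the reverse-solving idea the definition describes.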
According to one embodiment of the present disclosure, an embodiment of an animation data processing method is provided. It should be noted that the steps shown in the flowcharts of the drawings can be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described can be executed in an order different from the one here.
The animation data processing method in one embodiment of the present disclosure can run on a terminal device or a server. The terminal device can be a local terminal device. When the animation data processing method runs on a server, the method can be implemented and executed based on a cloud interaction system, where the cloud interaction system includes a server and a client device.
In an optional implementation, various cloud applications, such as cloud games, can run under the cloud interaction system. Taking a cloud game as an example, a cloud game refers to a gaming mode based on cloud computing. In the running mode of a cloud game, the running body of the game program and the presentation body of the game screen are separated: the storage and running of the animation data processing method are completed on the cloud game server, and the role of the client device is to receive and send data and to present the game screen. For example, the client device can be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer, but the terminal device performing the information processing is the cloud game server in the cloud. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as game screens, and returns them to the client device through the network; finally, the client device decodes and outputs the game screen.
In an optional implementation, the terminal device can be a local terminal device. Taking a game as an example, the local terminal device stores the game program and is used to present the game screen. The local terminal device is used to interact with the player through a graphical user interface; that is, conventionally, the game program is downloaded, installed, and run through an electronic device. The local terminal device can provide the graphical user interface to the player in various ways; for example, it can be rendered and displayed on the display screen of the terminal, or provided to the player through holographic projection. For example, the local terminal device can include a display screen and a processor, where the display screen is used to present the graphical user interface, the graphical user interface includes the game screen, and the processor is used to run the game, generate the graphical user interface, and control the display of the graphical user interface on the display screen.
In a possible implementation, an embodiment of the present disclosure provides an animation data processing method, in which a graphical user interface is provided through a terminal device, where the terminal device can be the aforementioned local terminal device or the client device in the aforementioned cloud interaction system.
Taking running on a mobile terminal of a local terminal device as an example, the mobile terminal can be a terminal device such as a smart phone (e.g., an Android phone or iOS phone), a tablet computer, a handheld computer, a mobile Internet device (MID), a PAD, or a game console. Fig. 1 is a hardware structure block diagram of a mobile terminal for an animation data processing method according to one embodiment of the present disclosure. As shown in Fig. 1, the mobile terminal can include one or more processors 102 (only one is shown in Fig. 1; the processor 102 can include but is not limited to a processing device such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microprocessor (MCU), a programmable logic device (FPGA), a neural network processor (NPU), a tensor processor (TPU), or an artificial intelligence (AI) type processor) and a memory 104 for storing data. Optionally, the above mobile terminal can also include a transmission device 106 for communication functions, an input/output device 108, and a display device 110. Those of ordinary skill in the art can understand that the structure shown in Fig. 1 is only illustrative and does not limit the structure of the above mobile terminal. For example, the mobile terminal can also include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1.
The memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the animation data processing method in an embodiment of the present disclosure. The processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, thereby realizing the above animation data processing method. The memory 104 can include high-speed random access memory, and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 can further include memory remotely located relative to the processor 102, and these remote memories can be connected to the mobile terminal through a network. Examples of the above network include but are not limited to the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the above network can include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 can be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
The input of the input/output device 108 can come from multiple human interface devices (HIDs), for example, a keyboard and mouse, a gamepad, or other dedicated game controllers (such as a steering wheel, fishing rod, dance mat, or remote control). In addition to providing input functions, some human interface devices can also provide output functions, such as the force feedback and vibration of a gamepad or the audio output of a controller.
The display device 110 can be, for example, a head-up display (HUD), a touch-screen liquid crystal display (LCD), or a touch display (also called a "touch screen" or "touch display screen"). The liquid crystal display enables the user to interact with the user interface of the mobile terminal. In some embodiments, the above mobile terminal has a graphical user interface (GUI), and the user can perform human-machine interaction with the GUI through finger contact and/or gestures on the touch-sensitive surface. The human-machine interaction functions here optionally include the following interactions: creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving e-mail, call interfaces, playing digital video, playing digital music, and/or web browsing. The executable instructions for executing the above human-machine interaction functions are configured/stored in one or more processor-executable computer program products or readable storage media.
This embodiment provides an animation data processing method running on the above mobile terminal. Fig. 2 is a flowchart of an animation data processing method according to one embodiment of the present disclosure. As shown in Fig. 2, the method includes the following steps:
Step S20: acquire target motion description information of a target virtual character model, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton;
The above target virtual character model can be a virtual human model, a virtual animal model, or the like. The above target motion description information is used to record the position information of the key node bones of the character skeleton of the target virtual character model in each frame of character animation. The key node bones are a subset of the complete skeleton (i.e., the whole-body skeleton) of the character skeleton. During the game running phase, in response to a touch operation performed on the graphical user interface of the mobile terminal, the target control touched by the game player can be determined, and a control instruction corresponding to the target control is generated accordingly. The mobile terminal then controls the target virtual character model to perform the corresponding action according to the generated control instruction, so as to acquire the corresponding target description information when detecting that the target virtual character model performs the corresponding action. For example: in response to a touch operation performed on the graphical user interface of the mobile terminal, it is determined that the game player has touched a jump control, and a jump instruction corresponding to that control is generated accordingly. The mobile terminal then controls the target virtual character model to perform the corresponding jump action according to the generated jump instruction, so as to acquire the corresponding target description information when detecting that the target virtual character model performs the corresponding jump action. The target motion description information will be input into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, and the target virtual character model is then driven to perform the corresponding action according to the target animation data.
In an optional embodiment, the above key node bones are the end bones of the complete skeleton. That is, the target motion description information can be a series of key points on the character skeleton of the virtual character model, responsible for recording the position data of the skeleton ends (for a human skeleton, usually the left and right wrists, left and right ankles, hips, and head), so that the animation data is decoupled from any specific skeleton and differently stylized actions can be generated for different skeletons. Since the target motion description information mainly records a small amount of skeleton-end position data, it greatly reduces the storage space occupied by skeletal animation data and reduces the number of animations that need to be loaded.
In addition, since the target motion description information stores only the position data of the skeleton ends, it greatly simplifies the animation data; and since it is decoupled from specific skeleton information, the animation data is universal and can be applied to the character skeletons of other virtual character models of the same type. Moreover, since the target animation data is stored frame by frame, and each frame records the skeleton-end position data of the target virtual character model in the character pose at the current moment, the corresponding skeleton-end position data can be obtained from the target animation data according to the time information of the current playback progress.
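Purely to make this frame-by-frame storage layout concrete, the following is a minimal sketch. The key-node set, the 30 fps rate, and the class name `MotionDescription` are assumptions made for illustration; the disclosure does not prescribe a particular layout.

```python
import numpy as np

# Hypothetical key-node set for a humanoid skeleton: a small subset of the
# complete skeleton, stored per frame in model space.
KEY_NODES = ("hips", "head", "hand_l", "hand_r", "foot_l", "foot_r")

class MotionDescription:
    """Frame-indexed model-space positions of the key-node bones."""

    def __init__(self, positions, fps=30.0):
        # positions: (num_frames >= 2, len(KEY_NODES), 3) array
        self.positions = np.asarray(positions, dtype=np.float32)
        self.fps = fps

    def sample(self, t):
        """Key-node positions at playback time t (seconds), linearly
        interpolated between the two nearest stored frames."""
        f = t * self.fps
        i = int(np.clip(np.floor(f), 0, len(self.positions) - 2))
        w = np.clip(f - i, 0.0, 1.0)
        return (1.0 - w) * self.positions[i] + w * self.positions[i + 1]
```

Linear interpolation is used here only for brevity; any interpolation scheme compatible with the runtime would serve.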
Step S21: input the target motion description information into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture;
The above target neural network model is a model obtained by machine-learning training on the skeletal animation training data corresponding to the target virtual character model. It can be trained by setting the skeletal animation training data as the input parameters of an initial neural network model. The character skeletons of different virtual character models correspond to different target neural network models.
The target animation data includes multi-frame target character animation, and each frame records the position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture. The actions performed by the target virtual character model can be restored from the small amount of skeleton-end position data recorded in the target motion description information. By inputting the skeleton-end position data into the target neural network model for prediction, the whole-body skeleton information of the target virtual character model can be obtained; the whole-body skeleton state is thus restored from the target motion description information to obtain the target animation data of the target virtual character model. To conform to the posture of each virtual character model's skeleton, animation training data needs to be acquired by motion capture, and the target neural network model is trained with this animation training data to improve prediction accuracy. Since the input target motion description information is not the full bone data, it is applicable to the skeletons of various virtual character models.
To obtain the complete posture (i.e., all bone positions) of the target virtual character model in each frame of animation from the skeleton-end position data, the target neural network model can be used to predict the complete posture in each frame. By inputting the skeleton-end position data obtained after pose-matching processing into the target neural network model corresponding to the character skeleton of the target virtual character model, all bone positions of the target virtual character model can be predicted, thereby restoring the complete posture in each frame.
Since the neural network model corresponding to the character skeleton of the target virtual character model is only used to predict all the bone positions of that target virtual character model, predicting different character skeletons requires training separate neural network models. Accurate prediction by the target neural network model relies on inputting a large amount of animation training data. Using motion capture and similar methods, a large amount of animation training data, i.e., original animation data recording all bone positions, can be generated for the target virtual character model. For each neural network model, the more animation data used for training and the richer its types, the better the training effect.
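As an illustration only, one plausible shape for such a per-skeleton predictor is a small fully connected network mapping a frame's flattened key-node positions to all bone positions. The layer widths, the bone counts, and the choice of PyTorch below are assumptions; the disclosure does not fix a network architecture.

```python
import torch
import torch.nn as nn

NUM_KEY_NODES = 6   # assumed key-node count (hips, head, wrists, ankles)
NUM_BONES = 60      # assumed full-skeleton bone count for this character

class PosePredictor(nn.Module):
    """Per-skeleton model: key-node positions of one frame in, all bone
    positions of the same frame out. Each character skeleton trains its own
    instance on its own motion-capture data."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEY_NODES * 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, NUM_BONES * 3),
        )

    def forward(self, key_positions):       # (batch, NUM_KEY_NODES * 3)
        return self.net(key_positions)      # (batch, NUM_BONES * 3)

# Supervised training pairs come straight from the captured frames:
# inputs are the key-node positions, targets the recorded full poses.
model = PosePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
```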
Step S22: drive the target virtual character model to perform the corresponding action according to the target animation data.
For example: drive the target virtual character model to perform a corresponding walking action according to walking animation data, a corresponding running action according to running animation data, or a corresponding jumping action according to jumping animation data.
Through the above steps, target motion description information of a target virtual character model can be acquired, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation and the key node bones are a subset of the complete skeleton of the character skeleton; the target motion description information is input into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture; and the target virtual character model is driven to perform the corresponding action according to the target animation data. In this way, the position information of the key node bones recorded in the target motion description information is used to restore the actions performed by the target virtual character model: by inputting the target motion description information into the target neural network model for prediction, the target animation data of the target virtual character model can be obtained, thereby driving the target virtual character model to perform the corresponding action. This achieves the technical effects of effectively reducing the loading time of skeletal animation and reducing the memory occupied by skeletal animation, and thus solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
Optionally, in step S20, acquiring the target motion description information of the target virtual character model can include the following steps:
Step S200: acquire basic motion description information of a basic virtual character model, where the basic virtual character model and the target virtual character model are character models of the same type;
Character models of the same type means that the basic virtual character model and the target virtual character model belong to the same biological classification. In an optional example, the basic virtual character model and the target virtual character model belong to the same human classification; for example, the basic virtual character model is a virtual adult model and the target virtual character model is a virtual child model. In another optional example, the basic virtual character model and the target virtual character model belong to the same animal classification; for example, the basic virtual character model is a virtual cheetah model and the target virtual character model is a virtual hound model.
In the process of acquiring the basic motion description information of the basic virtual character model, original animation data can be acquired first, and then the basic motion description information of the basic virtual character model can be determined from the original animation data according to the motion description information calculation rule corresponding to the basic virtual character model. Specifically, the original animation data can first be collected by means such as motion capture; next, the bone joint positions are obtained from the collected original animation data; and then the skeleton-end position data is calculated from the bone joint positions. It should be noted that the above motion description information is applicable to the character skeletons of character models of the same type.
Fig. 3 is a schematic diagram of generating motion description information according to an optional embodiment of the present disclosure. As shown in Fig. 3, the right side shows the original animation data of a virtual character model collected by motion capture or the like; the bone joint positions are obtained from the collected original animation data, and the skeleton-end position data shown on the left side (i.e., the key points contained in the motion description information) is calculated from the bone joint positions. The key points are obtained through a preset calculation rule, which specifies the bone joints participating in the calculation and the calculation method. Character skeletons of the same type use the same set of calculation rules to generate key points; for example, virtual human body models share one set of calculation rules and virtual reptile models share another, and the foot key point positions are obtained from the foot joints of the human skeleton through a preset calculation rule.
In addition, in model space, all bone joint or key point positions share the coordinate system of that space, whereas in joint space, a coordinate system is established at each parent joint and the position coordinates of a child joint depend on its parent joint. Therefore, the skeleton-end position data is defined in model space rather than joint space. Moreover, because the skeleton-end position data is defined in model space, the method effectively overcomes the animation precision loss problem: in the related art, data compression and floating-point precision limits produce errors that accumulate, leaving the end bones with insufficient precision.
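A minimal sketch of one such calculation rule follows, under the assumption that the raw animation already provides model-space joint positions per frame and that each key point is taken directly from one end joint; the joint names in `END_JOINTS` are hypothetical.

```python
import numpy as np

# Hypothetical rule: each key point is read off one designated end joint.
END_JOINTS = {
    "hips": "pelvis", "head": "head",
    "hand_l": "wrist_l", "hand_r": "wrist_r",
    "foot_l": "ankle_l", "foot_r": "ankle_r",
}

def extract_motion_description(frames):
    """frames: iterable of dicts mapping joint name -> model-space (3,)
    position. Returns (num_frames, num_key_points, 3) key-point positions."""
    keys = list(END_JOINTS)
    return np.stack([
        np.stack([np.asarray(frame[END_JOINTS[k]], dtype=float) for k in keys])
        for frame in frames
    ])
```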
Step S201: determine the correspondence between the basic virtual character model and the target virtual character model;
By using a neural network model, the animation effect can present different stylizations on different character skeletons of virtual character models of the same type. Since the neural network model is trained on part of the virtual character model's original animation data, its output is affected by the training data, which helps preserve the action style of that virtual character model. For example, for the same running animation, the action style of a virtual adult character model will differ from that of a virtual child character model, reflecting adult movement characteristics and child movement characteristics respectively.
In the process of determining the correspondence between the basic virtual character model and the target virtual character model, the proportional relationship between the two can be determined according to the basic model size of the basic virtual character model and the target model size of the target virtual character model.
Step S202: adjust the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model.
In this process, the bone-end position information in the basic motion description information can be adjusted according to the proportional relationship to obtain the bone-end position information in the target motion description information.
Since the body proportions of the character skeletons of different virtual character models differ, to apply the skeleton-end position data to different character skeletons of virtual character models of the same type, the skeleton-end position data needs to be scaled according to the body proportion information. For example: to apply the skeleton-end position data obtained from a virtual adult character model to the character skeletons of virtual child character models of the same type, and given that the bone joint sizes of the two differ, the skeleton-end position data needs to be scaled according to the adult-to-child body proportion information (i.e., pose matching), so that the adjusted skeleton-end position data can be applied to the character skeletons of virtual child character models of the same type.
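The scaling step might be as simple as the following sketch, which assumes a single uniform ratio derived from the two models' overall heights; per-limb ratios would be a straightforward extension, and all names here are illustrative.

```python
import numpy as np

def match_pose(base_key_positions, base_height, target_height):
    """Rescale model-space key-node positions recorded on the base skeleton
    so they fit a target skeleton of a different overall size."""
    return np.asarray(base_key_positions) * (target_height / base_height)

# e.g. adult -> child: key positions from a 1.8 m model rescaled for 1.2 m
adult_keys = np.random.rand(6, 3) * 1.8   # stand-in for sampled key positions
child_keys = match_pose(adult_keys, base_height=1.8, target_height=1.2)
```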
In an optional embodiment, all the bone positions of a specified virtual character model are predicted by the neural network model, forming the complete posture of the specified virtual character model in each frame of animation. If the specified virtual character model has additional constraints and restrictions in a specific game scene (for example, its left foot needs to step on the virtual ground in the game scene), its specific posture needs to be corrected according to those constraints and restrictions. The constraints and restrictions usually act on the skeleton-end position data, which is therefore equivalent to re-determining the skeleton-end position data. After the skeleton-end position data is re-determined, an IK solve can be performed with the re-determined skeleton-end position data to correct and re-determine all the bone positions.
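As a sketch of how such a constraint could act on the end positions before the IK pass, assuming a flat ground plane at a known height and a Y-up coordinate system (both are assumptions, not part of the disclosure):

```python
def apply_ground_constraint(key_positions, foot_index, ground_y=0.0):
    """Pin a foot key point onto the ground plane (Y-up assumed) so the
    subsequent IK solve plants the foot instead of leaving it floating."""
    corrected = key_positions.copy()
    if corrected[foot_index][1] < ground_y:
        corrected[foot_index][1] = ground_y
    return corrected
```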
Fig. 4 is a flowchart of acquiring target animation data of a virtual character model based on a neural network model according to an optional embodiment of the present disclosure. As shown in Fig. 4, the flow can include the following processing steps:
Step S402: generate a large amount of animation training data for the target virtual character model by means such as motion capture, i.e., original animation data recording all bone positions.
Step S404: since the target neural network model corresponding to the character skeleton of the target virtual character model is only used to predict all bone positions of that target virtual character model, predicting different character skeletons requires training separate neural network models. Accurate prediction by the target neural network model relies on inputting a large amount of animation training data.
Step S406: collect basic animation data of the basic virtual character model by means such as motion capture.
Step S408: acquire the motion description information generation algorithm.
Step S410: using the motion description information generation algorithm, first obtain the bone joint positions from the collected basic animation data, and then calculate the skeleton-end position data, i.e., the basic motion description information, from the bone joint positions.
Step S412: since the body proportions of the character skeletons of different virtual character models differ, to apply the skeleton-end position data to different character skeletons of virtual character models of the same type, pose matching needs to be performed on the skeleton-end position data according to the body proportion information to obtain the target motion description information.
Step S414: use the position information of the key node bones recorded in the target motion description information to restore the actions performed by the target virtual character model; by inputting the target motion description information into the target neural network model for prediction, the target animation data of the target virtual character model can be obtained.
Step S416: determine whether the target virtual character model has additional constraints and restrictions in the specific game scene; if so, go to step S418; if not, continue to step S420.
Step S418: correct the specific posture of the target virtual character model according to the constraints and restrictions. The constraints and restrictions usually act on the skeleton-end position data, which is therefore equivalent to re-determining the skeleton-end position data. After the skeleton-end position data is re-determined, an IK solve can be performed with the re-determined skeleton-end position data to correct and re-determine all the bone positions.
Step S420: finally obtain the target animation data of the target virtual character model, so as to drive the target virtual character model to perform the corresponding action according to the target animation data.
Fig. 5 is a schematic diagram of predicting the complete posture of a virtual character model based on a neural network model according to an optional embodiment of the present disclosure. As shown in Fig. 5, first, the skeleton-end position data (i.e., motion description information) of a specific frame is sampled from the original animation data according to the time information of the current animation playback progress. Next, pose-matching processing is performed between the sampled skeleton-end position data and the skeletons of virtual character model A and virtual character model B respectively, obtaining the matched end position data corresponding to each model. Then, the matched end position data corresponding to virtual character model A is input into neural network model A corresponding to virtual character model A to output all the bone poses of virtual character model A, and the matched end position data corresponding to virtual character model B is input into neural network model B corresponding to virtual character model B to output all the bone poses of virtual character model B. Finally, IK techniques are used to adjust all the bone positions to obtain the final animation pose.
In an optional embodiment, the character-skeleton-end position information of the target virtual character model in multiple consecutive frames of character animation can first be obtained from the target animation data based on the time information corresponding to the playback progress of the target animation data; next, animation blending is performed on the obtained character-skeleton-end position information to obtain a blending result; and then the target neural network model analyzes the blending result to predict the blended animation data of the target virtual character model. That is, when the animation poses of a virtual character model in multiple consecutive frames of animation need to be fused, the blending calculation can be performed only on the skeleton-end position data, which speeds up blending. Especially when the virtual character model is in a complex motion state, where the animation poses of a dozen or more frames of animation need to be fused, performing the blending calculation only on the skeleton-end position data can greatly improve the animation blending speed.
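A minimal sketch of that blend, assuming non-negative weights normalized over the poses being fused; only the low-dimensional key-node positions are averaged, and the network then reconstructs the full pose from the result.

```python
import numpy as np

def blend_key_positions(poses, weights):
    """poses: (num_poses, num_key_nodes, 3) key-node positions to fuse;
    weights: (num_poses,) blend weights. Returns fused key-node positions."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the contributions sum to one
    return np.tensordot(w, np.asarray(poses), axes=1)
```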
Through the description of the above implementations, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to make a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) execute the methods described in the various embodiments of the present disclosure.
This embodiment also provides an animation data processing device, which is used to implement the above embodiments and preferred implementations; what has already been explained will not be repeated. As used below, the term "module" can be a combination of software and/or hardware that realizes a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and conceived.
Fig. 6 is a structural block diagram of an animation data processing device according to one embodiment of the present disclosure. As shown in Fig. 6, the device includes: an acquisition module 10 configured to acquire target motion description information of a target virtual character model, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton; a processing module 20 configured to input the target motion description information into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture; and a driving module 30 configured to drive the target virtual character model to perform the corresponding action according to the target animation data.
Optionally, the key node bones are end bones of the complete skeleton.
Optionally, the acquisition module 10 is configured to acquire basic motion description information of a basic virtual character model, where the basic virtual character model and the target virtual character model are character models of the same type; determine a correspondence between the basic virtual character model and the target virtual character model; and adjust the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model.
Optionally, character models of the same type means that the basic virtual character model and the target virtual character model belong to the same biological classification.
Optionally, the acquisition module 10 is configured to determine the proportional relationship between the basic virtual character model and the target virtual character model according to the basic model size of the basic virtual character model and the target model size of the target virtual character model.
Optionally, the acquisition module 10 is configured to adjust the bone-end position information in the basic motion description information according to the proportional relationship to obtain the bone-end position information in the target motion description information.
Optionally, the acquisition module 10 is configured to acquire original animation data, and determine the basic motion description information of the basic virtual character model from the original animation data according to the motion description information calculation rule corresponding to the basic virtual character model.
It should be noted that each of the above modules can be implemented by software or hardware. For the latter, this can be achieved in the following ways, but is not limited to them: the above modules are all located in the same processor; or the above modules are located in different processors in any combination.
An embodiment of the present disclosure also provides a non-volatile storage medium in which a computer program is stored, where the computer program is configured to execute the steps of any one of the above method embodiments when run.
Optionally, in this embodiment, the above non-volatile storage medium can be configured to store a computer program for executing the following steps:
S1: acquire target motion description information of a target virtual character model, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton;
S2: input the target motion description information into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture;
S3: drive the target virtual character model to perform the corresponding action according to the target animation data.
Optionally, the key node bones are end bones of the complete skeleton.
Optionally, acquiring the target motion description information of the target virtual character model includes: acquiring basic motion description information of a basic virtual character model, where the basic virtual character model and the target virtual character model are character models of the same type; determining a correspondence between the basic virtual character model and the target virtual character model; and adjusting the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model.
Optionally, character models of the same type means that the basic virtual character model and the target virtual character model belong to the same biological classification.
Optionally, determining the correspondence between the basic virtual character model and the target virtual character model includes: determining a proportional relationship between the basic virtual character model and the target virtual character model according to the basic model size of the basic virtual character model and the target model size of the target virtual character model.
Optionally, adjusting the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model includes: adjusting bone-end position information in the basic motion description information according to the proportional relationship to obtain bone-end position information in the target motion description information.
Optionally, acquiring the basic motion description information of the basic virtual character model includes: acquiring original animation data; and determining the basic motion description information of the basic virtual character model from the original animation data according to the motion description information calculation rule corresponding to the basic virtual character model.
Optionally, for specific examples in this embodiment, reference can be made to the examples described in the above embodiments and optional implementations, which will not be repeated here.
Optionally, in this embodiment, the above non-volatile storage medium can include but is not limited to various media that can store a computer program, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disc.
In at least some embodiments of the present disclosure, target motion description information of a target virtual character model is acquired, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation and the key node bones are a subset of the complete skeleton of the character skeleton; the target motion description information is input into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on the skeletal animation training data corresponding to the target virtual character model and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture; and the target virtual character model is driven to perform the corresponding action according to the target animation data. In this way, the position information of the key node bones recorded in the target motion description information is used to restore the actions performed by the target virtual character model: by inputting the target motion description information into the target neural network model for prediction, the target animation data of the target virtual character model can be obtained, thereby driving the target virtual character model to perform the corresponding action. This achieves the technical effects of effectively reducing the loading time of skeletal animation and reducing the memory occupied by skeletal animation, and thus solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
An embodiment of the present disclosure also provides an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps of any one of the above method embodiments.
Optionally, the above electronic device can also include a transmission device and an input/output device, where the transmission device is connected to the above processor and the input/output device is connected to the above processor.
Optionally, in this embodiment, the above processor can be configured to execute the following steps through a computer program:
S1: acquire target motion description information of a target virtual character model, where the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton;
S2: input the target motion description information into the target neural network model corresponding to the target virtual character model to obtain the target animation data of the target virtual character model, where the target neural network model is a model obtained by machine-learning training on the skeletal animation training data corresponding to the target virtual character model, and the target animation data includes multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture;
S3: drive the target virtual character model to perform the corresponding action according to the target animation data.
Optionally, the key node bones are end bones of the complete skeleton.
Optionally, acquiring the target motion description information of the target virtual character model includes: acquiring basic motion description information of a basic virtual character model, where the basic virtual character model and the target virtual character model are character models of the same type; determining a correspondence between the basic virtual character model and the target virtual character model; and adjusting the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model.
Optionally, character models of the same type means that the basic virtual character model and the target virtual character model belong to the same biological classification.
Optionally, determining the correspondence between the basic virtual character model and the target virtual character model includes: determining a proportional relationship between the basic virtual character model and the target virtual character model according to the basic model size of the basic virtual character model and the target model size of the target virtual character model.
Optionally, adjusting the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model includes: adjusting bone-end position information in the basic motion description information according to the proportional relationship to obtain bone-end position information in the target motion description information.
Optionally, acquiring the basic motion description information of the basic virtual character model includes: acquiring original animation data; and determining the basic motion description information of the basic virtual character model from the original animation data according to the motion description information calculation rule corresponding to the basic virtual character model.
Optionally, for specific examples in this embodiment, reference can be made to the examples described in the above embodiments and optional implementations, which will not be repeated here.
In at least some embodiments of the present disclosure, the position information of the key node bones recorded in the target motion description information is thus used to restore the actions performed by the target virtual character model: by inputting the target motion description information into the target neural network model for prediction, the target animation data of the target virtual character model can be obtained, thereby driving the target virtual character model to perform the corresponding action. This achieves the technical effects of effectively reducing the loading time of skeletal animation and reducing the memory occupied by skeletal animation, and thus solves the technical problem that the skeletal animation used in game scenes provided in the related art not only takes a long time to load but also occupies too much memory.
The serial numbers of the above embodiments of the present disclosure are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present disclosure, the description of each embodiment has its own focus. For parts not detailed in a certain embodiment, reference can be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content can be realized in other ways. The device embodiments described above are only illustrative. For example, the division of the units may be a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of units or modules may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present disclosure. The aforementioned storage media include various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disc.
The above are only preferred implementations of the present disclosure. It should be pointed out that those of ordinary skill in the art can make several improvements and modifications without departing from the principles of the present disclosure, and these improvements and modifications should also be regarded as within the protection scope of the present disclosure.

Claims (11)

  1. An animation data processing method, comprising:
    acquiring target motion description information of a target virtual character model, wherein the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton;
    inputting the target motion description information into a target neural network model corresponding to the target virtual character model to obtain target animation data of the target virtual character model, wherein the target neural network model is a model obtained by machine-learning training on skeletal animation training data corresponding to the target virtual character model, and the target animation data comprises multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture;
    driving the target virtual character model to perform a corresponding action according to the target animation data.
  2. The animation data processing method according to claim 1, wherein the key node bones are end bones of the complete skeleton.
  3. The animation data processing method according to claim 1, wherein acquiring the target motion description information of the target virtual character model comprises:
    acquiring basic motion description information of a basic virtual character model, wherein the basic virtual character model and the target virtual character model are character models of the same type;
    determining a correspondence between the basic virtual character model and the target virtual character model;
    adjusting the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model.
  4. The animation data processing method according to claim 3, wherein character models of the same type means that the basic virtual character model and the target virtual character model belong to the same biological classification.
  5. The animation data processing method according to claim 3, wherein determining the correspondence between the basic virtual character model and the target virtual character model comprises:
    determining a proportional relationship between the basic virtual character model and the target virtual character model according to a basic model size of the basic virtual character model and a target model size of the target virtual character model.
  6. The animation data processing method according to claim 5, wherein adjusting the basic motion description information according to the correspondence to obtain the target motion description information of the target virtual character model comprises:
    adjusting bone-end position information in the basic motion description information according to the proportional relationship to obtain bone-end position information in the target motion description information.
  7. The animation data processing method according to claim 3, wherein acquiring the basic motion description information of the basic virtual character model comprises:
    acquiring original animation data;
    determining the basic motion description information of the basic virtual character model from the original animation data according to a motion description information calculation rule corresponding to the basic virtual character model.
  8. An animation data processing device, comprising:
    an acquisition module configured to acquire target motion description information of a target virtual character model, wherein the target motion description information records position information of key node bones of the character skeleton of the target virtual character model in each frame of character animation, and the key node bones are a subset of the complete skeleton of the character skeleton;
    a processing module configured to input the target motion description information into a target neural network model corresponding to the target virtual character model to obtain target animation data of the target virtual character model, wherein the target neural network model is a model obtained by machine-learning training on skeletal animation training data corresponding to the target virtual character model, and the target animation data comprises multi-frame target character animation, each frame of which records position information of the complete skeleton of the character skeleton of the target virtual character model in the current posture;
    a driving module configured to drive the target virtual character model to perform a corresponding action according to the target animation data.
  9. A non-volatile storage medium, in which a computer program is stored, wherein the computer program is configured to execute the animation data processing method according to any one of claims 1 to 7 when run.
  10. A processor, wherein the processor is configured to run a program, and the program is configured to execute the animation data processing method according to any one of claims 1 to 7 when run.
  11. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the animation data processing method according to any one of claims 1 to 7.
PCT/CN2022/085465 2021-08-11 2022-04-07 Animation data processing method, non-volatile storage medium and electronic device WO2023015921A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110920138.3A CN113706666A (zh) 2021-08-11 2021-08-11 Animation data processing method, non-volatile storage medium and electronic device
CN202110920138.3 2021-08-11

Publications (1)

Publication Number Publication Date
WO2023015921A1 true WO2023015921A1 (zh) 2023-02-16

Family

ID=78652568

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/085465 WO2023015921A1 (zh) Animation data processing method, non-volatile storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN113706666A (zh)
WO (1) WO2023015921A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706666A (zh) * 2021-08-11 2021-11-26 网易(杭州)网络有限公司 Animation data processing method, non-volatile storage medium and electronic device
CN114549706A (zh) * 2022-02-21 2022-05-27 成都工业学院 Animation generation method and animation generation device
CN114602177A (zh) * 2022-03-28 2022-06-10 百果园技术(新加坡)有限公司 Action control method, apparatus, device and storage medium for a virtual character
CN114998491B (zh) * 2022-08-01 2022-11-18 阿里巴巴(中国)有限公司 Digital human driving method, apparatus, device and storage medium
CN115761074B (zh) * 2022-11-18 2023-05-12 北京优酷科技有限公司 Animation data processing method, apparatus, electronic device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080024487A1 (en) * 2006-07-31 2008-01-31 Michael Isner Converting deformation data for a mesh to animation data for a skeleton, skinning and shading in a runtime computer graphics animation engine
CN102708582A (zh) * 2012-05-08 2012-10-03 电子科技大学 Character motion retargeting method oriented to heterogeneous topologies
CN106485773A (zh) * 2016-09-14 2017-03-08 厦门幻世网络科技有限公司 Method and device for generating animation data
CN106780681A (zh) * 2016-12-01 2017-05-31 北京像素软件科技股份有限公司 Character action generation method and device
CN112037310A (zh) * 2020-08-27 2020-12-04 成都先知者科技有限公司 Neural-network-based game character action recognition and generation method
CN113706666A (zh) * 2021-08-11 2021-11-26 网易(杭州)网络有限公司 Animation data processing method, non-volatile storage medium and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3797404A4 (en) * 2018-05-22 2022-02-16 Magic Leap, Inc. SKELETAL SYSTEMS TO ANIMATE VIRTUAL AVATARS
CN111161427A (zh) * 2019-12-04 2020-05-15 北京代码乾坤科技有限公司 Adaptive adjustment method and device for a virtual skeleton model, and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080024487A1 (en) * 2006-07-31 2008-01-31 Michael Isner Converting deformation data for a mesh to animation data for a skeleton, skinning and shading in a runtime computer graphics animation engine
CN102708582A (zh) * 2012-05-08 2012-10-03 电子科技大学 Character motion retargeting method oriented to heterogeneous topologies
CN106485773A (zh) * 2016-09-14 2017-03-08 厦门幻世网络科技有限公司 Method and device for generating animation data
CN106780681A (zh) * 2016-12-01 2017-05-31 北京像素软件科技股份有限公司 Character action generation method and device
CN112037310A (zh) * 2020-08-27 2020-12-04 成都先知者科技有限公司 Neural-network-based game character action recognition and generation method
CN113706666A (zh) * 2021-08-11 2021-11-26 网易(杭州)网络有限公司 Animation data processing method, non-volatile storage medium and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Master Thesis", 15 February 2021, HEFEI UNIVERSITY OF TECHNOLOGY, CN, article ZHOU, YANG: "Research on Motion Retargeting Method for Motion Data Represented by Joint Position", pages: 1 - 57, XP009543342, DOI: 10.27101/d.cnki.ghfgu.2020.000872 *

Also Published As

Publication number Publication date
CN113706666A (zh) 2021-11-26

Similar Documents

Publication Publication Date Title
WO2023015921A1 (zh) Animation data processing method, non-volatile storage medium and electronic device
JP7198332B2 (ja) 画像正則化及びリターゲティングシステム
US11836843B2 (en) Enhanced pose generation based on conditional modeling of inverse kinematics
CN110766776B (zh) Method and device for generating expression animation
US20120218262A1 (en) Animation of photo-images via fitting of combined models
CN103548012A (zh) 远程仿真计算设备
US11670030B2 (en) Enhanced animation generation based on video with local phase
US20220327755A1 (en) Artificial intelligence for capturing facial expressions and generating mesh data
US20230177755A1 (en) Predicting facial expressions using character motion states
US11830121B1 (en) Neural animation layering for synthesizing martial arts movements
CN115908664B (zh) Human-computer interaction animation generation method and apparatus, computer device, and storage medium
US20220172431A1 (en) Simulated face generation for rendering 3-d models of people that do not exist
US11893671B2 (en) Image regularization and retargeting system
CN114419211A (zh) Method and apparatus for controlling virtual character bones, storage medium, and electronic device
TWI814318B (zh) Method for training a model using simulated characters for animating a game character's facial expressions, and method for generating label values for a game character's facial expressions using three-dimensional (3D) image capture
US20240233231A1 (en) Avatar generation and augmentation with auto-adjusted physics for avatar motion
CN113827959B (zh) Game animation processing method and apparatus, and electronic device
CN114504825A (zh) Method and apparatus for adjusting a virtual character model, storage medium, and electronic device
CN112734940A (zh) VR content playback modification method and apparatus, computer device, and storage medium
Rajendran Understanding the Desired Approach for Animating Procedurally
WO2024151405A1 (en) Avatar generation and augmentation with auto-adjusted physics for avatar motion
CN114332308A (zh) Image processing method and apparatus, electronic device, and storage medium
CN118160008A (zh) Inferred skeletal structures for practical 3D assets
CN115937371A (zh) Character model generation method and system
CN117339212A (zh) Method for controlling virtual game character interaction, storage medium, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22854926

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE