CN115049767B - Animation editing method and device, computer equipment and storage medium - Google Patents
Animation editing method and device, computer equipment and storage medium
- Publication number
- CN115049767B (application CN202210984049.XA)
- Authority
- CN
- China
- Prior art keywords
- bone
- data
- animation
- skin
- target
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T13/00—Animation
        - G06T13/20—3D [Three Dimensional] animation
          - G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
        - G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V10/00—Arrangements for image or video recognition or understanding
        - G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
          - G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
            - G06V10/761—Proximity, similarity or dissimilarity measures
Abstract
The application relates to an animation editing method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring current bone animation data, an animation skin model and preset bone animation data; acquiring, from the preset bone animation data, target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range; obtaining a skin deformation fusion weight according to the bone data similarity between the current bone animation data and the target preset bone animation data; acquiring skin deformation parameters corresponding to the target preset bone animation data; obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertices of the animation skin model; and fusing the target skin deformation data with the skin deformation fusion weight to obtain the target animation corresponding to the current bone animation data. Storage space can be effectively saved while the precision of the edited animation is ensured.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to an animation editing method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology, three-dimensional animation design has produced great economic and social benefits for industry, and animation editing methods are of great significance for generating complete, continuous three-dimensional animation.
In the prior art, the skin deformation of an animated character is either solved directly in real time with a traditional skinning algorithm, which is slow and has low accuracy, or every piece of skin deformation information is stored, i.e. the position information of every vertex on the skin, which occupies a large amount of storage space.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an animation editing method, apparatus, computer device and computer-readable storage medium that can balance the precision of the edited animation against the space occupied by the stored animation data, effectively saving storage space while ensuring the precision of the edited animation.
An animation editing method, comprising:
acquiring current skeleton animation data, an animation skin model and preset skeleton animation data;
acquiring, from the preset bone animation data, target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range, wherein the bone data similarity is used for representing the degree of similarity between the current bone animation data and the preset bone animation data;
obtaining skin deformation fusion weight according to the bone data similarity of the current bone animation data and target preset bone animation data;
acquiring skin deformation parameters corresponding to target preset bone animation data, wherein the skin deformation parameters are obtained by performing data compression on the skin deformation data corresponding to the target preset bone animation data;
obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertex of the animation skin model;
and fusing based on the target skin deformation data and the skin deformation fusion weight to obtain the target animation corresponding to the current skeleton animation data.
In one embodiment, acquiring, from the preset bone animation data, target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range comprises:
calculating to obtain corresponding bone data similarity based on bone pose data and bone speed data in the current bone animation data and preset bone animation data, wherein the bone pose data comprise bone displacement and bone rotation, and the bone speed data comprise bone rotation speed and bone displacement speed;
acquiring a preset threshold range of the similarity of the bone data;
and obtaining target preset skeleton animation data according to the preset threshold range of the skeleton data similarity and the skeleton data similarity.
In one embodiment, calculating the corresponding bone data similarity based on the bone pose data and the bone velocity data in the current bone animation data and the preset bone animation data comprises:
obtaining a displacement difference factor according to a difference value between the bone displacement of the current bone animation data and the bone displacement of preset bone animation data;
obtaining a displacement speed difference factor according to a difference value between the bone displacement speed of the current bone animation data and the bone displacement speed of preset bone animation data;
obtaining a rotation amount difference factor according to a difference value between the bone rotation amount of the current bone animation data and the bone rotation amount of preset bone animation data;
obtaining a rotation speed difference factor according to a difference value between the bone rotation speed of the current bone animation data and the bone rotation speed of the preset bone animation data;
and obtaining the bone data similarity based on the fusion of the displacement difference factor, the displacement speed difference factor, the rotation amount difference factor and the rotation speed difference factor.
In one embodiment, the obtaining of the skin deformation fusion weight according to the bone data similarity between the current bone animation data and the target preset bone animation data comprises:
acquiring the similarity of target skeleton data, wherein the similarity of the target skeleton data is the similarity of the target preset skeleton animation data and the current skeleton animation data;
obtaining a similarity fusion factor by fusing the target bone data similarities;
and fusing the similarity of the target bone data with the similarity fusion factor to obtain the skin deformation fusion weight.
In an embodiment, before obtaining the skin deformation parameter corresponding to the target preset bone animation data, the method further includes:
obtaining an animated character skeleton model;
calculating the skinning weight of the vertex of the animation skin model relative to the animated character skeleton model, wherein the skinning weight is used for representing the relative position relationship of the vertex of the animation skin model with respect to the animated character skeleton model;
fusing the skinning weight and the position information of the vertex of the animation skin model to obtain an animation skinning model;
and obtaining the skin deformation parameter corresponding to the target preset bone animation data according to the skin deformation data corresponding to the target preset bone animation data and the animation skinning model.
In one embodiment, obtaining the target skin deformation data according to the skin deformation parameters and the position information of the vertices of the animated skin model includes:
acquiring an animated character skeleton model, wherein the animated character skeleton model comprises a main skeleton model and a correction skeleton model;
acquiring corrected motion parameters of a corrected bone model, wherein the corrected motion parameters comprise a corrected bone rotation matrix and a corrected bone translation vector;
calculating the correction skinning weight of the vertex of the animation skin model relative to the correction bone model, wherein the correction skinning weight is used for representing the relative position relationship of the vertex of the animation skin model relative to the correction bone model;
calculating the skinning weight of the vertex of the animation skin model relative to the main skeleton model, wherein the skinning weight is used for representing the relative position relation of the vertex of the animation skin model relative to the main skeleton model;
and fusing skin weight, position information of the vertex of the animation skin model, skin deformation parameters corresponding to the target preset skeleton animation data, corrected skin weight and corrected motion parameters to obtain target skin deformation data.
In one embodiment, the fusion of the skinning weight, the position information of the vertex of the animation skin model, the skin deformation parameter corresponding to the target preset bone animation data, the corrected skinning weight and the corrected motion parameter to obtain the target skin deformation data includes:
fusing to obtain a skin correction term based on the corrected bone rotation matrix, the corrected bone translation vector, the position information of the vertex of the animation skin model and the corrected skin weight;
obtaining a skinning solution term based on the fusion of the skinning weight, the position information of the vertex of the animation skin model, and the skin deformation parameters corresponding to the target preset bone animation data;
and fusing the skinning solution term and the skin correction term to obtain the target skin deformation data.
An animation editing apparatus comprising:
the data acquisition module is used for acquiring current bone animation data, an animation skin model and preset bone animation data;
the target preset bone animation data determining module is used for acquiring, from the preset bone animation data, target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range, the bone data similarity being used for representing the degree of similarity between the current bone animation data and the preset bone animation data;
the skin deformation fusion weight generation module is used for obtaining skin deformation fusion weight according to the bone data similarity between the current bone animation data and the target preset bone animation data;
the skin deformation parameter acquisition module is used for acquiring a skin deformation parameter corresponding to the target preset bone animation data, and the skin deformation parameter is obtained by performing data compression on the skin deformation data corresponding to the target preset bone animation data;
the target skin deformation data generation module is used for obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertex of the animation skin model;
and the target animation generation module is used for fusing based on the target skin deformation data and the skin deformation fusion weight to obtain the target animation corresponding to the current skeleton animation data.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor when executing the computer program implementing the steps of:
acquiring current skeleton animation data, an animation skin model and preset skeleton animation data;
acquiring, from the preset bone animation data, target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range, wherein the bone data similarity is used for representing the degree of similarity between the current bone animation data and the preset bone animation data;
obtaining skin deformation fusion weight according to the bone data similarity of the current bone animation data and target preset bone animation data;
acquiring a skin deformation parameter corresponding to target preset bone animation data, wherein the skin deformation parameter is obtained by performing data compression on skin deformation data corresponding to the target preset bone animation data;
obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertex of the animation skin model;
and fusing based on the target skin deformation data and the skin deformation fusion weight to obtain the target animation corresponding to the current skeleton animation data.
A computer-readable storage medium storing a computer program which when executed by a processor performs the steps of:
acquiring current skeleton animation data, an animation skin model and preset skeleton animation data;
acquiring, from the preset bone animation data, target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range, wherein the bone data similarity is used for representing the degree of similarity between the current bone animation data and the preset bone animation data;
obtaining skin deformation fusion weight according to the bone data similarity of the current bone animation data and target preset bone animation data;
acquiring skin deformation parameters corresponding to target preset bone animation data, wherein the skin deformation parameters are obtained by performing data compression on the skin deformation data corresponding to the target preset bone animation data;
obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertex of the animation skin model;
and fusing based on the target skin deformation data and the skin deformation fusion weight to obtain the target animation corresponding to the current skeleton animation data.
According to the animation editing method and apparatus, the computer device and the storage medium, the current bone animation data, the animation skin model and the preset bone animation data are acquired; target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range is acquired from the preset bone animation data, the bone data similarity being used for representing the degree of similarity between the current bone animation data and the preset bone animation data; a skin deformation fusion weight is obtained according to the bone data similarity between the current bone animation data and the target preset bone animation data; target skin deformation data is obtained according to the skin deformation parameters corresponding to the target preset bone animation data and the position information of the vertices of the animation skin model, the skin deformation parameters being obtained by data compression of the skin deformation data corresponding to the target preset bone animation data; and the target animation corresponding to the current bone animation data is obtained by fusing the target skin deformation data with the skin deformation fusion weight. In this way, the preset bone animation data and the corresponding skin deformation data are stored in advance in compressed form, which effectively saves storage space. The corresponding target animation is obtained through similarity matching and weighted fusion, which reduces the interference of redundant data with low similarity. Pre-storing the iterated preset bone animation data and the corresponding skin deformation data effectively reduces the occasional iteration errors that arise when generating the target animation in real time with a skinning algorithm, improves the matching accuracy between the current bone animation data and the target preset bone animation data, and thus effectively improves the precision of the finally generated target animation.
Drawings
FIG. 1 is a diagram showing an application environment of the animation editing method in one embodiment;
FIG. 2 is a flow diagram that illustrates a method for animation editing in one embodiment;
FIG. 3 is a schematic flow chart illustrating the process of determining target pre-determined skeletal animation data according to an embodiment;
FIG. 4 is a schematic diagram of a process for generating skeletal data similarities, according to an embodiment;
FIG. 5 is a flowchart illustrating the generation of skin deformation fusion weights in one embodiment;
FIG. 6 is a schematic diagram illustrating a process of generating skin deformation parameters corresponding to target preset bone animation data according to an embodiment;
FIG. 7 is a schematic diagram of a process for generating target skin deformation data in one embodiment;
FIG. 8 is a flowchart illustrating the generation of target skin deformation data according to one embodiment;
FIG. 9 is a block diagram showing the construction of an animation editing apparatus according to an embodiment;
FIG. 10 is a diagram showing an internal structure of a computer device in one embodiment;
FIG. 11 is a diagram of an animated skin model in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad application.
The animation editing method provided by the embodiments of the application can be applied to the application environment shown in fig. 1. As shown in fig. 1, the computer device 102 acquires current bone animation data, an animation skin model and preset bone animation data, and performs similarity matching between the current bone animation data and the preset bone animation data in a database. Preset bone animation data whose similarity to the current bone animation data is within a preset threshold range is taken as target preset bone animation data. Target skin deformation data is obtained according to the skin deformation parameters corresponding to the target preset bone animation data and the position information of the vertices of the animation skin model, a skin deformation fusion weight is constructed according to the bone data similarity between the current bone animation data and the target preset bone animation data, and the skin deformation fusion weight and the target skin deformation data are fused by weighting to obtain the target animation, completing the animation editing. The computer device 102 may specifically include, but is not limited to, various personal computers, laptops, servers, smartphones, tablets, smart cameras, portable wearable devices, and the like.
In one embodiment, as shown in fig. 2, an animation editing method is provided, which is described by taking the method as an example applied to the computer device 102 in fig. 1, and comprises the following steps:
step S202, obtaining current skeleton animation data, an animation skin model and preset skeleton animation data.
The current bone animation data is the action basis of the skin animation of the character to be animated; that is, the computer device can generate the corresponding character skin animation according to the action represented by the current bone animation data. The animation skin model is the visual outer surface of the animated character, presenting characteristics such as shape, size and color; it can be constructed with finite element algorithms such as tetrahedral mesh, triangular mesh or hexahedral mesh algorithms. The skin model is built from finite element units, each of which contains vertices, i.e. the vertices of the animation skin model, and each vertex has position coordinate information, as shown in figure 11. The preset bone animation data is bone action data stored in advance in a computer database, such as waving, running, jumping, turning and bending actions, together with the skin deformation data corresponding to each bone action. For example, when the bone action is waving, the bones drive the skin on the outer surface of the character to follow the motion of the bones, and the position information of each vertex on the skin after the skin deforms from its initial state is taken as the skin deformation data corresponding to the preset bone animation data.
Step S204, acquiring, from the preset bone animation data, target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range, the bone data similarity being used for representing the degree of similarity between the current bone animation data and the preset bone animation data.
The bone data similarity is a metric used for comparing the similarity between the preset bone animation data and the current bone animation data. The similarity can be calculated with, for example, the Jaccard similarity coefficient, cosine similarity, the Pearson correlation coefficient or grey correlation analysis, or by calculating the difference values of the data in each dimension of the bone animation data and then weighting and fusing the difference values of all dimensions to obtain a similarity value.
Specifically, the computer device calculates difference values of current skeleton animation data and data of each dimension in preset skeleton animation data to obtain difference factors of each dimension, performs weighted fusion on the difference factors of each dimension to obtain skeleton data similarity, compares the skeleton data similarity of the current skeleton animation data and all preset skeleton data with a preset threshold range, and takes the preset skeleton animation data corresponding to the skeleton data similarity meeting the preset threshold range as target preset skeleton animation data.
And step S206, obtaining skin deformation fusion weight according to the bone data similarity of the current bone animation data and the target preset bone animation data.
Specifically, the computer device fuses all the bone data similarities between the target preset bone animation data and the current bone animation data to obtain a weight fusion factor, and then fuses the weight fusion factor with each bone data similarity to obtain a skin deformation fusion weight corresponding to each target preset bone animation data and the current bone animation data, wherein the skin deformation fusion weight is used for representing the similarity between the skin deformation data corresponding to the corresponding target preset bone animation data and the skin deformation data corresponding to the current bone animation data, and is used for fusing the skin deformation data corresponding to each target preset bone animation data.
And S208, acquiring a skin deformation parameter corresponding to the target preset bone animation data, wherein the skin deformation parameter is obtained by performing data compression on the skin deformation data corresponding to the target preset bone animation data.
The skin deformation parameters are used for representing the skin deformation corresponding to the target preset bone animation data. They are obtained in advance by the computer device by data compression of the skin deformation data corresponding to the target preset bone animation data. The data compression may convert the skin deformation data into the corresponding bone action data using a mathematical model, such as a neural network or a deep learning model. Alternatively, a reverse skinning algorithm can be established from the skinning algorithm and the skin deformation data: a residual is computed between the skin deformation data and the output of the skinning algorithm, and iteration continues until the residual reaches a preset range or the number of iterations reaches the maximum; the bone action data output when the iteration of the reverse skinning algorithm finishes is taken as the skin deformation parameters. Because the data quantity of the skin deformation parameters is smaller than that of the skin deformation data, storage space is effectively saved.
And step S210, obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertex of the animation skin model.
The skin deformation parameters comprise data such as bone rotation amount, bone translation amount and the like.
Specifically, the computer device acquires the position information of the vertices of the animation skin model and the bone model, and calculates the skinning weight of each vertex of the animation skin model relative to each bone in the whole bone model; the skinning weight is used for representing the relative position relationship of the vertex of the skin model with respect to each bone. The computer device applies to the vertex positions of the animation skin model the rotation represented by the bone rotation amount of the skin deformation parameters, then applies the translation represented by the bone translation amount of the skin deformation parameters, and adds an error correction term to finally obtain the target skin deformation data.
And step S212, fusing the target skin deformation data and the skin deformation fusion weight to obtain a target animation corresponding to the current skeleton animation data.
The target skin deformation data correspond to respective target preset bone animation data, and the skin deformation fusion weight also corresponds to the target preset bone animation data, so that the target skin deformation data and the skin deformation fusion weight are in one-to-one correspondence relationship.
Specifically, the computer device performs weighted fusion of each target skin deformation data with its corresponding skin deformation fusion weight to obtain the target animation corresponding to the current bone animation data.
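As an illustration of step S212, the following Python sketch blends the per-target skin deformation data with the skin deformation fusion weights; the array shapes, the NumPy usage and the defensive renormalisation of the weights are assumptions made for the example, not part of the patent.

```python
import numpy as np

def blend_target_deformations(target_deformations, fusion_weights):
    """Step S212 (sketch): weighted fusion of each target skin deformation
    (per-vertex positions) with its skin deformation fusion weight to give the
    skin of the target animation frame.
    target_deformations: (K, V, 3); fusion_weights: (K,)."""
    w = np.asarray(fusion_weights, dtype=float)
    w = w / w.sum()  # defensive renormalisation (assumption)
    return np.einsum('k,kvi->vi', w, np.asarray(target_deformations))
```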
According to the animation editing method, apparatus, computer device and storage medium, the current bone animation data, the animation skin model and the preset bone animation data are acquired; target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range is acquired from the preset bone animation data, the bone data similarity being used for representing the degree of similarity between the current bone animation data and the preset bone animation data; a skin deformation fusion weight is obtained according to the bone data similarity between the current bone animation data and the target preset bone animation data; target skin deformation data is obtained according to the skin deformation parameters corresponding to the target preset bone animation data and the position information of the vertices of the animation skin model, the skin deformation parameters being obtained by data compression of the skin deformation data corresponding to the target preset bone animation data; and the target animation corresponding to the current bone animation data is obtained by fusing the target skin deformation data with the skin deformation fusion weight. In this way, the preset bone animation data and the corresponding skin deformation data are stored in advance in compressed form, which effectively saves storage space. The corresponding target animation is obtained through similarity matching and weighted fusion, which reduces the interference of redundant data with low similarity. Pre-storing the iterated preset bone animation data and the corresponding skin deformation data effectively reduces the occasional iteration errors that arise when generating the target animation in real time with a skinning algorithm, improves the matching accuracy between the current bone animation data and the target preset bone animation data, and thus effectively improves the precision of the finally generated target animation.
In one embodiment, as shown in fig. 3, acquiring, from the preset bone animation data, target preset bone animation data whose bone data similarity to the current bone animation data is within a preset threshold range comprises:
Step S302, calculating the corresponding bone data similarity based on the bone pose data and bone speed data in the current bone animation data and the preset bone animation data, wherein the bone pose data comprises the bone displacement and bone rotation, and the bone speed data comprises the bone rotation speed and bone displacement speed.
The bone pose data are used for representing the motion characteristics of the bone, and the bone speed data are used for representing the motion speed of the bone.
Specifically, the computer device constructs a database of skin actions from the bone poses and the corresponding skin deformations, takes the spine and sternum of the animated character as the root bone, and then connects all the bones into a tree structure with the root bone as the root node. Each bone action can be represented by a group of bone animation data m, as shown in the following formula 1:
m = (p_r, \dot{p}_r, q_1, \dot{q}_1, \ldots, q_n, \dot{q}_n)    (1)
where p_r is the displacement of the root bone, \dot{p}_r is the displacement speed of the root bone, q_i is the rotation amount of the i-th bone, \dot{q}_i is the rotation speed of the i-th bone, and n is the number of bones.
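To make the layout of one frame of bone animation data m concrete, the following is a minimal Python sketch of a possible in-memory representation; the class and field names, the use of NumPy arrays and the quaternion shape are illustrative assumptions and not part of the patent.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class BoneAnimationFrame:
    """One frame of bone animation data m (cf. formula 1); names are illustrative."""
    root_displacement: np.ndarray           # p_r, shape (3,)
    root_displacement_velocity: np.ndarray  # dot p_r, shape (3,)
    bone_rotations: np.ndarray              # q_1..q_n, shape (n, 4), e.g. quaternions (assumed)
    bone_rotation_velocities: np.ndarray    # dot q_1..dot q_n, shape (n, 4)

    @property
    def num_bones(self) -> int:
        return self.bone_rotations.shape[0]
```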
The computer device calculates the differences between the bone pose data and bone speed data of the current bone animation data and those of the preset bone animation data, and performs weighted fusion of the pose difference and the speed difference to obtain the bone data similarity. The fusion weights can be configured flexibly according to the actual requirements: they can be set manually, or the most appropriate fusion weights can be determined by repeated iteration of a neural network model or a deep learning model.
Step S304, acquiring a preset threshold range of the bone data similarity.
And S306, obtaining target preset bone animation data according to the preset threshold range of the bone data similarity and the bone data similarity.
Specifically, the computer device compares the bone data similarity of the current bone animation data and preset bone animation data with a preset threshold range, and takes the preset bone animation data with the bone data similarity falling within the preset threshold range as target preset bone animation data.
For example, when determining the target preset bone animation data, the computer device determines the target preset bone animation data as shown in the following formula 2:
\hat{M}(m) = \{ \hat{m} \mid d(m, \hat{m}) < dist \}    (2)
where \hat{M}(m) is the set of target preset bone animation data corresponding to the current bone animation data m, dist is the radius of the preset threshold range, \hat{m} is preset bone animation data, and d(m, \hat{m}) is the bone data similarity between the current bone animation data and the preset bone animation data.
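As an illustration of formula 2, the following hedged Python sketch selects the target preset bone animation data within the preset threshold radius; `similarity_fn` stands for the bone data similarity of formula 3 and `dist` for the threshold radius, and the function and parameter names are assumptions made for illustration.

```python
def select_target_presets(current, presets, dist, similarity_fn):
    """Formula 2 (sketch): keep preset frames whose bone data similarity
    (a distance-like value, smaller means more similar) to `current` is
    within the preset threshold radius `dist`."""
    return [m_hat for m_hat in presets if similarity_fn(current, m_hat) < dist]
```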
In this embodiment, the computer device calculates the differences of the bone pose data and the bone speed data between the current bone animation data and the preset bone animation data, and then performs weighted fusion to obtain the bone data similarity. The values of the fusion weights can be adjusted flexibly, so that the bone pose data and the bone speed data can be given different emphasis according to different situations when calculating the bone data similarity. The calculated bone data similarity is then compared with the preset threshold range, and the target preset bone animation data whose bone data similarity is within the preset threshold range is determined, which improves the accuracy of screening the target preset bone animation data.
In one embodiment, as shown in fig. 4, calculating the corresponding bone data similarity based on the bone pose data and the bone velocity data in the current bone animation data and the preset bone animation data includes:
step S402, obtaining a displacement difference factor according to the difference value between the bone displacement of the current bone animation data and the bone displacement of the preset bone animation data.
The displacement difference factor is used for representing the difference between the bone displacement of the current bone animation data and the bone displacement of the preset bone animation data; the bone displacement is the displacement of a bone of the animated character in three-dimensional space, and the current bone animation data and the preset bone animation data are data sequences composed of data of the same dimensions.
Specifically, the computer device calculates a euclidean distance between the bone displacement amount of the current bone animation data and the bone displacement amount of the preset bone animation data, and uses the euclidean distance as a displacement difference factor, and may also use a calculated value capable of reflecting a difference value between two sets of data, such as a manhattan distance, a cosine distance, and the like, as the displacement difference factor, which is not specifically limited herein.
Step S404, obtaining a displacement speed difference factor according to the difference value between the bone displacement speed of the current bone animation data and the bone displacement speed of the preset bone animation data.
The displacement speed difference factor is used for representing the difference value between the bone displacement speed of the current bone animation data and the bone displacement speed of the preset bone animation data, and the bone displacement speed is used for representing the speed of the bones of the animation roles when the bones are displaced in the three-dimensional space.
Specifically, the computer device calculates a euclidean distance between the bone displacement speed of the current bone animation data and the bone displacement speed of the preset bone animation data, and uses the euclidean distance as a displacement speed difference factor, and may also use a calculated value such as a manhattan distance and a cosine distance that can reflect a difference value between two sets of data as a displacement speed difference factor, which is not specifically limited herein.
Step S406, obtaining a rotation amount difference factor according to a difference value between the bone rotation amount of the current bone animation data and the bone rotation amount of the preset bone animation data.
The rotation amount difference factor is used for representing the difference value between the bone rotation amount of the current bone animation data and the bone rotation amount of preset bone animation data, and the bone rotation amount is a numerical value used for measuring the rotation motion of bones of the animation character in a three-dimensional space.
Specifically, the computer device calculates a euclidean distance between a bone rotation amount of current bone animation data and a bone rotation amount of preset bone animation data, and uses the euclidean distance as a rotation amount difference factor, and may also use a calculation value capable of reflecting a difference value between two sets of data, such as a manhattan distance, a cosine distance, and the like, as the rotation amount difference factor, which is not specifically limited herein.
Step S408, a rotation speed difference factor is obtained according to the difference value between the bone rotation speed of the current bone animation data and the bone rotation speed of the preset bone animation data.
The rotation speed difference factor is used for representing the difference value between the bone rotation speed of the current bone animation data and the bone rotation speed of the preset bone animation data, and the bone rotation speed is used for representing the rotation speed of the bone of the animation character when the bone rotates in a three-dimensional space.
Specifically, the computer device calculates a euclidean distance between the bone rotation speed of the current bone animation data and the bone rotation speed of the preset bone animation data, and uses the euclidean distance as the rotation speed difference factor, and may also use a calculated value such as a manhattan distance and a cosine distance that can reflect a difference value between two sets of data as the rotation speed difference factor, which is not particularly limited herein.
Step S410, obtaining the bone data similarity based on the fusion of the displacement difference factor, the displacement speed difference factor, the rotation amount difference factor and the rotation speed difference factor.
Specifically, the computer device performs weighted fusion of the displacement difference factor, the displacement speed difference factor, the rotation amount difference factor and the rotation speed difference factor to obtain the bone data similarity. The values of the weight parameters can be adjusted flexibly, so that the bone pose data and the bone speed data can be given different emphasis according to different situations when calculating the bone data similarity. For example, when the bone rotation amount in the current bone animation data is larger than the values of the other dimensions, the bone rotation attribute of the current bone animation data is stronger than its other attributes, so the weight parameter corresponding to the rotation amount difference factor can be increased accordingly, so that the bone data similarity better reflects the motion attributes of the current bone animation data.
For example, the formula for calculating the similarity of bone data is shown in the following formula 3:
d(m, \hat{m}) = w_0 \, \| p_r - \hat{p}_r \| + w_1 \, \| \dot{p}_r - \hat{\dot{p}}_r \| + \sum_{i=1}^{n} \left( w_{2,i} \, \| q_i - \hat{q}_i \| + w_{3,i} \, \| \dot{q}_i - \hat{\dot{q}}_i \| \right)    (3)
where w_0, w_1, w_{2,i} and w_{3,i} are adjustable weight parameters, p_r is the displacement of the root bone in the current bone animation data, \hat{p}_r is the displacement of the root bone in the preset bone animation data, \dot{p}_r is the displacement speed of the root bone in the current bone animation data, \hat{\dot{p}}_r is the displacement speed of the root bone in the preset bone animation data, q_i is the rotation amount of the i-th bone in the current bone animation data, \hat{q}_i is the rotation amount of the i-th bone in the preset bone animation data, \dot{q}_i is the rotation speed of the i-th bone in the current bone animation data, \hat{\dot{q}}_i is the rotation speed of the i-th bone in the preset bone animation data, and n is the number of bones.
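A possible NumPy sketch of formula 3 follows, reusing the frame layout assumed in the earlier sketch; the per-factor Euclidean norms and the weight arguments mirror the description above, but the exact weighting scheme in the patent may differ.

```python
import numpy as np

def bone_data_similarity(m, m_hat, w0, w1, w2, w3):
    """Formula 3 (sketch): weighted sum of Euclidean differences of root
    displacement, root displacement speed, per-bone rotation and per-bone
    rotation speed. m and m_hat follow the BoneAnimationFrame layout above;
    w2 and w3 may be scalars or per-bone arrays."""
    d = w0 * np.linalg.norm(m.root_displacement - m_hat.root_displacement)
    d += w1 * np.linalg.norm(m.root_displacement_velocity - m_hat.root_displacement_velocity)
    d += np.sum(w2 * np.linalg.norm(m.bone_rotations - m_hat.bone_rotations, axis=1))
    d += np.sum(w3 * np.linalg.norm(m.bone_rotation_velocities - m_hat.bone_rotation_velocities, axis=1))
    return d
```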
In this embodiment, the computer device calculates the differences of the bone displacement, the bone displacement speed, the bone rotation amount and the bone rotation speed between the current bone animation data and the preset bone animation data, and then performs weighted fusion to obtain the bone data similarity. The values of the weights can be adjusted flexibly, so that the bone displacement, the bone displacement speed, the bone rotation amount and the bone rotation speed can be given different emphasis according to different situations when calculating the bone data similarity, which improves the accuracy of the bone data similarity calculation.
In one embodiment, as shown in fig. 5, the obtaining of the skin deformation fusion weight according to the bone data similarity between the current bone animation data and the target preset bone animation data includes:
step S502, acquiring the similarity of target skeleton data, wherein the similarity of the target skeleton data is the similarity of the preset skeleton animation data of the target and the current skeleton animation data.
And step S504, obtaining a similarity fusion factor based on the similarity fusion of the target bone data.
The similarity fusion factor is used for representing the sum of the similarity of the skeleton data between the current skeleton animation data and all the target preset skeleton animation data.
Specifically, the computer device performs weighted fusion on the inverse square of the similarity of the target bone data to obtain a similarity fusion factor.
For example, the computer device constructs the similarity fusion factor as shown in equation 4 below:
S(m) = \sum_{\hat{m}_i \in \hat{M}} \frac{1}{d(m, \hat{m}_i)^2}    (4)
where S(m) is the similarity fusion factor, m is the current bone animation data, \hat{m}_i is the i-th target preset bone animation data, \hat{M} is the set of all target preset bone animation data, and d(m, \hat{m}_i) is the bone data similarity between the current bone animation data and the i-th target preset bone animation data.
And S506, fusing the similarity and the similarity fusion factor based on the target bone data to obtain skin deformation fusion weight.
Specifically, the computer device constructs, for each target, the ratio of the inverse square of its target bone data similarity to the similarity fusion factor, and then uses this ratio as the skin deformation fusion weight corresponding to that target preset bone animation data.
For example, the skin deformation fusion weight can be generated as follows:
w_i = \frac{1}{d(m, \hat{m}_i)^2 \, S(m)}    (5)
where w_i is the i-th skin deformation fusion weight, d(m, \hat{m}_i) is the bone data similarity between the current bone animation data m and the i-th target preset bone animation data \hat{m}_i, and S(m) is the similarity fusion factor.
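The following Python sketch computes the skin deformation fusion weights from the per-target bone data similarities (distances), following the inverse-square reading of formulas 4 and 5; the epsilon guard for an exact match is an added assumption.

```python
import numpy as np

def skin_deformation_fusion_weights(distances, eps=1e-8):
    """Formulas 4-5 (sketch): the similarity fusion factor is the sum of
    inverse-squared distances, and each weight is its target's inverse-squared
    distance divided by that factor, so closer targets get larger weights."""
    d = np.asarray(distances, dtype=float)
    inv_sq = 1.0 / (d * d + eps)     # per-target inverse-squared similarity term
    fusion_factor = inv_sq.sum()     # formula 4
    return inv_sq / fusion_factor    # formula 5; the weights sum to 1
```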
In this embodiment, the computer device constructs the corresponding skin deformation fusion weight from each target bone data similarity, using the ratio of the inverse-squared target bone data similarity to the similarity fusion factor as the skin deformation fusion weight corresponding to each target preset bone animation data. In this way, the skin deformation fusion weight corresponding to a more similar target is larger, the skin deformation obtained by subsequently fusing with the skin deformation fusion weights is more accurate, and the correlation between the skin deformation fusion weight and the target bone data similarity is improved.
In an embodiment, as shown in fig. 6, before obtaining the skin deformation parameter corresponding to the target preset bone animation data, the method further includes:
step S602, obtaining the skeleton model of the animation character.
The animated character skeleton model is used for driving the animated character's skin model to make the corresponding actions and deformations.
And step S604, calculating the skinning weight of the vertex of the animation skin model relative to the animation character skeleton model, wherein the skinning weight is used for representing the relative position relationship of the vertex of the animation skin model relative to the animation character skeleton model.
Specifically, the computer device acquires the position information of the vertices of the animation skin model and the spatial position information of each bone in the animated character skeleton model, and automatically calculates the skinning weight of each vertex of the animation skin model relative to the animated character skeleton model using the Bounded Biharmonic Weights algorithm; the skinning weight is used for representing the relative position relationship of the vertex of the animation skin model with respect to the animated character skeleton model.
And step S606, fusing the skinning weight and the position information of the vertex of the animation skin model to obtain the animation skinning model.
The animation skinning model is a model for generating corresponding skin deformation data according to the skeleton animation data of the animation role.
Specifically, the computer device obtains a bone rotation matrix and a bone translation vector of each bone in the animated character bone model, the bone rotation matrix is used for representing the rotational motion of the bone, the bone translation vector is used for representing the translation distance of the bone in a three-dimensional space, and then the bone animation data corresponding to each bone, the corresponding skin weight and the position information of the vertex of the animated skin model are fused to obtain the corresponding vertex position information after deformation.
For example, the computer device constructs an animated skin model according to equation 6 below:
v' = \sum_{j=1}^{B} w_j \left( R_j \, v + t_j \right)    (6)
where v' is the deformed position of the vertex, B is the number of bones, w_j is the skinning weight of the vertex with respect to the j-th bone, R_j is the bone rotation matrix of the j-th bone, t_j is the bone translation vector of the j-th bone, and v is the position of the vertex of the animation skin model.
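Formula 6 is standard linear blend skinning; the following NumPy sketch evaluates it for all vertices at once. The array shapes and the vectorised einsum formulation are implementation assumptions, not the patent's literal code.

```python
import numpy as np

def skin_vertices(vertices, skin_weights, rotations, translations):
    """Formula 6 (sketch): each vertex is transformed by every bone's rotation
    and translation, and the results are blended with the per-vertex, per-bone
    skinning weights.
    vertices: (V, 3); skin_weights: (V, B); rotations: (B, 3, 3); translations: (B, 3)."""
    # transformed[b, v] = R_b @ v_v + t_b
    transformed = np.einsum('bij,vj->bvi', rotations, vertices) + translations[:, None, :]
    # blend over the bones with the skinning weights -> (V, 3)
    return np.einsum('vb,bvi->vi', skin_weights, transformed)
```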
Specifically, after the computer device obtains the skin deformation data corresponding to the target preset bone animation data, it calculates the difference between the corresponding skin deformation data and the animation skinning model obtained in step S606 and performs weighted fusion, where the fusion weights can be set flexibly according to the specific situation, to obtain a reverse skinning algorithm. The bone rotation matrices and bone translation vectors corresponding to the minimum value of the reverse skinning algorithm are obtained by minimising it, and these bone rotation matrices and bone translation vectors are taken as the skin deformation parameters.
For example, the computer device constructs a reverse skinning algorithm according to the following formula 7, and obtains a skin deformation parameter corresponding to the target preset bone animation data according to the reverse skinning algorithm:
\min_{R_j, \, t_j} \; \sum_{i=1}^{N} \lambda_i \, \Big\| y_i - \sum_{j=1}^{B} w_{ij} \left( R_j \, v + t_j \right) \Big\|^2    (7)
where y_i is the skin deformation data corresponding to the target preset bone animation data, N is the number of bone animation data sequences preset for the target, \lambda_i are the fusion weight parameters, B is the number of bones, w_{ij} is the skinning weight corresponding to the i-th bone animation data and the j-th bone, R_j is the bone rotation matrix of the j-th bone, t_j is the bone translation vector of the j-th bone, and v is the position of the vertex of the animation skin model. The minimum value of formula 7 is calculated to obtain the corresponding values of R_j and t_j, i.e. the skin deformation parameters.
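To make the reverse skinning objective of formula 7 concrete, the sketch below evaluates the residual that would be minimised over the bone rotations and translations; the actual solver (e.g. iterating until the residual or the iteration limit is reached, as described above) is not shown, and the per-frame fusion weights are assumed to be scalars.

```python
import numpy as np

def reverse_skinning_residual(deform_frames, frame_weights, vertices,
                              skin_weights, rotations, translations):
    """Formula 7 (sketch): weighted sum, over the stored deformation frames, of
    the squared error between the stored skin deformation data and the skinned
    vertices produced by candidate rotations/translations.
    deform_frames: (N, V, 3); frame_weights: (N,); vertices: (V, 3);
    skin_weights: (V, B); rotations: (B, 3, 3); translations: (B, 3)."""
    predicted = np.einsum('vb,bvi->vi', skin_weights,
                          np.einsum('bij,vj->bvi', rotations, vertices)
                          + translations[:, None, :])
    errors = np.sum((deform_frames - predicted[None, :, :]) ** 2, axis=(1, 2))
    return float(np.dot(frame_weights, errors))
```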
In this embodiment, the computer device obtains the animated character skeleton model, calculates the skinning weight of the vertex of the animation skin model relative to the animated character skeleton model, obtains the animation skinning model by fusing the skinning weight with the position information of the vertex of the animation skin model, and obtains the skin deformation parameters corresponding to the target preset bone animation data from the skin deformation data corresponding to the target preset bone animation data and the animation skinning model. By constructing a reverse skinning algorithm, the skin deformation data corresponding to the target preset bone animation data is converted into the corresponding skin deformation parameters, which greatly reduces the amount of action-library data to be stored and saves storage space.
In one embodiment, as shown in fig. 7, obtaining the target skin deformation data according to the skin deformation parameters and the position information of the vertices of the animated skin model includes:
step S702, obtaining an animation character skeleton model, wherein the animation character skeleton model comprises a main skeleton model and a correction skeleton model.
The main skeleton model is used for constructing an animation skinning model from the skin deformation parameters and the position information of the vertices of the animation skin model, and then generating, from that animation skinning model, the position information of the vertices of the skin model after the deformation represented by the skin deformation parameters has occurred. The correction skeleton model is used when the skin deformation data generated from the main skeleton model contains errors: a corrected animation skinning model is constructed from the correction skeleton model, and then the animation skinning model corresponding to the main skeleton and the corrected animation skinning model corresponding to the correction skeleton model are fused to obtain the final target skin deformation data.
Step S704, obtaining a corrected motion parameter of the corrected bone model, wherein the corrected motion parameter comprises a corrected bone rotation matrix and a corrected bone translation vector.
The corrected bone rotation matrix is used for representing the rotation of the bones in the correction bone model in three-dimensional space, and the corrected bone translation vector is used for representing the displacement of the bones in the correction bone model in three-dimensional space. The corrected motion parameters can be set from empirical values according to the specific situation, or the optimal corrected motion parameters can be iterated with intelligent algorithms such as neural networks and deep learning.
Step S706, calculating a correction skinning weight of the vertex of the animation skin model relative to the correction bone model, wherein the correction skinning weight is used for representing the relative position relationship of the vertex of the animation skin model relative to the correction bone model.
Specifically, the computer device automatically calculates the corrected skinning weight of the vertex of the animation skin model relative to the correction bone model according to the Bounded Biharmonic Weights algorithm; the corrected skinning weight is used for representing the relative position relationship of the vertex of the animation skin model with respect to the correction bone model.
In step S708, skinning weights of vertices of the animated skin model with respect to the main skeleton model are calculated, the skinning weights being used for representing relative positional relationships of the vertices of the animated skin model with respect to the main skeleton model.
Specifically, the computer device automatically calculates the skinning weights of the vertices of the animation skin model relative to the main bone model according to the Bounded Biharmonic Weights algorithm, the skinning weights being used for representing the relative position relationship of the vertices of the animation skin model with respect to the main bone model.
And step S710, fusing the skinning weight, the position information of the vertex of the animation skin model, the skin deformation parameters corresponding to the target preset bone animation data, the corrected skinning weight and the corrected motion parameters to obtain the target skin deformation data.
The position information of the vertexes of the animated skin model is the position information of each vertex of the animated skin model before the animated skin model is deformed.
Specifically, the computer device constructs the animation skinning model corresponding to the main skeleton from the skinning weight, the position information of the vertex of the animation skin model and the skin deformation parameters corresponding to the target preset bone animation data, constructs the corrected animation skinning model corresponding to the correction skeleton model from the corrected skinning weight, the position information of the vertex of the animation skin model and the corrected motion parameters, and fuses the animation skinning model and the corrected animation skinning model to obtain the target skin deformation data.
In this embodiment, the computer device obtains the corrected skeleton model, constructs a corrected animated skin model related to the corrected skeleton model according to the corrected skeleton model, and uses the entire corrected animated skin model as a correction term to correct the animated skin model corresponding to the main skeleton, thereby improving the accuracy of the target skin deformation data.
In one embodiment, as shown in fig. 8, fusing skin weight, position information of vertices of the animated skin model, skin deformation parameters corresponding to the target preset bone animation data, the corrected skin weight, and the corrected motion parameters to obtain target skin deformation data, including:
And step S802, fusing to obtain a skin correction term based on the corrected bone rotation matrix, the corrected bone translation vector, the position information of the vertices of the animation skin model and the corrected skinning weight.
The skin correction term is a calculation result of an animation skin model established according to the corrected skeleton model and is used for representing the position information of the vertex of the deformed skin model.
Specifically, the computer device constructs the skin correction term according to the following formula 8:

$$\Delta v \;=\; \sum_{j=1}^{m} \hat{w}_j \left( \hat{R}_j\, v + \hat{t}_j \right) \tag{8}$$

where $\Delta v$ is the skin deformation data correction term output by the skin correction term, $m$ is the number of bones in the corrected bone model, $\hat{w}_j$ is the corrected skinning weight, $\hat{R}_j$ is the corrected bone rotation matrix, $\hat{t}_j$ is the corrected bone translation vector, and $v$ is the position information of the vertex of the animation skin model.
And step S804, fusing the skinning weight, the position information of the vertices of the animation skin model and the skin deformation parameters corresponding to the target preset bone animation data to obtain a skin resolving term.
The skin resolving term is the calculation result of the animation skin model corresponding to the main skeleton model and is used for representing the position information of the vertices of the deformed skin model; the skin deformation parameters corresponding to the target preset skeleton animation data are a bone rotation matrix and a bone translation vector.
Specifically, the computer device constructs the skin resolving term according to the following formula 9:

$$v' \;=\; \sum_{i=1}^{n} w_i \left( R_i\, v + t_i \right) \tag{9}$$

where $v'$ is the skin resolving term corresponding to the main skeleton model, $n$ is the number of bones in the main bone model, $w_i$ is the skinning weight, $R_i$ is the bone rotation matrix corresponding to the main bone model, $t_i$ is the bone translation vector corresponding to the main bone model, and $v$ is the position information of the vertex of the animation skin model.
And step S806, fusing the skin resolving term and the skin correcting term to obtain target skin deformation data.
Specifically, the computer device obtains the target skin deformation data according to the following formula 10:

$$V \;=\; v' + \Delta v \tag{10}$$

where $V$ is the target skin deformation data, $v'$ is the skin resolving term and $\Delta v$ is the skin correction term.
In this embodiment, the computer device fuses the corrected bone rotation matrix, the corrected bone translation vector, the position information of the vertices of the animation skin model and the corrected skinning weight to obtain the skin correction term, fuses the skinning weight, the position information of the vertices of the animation skin model and the skin deformation parameters corresponding to the target preset bone animation data to obtain the skin resolving term, and finally fuses the skin resolving term and the skin correction term to obtain the target skin deformation data.
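Read together, formulas 8 to 10 amount to one linear-blend-skinning pass per skeleton followed by an addition. The sketch below follows that reading with NumPy; the function and array names are illustrative assumptions, and the per-bone rotation matrices and translation vectors are assumed to be already available.

```python
import numpy as np

def lbs_term(weights, rotations, translations, rest_vertices):
    """Linear-blend-skinning term: sum_i w_i * (R_i v + t_i) for every vertex.

    weights:       (V, B) skinning weights
    rotations:     (B, 3, 3) per-bone rotation matrices
    translations:  (B, 3) per-bone translation vectors
    rest_vertices: (V, 3) vertex positions before deformation
    returns:       (V, 3) deformed vertex positions
    """
    # Transform every vertex by every bone: result has shape (V, B, 3)
    per_bone = np.einsum('bij,vj->vbi', rotations, rest_vertices) + translations[None, :, :]
    # Blend the per-bone transforms with the skinning weights
    return np.einsum('vb,vbi->vi', weights, per_bone)

def target_skin_deformation(w_main, R_main, t_main, w_corr, R_corr, t_corr, rest_vertices):
    """Formulas 8-10 as read here: skin resolving term plus skin correction term."""
    resolving_term = lbs_term(w_main, R_main, t_main, rest_vertices)   # formula 9
    correction_term = lbs_term(w_corr, R_corr, t_corr, rest_vertices)  # formula 8
    return resolving_term + correction_term                            # formula 10
```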
The application also provides an application scene to which the above animation editing method is applied, namely a scene of three-dimensional animation editing. Specifically, the application of the animation editing method in this scene is as follows:
the computer device obtains different bone animation data and the skin deformation data corresponding to each piece of bone animation data, the skin deformation data being the positions of the skin vertices after skin deformation has occurred, and establishes a skin action database from these data. To save storage space, the skin deformation data are compressed into corresponding skin deformation parameters, which comprise a corresponding bone rotation matrix and a corresponding bone translation vector. The specific data compression process is as follows:
Assume that a point $v$ on the skin model is bound to $n$ bones, that its skinning weight with respect to the $i$-th bone is $w_i$, and that the rotation matrix and displacement vector of the $i$-th bone are $R_i$ and $t_i$; then the skin vertex position after the skinning calculation is given by the following formula 11:

$$v' \;=\; \sum_{i=1}^{n} w_i \left( R_i\, v + t_i \right) \tag{11}$$

where $v'$ is the position of the skin vertex after deformation, $n$ is the number of bones in the bone model, $w_i$ is the skinning weight, $R_i$ is the bone rotation matrix corresponding to the bone model, $t_i$ is the bone translation vector corresponding to the bone model, and $v$ is the position information of the vertex of the animation skin model.
A reverse skinning model is then established, as shown in the following formula 12:

$$\min_{R_j,\, t_j} \;\; \sum_{i=1}^{N} \alpha_i \left\| S_i - \sum_{j=1}^{m} w_{ij} \left( R_j\, v + t_j \right) \right\|^2 \tag{12}$$

where $S_i$ is the skin deformation data corresponding to the target preset bone animation data, $N$ is the number of target preset bone animation data sequences, $\alpha_i$ is the fusion weight parameter, $m$ is the number of bones, $w_{ij}$ is the skinning weight of the $j$-th bone for the $i$-th piece of bone animation data, $R_j$ is the bone rotation matrix of the $j$-th bone, $t_j$ is the bone translation vector of the $j$-th bone, and $v$ is the position information of the vertex of the animation skin model. Minimizing formula 12 yields the corresponding values of $R_j$ and $t_j$, i.e. the skin deformation parameters.

When the computer device solves the minimum of the reverse skinning model in formula 12, the resulting $R_j$ and $t_j$ are used as the skin deformation parameters corresponding to the skin deformation data, thereby completing the data compression.
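Minimizing formula 12 is what turns stored skin deformation data into compact per-bone skin deformation parameters. The following is a minimal sketch under simplifying assumptions: the skinning weights are known and fixed, a single pose is fitted, and each bone is given an unconstrained affine transform solved by linear least squares rather than a true rotation; all names are illustrative, not taken from the patent.

```python
import numpy as np

def fit_bone_affines(rest_vertices, target_vertices, weights):
    """Least-squares fit of one affine transform per bone (inverse-skinning sketch).

    Solves min over A_j of sum_v || target[v] - sum_j w[v, j] * A_j @ [v; 1] ||^2,
    a simplified, unconstrained version of the minimization in formula 12
    (single pose, fixed weights, no rotation constraint).
    rest_vertices:   (V, 3) vertices before deformation
    target_vertices: (V, 3) stored skin deformation data for this pose
    weights:         (V, B) skinning weights
    returns:         (B, 3, 4) per-bone affine matrices [R | t]
    """
    V, B = weights.shape
    homo = np.hstack([rest_vertices, np.ones((V, 1))])            # (V, 4) homogeneous coords
    # Design matrix: row v holds the blocks w[v, j] * [x y z 1] for every bone j
    X = (weights[:, :, None] * homo[:, None, :]).reshape(V, B * 4)
    # Solve all three output coordinates at once
    coeffs, *_ = np.linalg.lstsq(X, target_vertices, rcond=None)  # (B*4, 3)
    return coeffs.reshape(B, 4, 3).transpose(0, 2, 1)             # (B, 3, 4)
```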
The computer device obtains the current bone animation data and the preset bone animation data, calculates the difference between the current bone animation data and the preset bone animation data in each data dimension, performs weighted fusion of the differences to obtain the bone data similarity, compares the bone data similarity with a preset threshold range to determine the target preset bone animation data, and then generates the corresponding fusion weight from the target preset bone animation data. The computer device obtains the skin deformation parameters corresponding to the target preset bone animation data and obtains a bone model comprising a main bone model and a corrected bone model; it calculates the skinning weights of the skin model vertices relative to the main bone model and the corrected skinning weights of the skin model vertices relative to the corrected bone model, and obtains the corrected rotation matrix and the corrected translation vector of the corrected bone model. A main bone skinning model is established from the skinning weights, the position information of the skin model vertices and the skin deformation parameters, and a corrected bone skinning model is established from the corrected skinning weights, the position information of the skin model vertices, the corrected rotation matrix and the corrected translation vector. The main bone skinning model and the corrected bone skinning model are then fused to obtain the position information of the deformed skin vertices, and hence the skin deformation data corresponding to the current bone animation data, as shown in the following formula 13:
$$V \;=\; \sum_{i=1}^{n} w_i \left( R_i\, v + t_i \right) \;+\; \sum_{j=1}^{m} \hat{w}_j \left( \hat{R}_j\, v + \hat{t}_j \right) \tag{13}$$

where $V$ is the finally obtained skin deformation data corresponding to the current bone animation data, $n$ is the number of bones in the main bone model, $m$ is the number of bones in the corrected bone model, $w_i$ is the skinning weight of the skin model vertex relative to the main bone model, $\hat{w}_j$ is the corrected skinning weight of the skin model vertex relative to the corrected bone, $R_i$ and $t_i$ are respectively the bone rotation term and the bone translation term in the skin deformation parameters, $v$ is the position information of the skin vertex, and $\hat{R}_j$ and $\hat{t}_j$ are respectively the corrected rotation matrix and the corrected translation vector.
In this way the computer device constructs the compressed skin deformation data $(q,\, t,\, \hat{R},\, \hat{t})$, where $q$ is the quaternion form of the bone rotation matrix in the skin deformation parameters, $t$ is the bone translation vector in the skin deformation parameters, $\hat{R}$ is the representation of the bone rotation matrix of the bones in the corrected bone model, and $\hat{t}$ is the corrected bone translation vector of the bones in the corrected bone model.
After the computer device obtains the skin deformation data corresponding to the target preset bone animation data according to the above steps, it performs weighted fusion of the fusion weights corresponding to the target preset bone animation data and the corresponding skin deformation data to obtain the final target skin deformation data, and further obtains the corresponding target animation.
Let the skin deformation corresponding to the target preset skeleton animation data matched in the database to the input current skeleton animation data be $S_k$, where $K$ is the number of target preset skeleton animation data sequences corresponding to the current skeleton animation data, and let the fusion weight between the current skeleton animation data and the $k$-th target preset skeleton animation data sequence be $\beta_k$; then the target skin deformation $S$ output by the database is given by the following formula 14:

$$S \;=\; \bigoplus_{k=1}^{K} \beta_k\, S_k \tag{14}$$

where the symbol $\oplus$ denotes the superposition operation in skin space; the skin vertices may be superposed one by one by simple addition.
And the computer equipment synthesizes a target three-dimensional animation according to the target skin deformation S to finish animation editing.
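Because the superposition operator of formula 14 works vertex by vertex, the weighted fusion reduces to a weighted sum of vertex arrays. A minimal sketch under that reading (the function and array names are illustrative only):

```python
import numpy as np

def fuse_skin_deformations(deformations, fusion_weights):
    """Formula 14 as a per-vertex weighted sum: S = sum_k beta_k * S_k.

    deformations:   (K, V, 3) skin deformation data of the K matched target
                    preset bone animation sequences
    fusion_weights: (K,) skin deformation fusion weights beta_k
    returns:        (V, 3) target skin deformation used to synthesize the animation
    """
    deformations = np.asarray(deformations, dtype=float)
    fusion_weights = np.asarray(fusion_weights, dtype=float)
    # Weighted, vertex-by-vertex superposition of the K deformation fields
    return np.einsum('k,kvi->vi', fusion_weights, deformations)
```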
The animation editing method comprises the steps of obtaining current skeleton animation data, an animation skin model and preset skeleton animation data; acquiring, from the preset skeleton animation data, target preset skeleton animation data whose skeleton data similarity with the current skeleton animation data is within a preset threshold range, the skeleton data similarity being used for representing the degree of similarity between the current skeleton animation data and the preset skeleton animation data; obtaining a skin deformation fusion weight according to the skeleton data similarity between the current skeleton animation data and the target preset skeleton animation data; obtaining target skin deformation data according to the skin deformation parameters corresponding to the target preset skeleton animation data and the position information of the vertices of the animation skin model, the skin deformation parameters being obtained by compressing the skin deformation data corresponding to the target preset skeleton animation data; and fusing based on the target skin deformation data and the skin deformation fusion weight to obtain the target animation corresponding to the current skeleton animation data. Storing the preset skeleton animation data and the corresponding skin deformation data in advance in compressed form effectively saves storage space. Obtaining the target animation through similarity matching and weighted fusion reduces the interference of redundant data with low similarity. Pre-storing the already iterated preset skeleton animation data and the corresponding skin deformation data effectively reduces the occasional iteration errors that arise when the target animation is generated in real time by a skinning algorithm, improves the matching accuracy between the current skeleton animation data and the target preset skeleton animation data, and thus effectively improves the accuracy of the finally generated target animation.
It should be understood that, although the steps in the flowcharts of the embodiments described above are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict ordering restriction on these steps, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided an animation editing apparatus, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, and specifically includes: a data obtaining module 902, a target preset skeleton animation data determining module 904, a skin deformation fusion weight generating module 906, a skin deformation parameter obtaining module 908, a target skin deformation data generating module 910, and a target animation generating module 912, wherein:
a data obtaining module 902, configured to obtain current bone animation data, an animation skin model, and preset bone animation data;
a target preset bone animation data determining module 904, configured to obtain, from preset bone animation data, target preset bone animation data whose bone data similarity with the current bone animation data is within a preset threshold range, where the bone data similarity is used to represent a degree of similarity between the current bone animation data and the preset bone animation data;
the skin deformation fusion weight generation module 906 is configured to obtain a skin deformation fusion weight according to a bone data similarity between the current bone animation data and target preset bone animation data;
a skin deformation parameter obtaining module 908, configured to obtain a skin deformation parameter corresponding to the target preset bone animation data, where the skin deformation parameter is obtained by performing data compression on skin deformation data corresponding to the target preset bone animation data;
the target skin deformation data generating module 910 is configured to obtain target skin deformation data according to the skin deformation parameter and the position information of the vertex of the animation skin model;
and a target animation generation module 912, configured to perform fusion based on the target skin deformation data and the skin deformation fusion weight to obtain a target animation corresponding to the current skeleton animation data.
The animation editing device obtains current skeleton animation data, an animation skin model and preset skeleton animation data; acquires, from the preset skeleton animation data, target preset skeleton animation data whose skeleton data similarity with the current skeleton animation data is within a preset threshold range, the skeleton data similarity being used for representing the degree of similarity between the current skeleton animation data and the preset skeleton animation data; obtains a skin deformation fusion weight according to the skeleton data similarity between the current skeleton animation data and the target preset skeleton animation data; obtains target skin deformation data according to the skin deformation parameters corresponding to the target preset skeleton animation data and the position information of the vertices of the animation skin model, the skin deformation parameters being obtained by compressing the skin deformation data corresponding to the target preset skeleton animation data; and fuses, based on the target skin deformation data and the skin deformation fusion weight, to obtain the target animation corresponding to the current skeleton animation data. Storing the preset skeleton animation data and the corresponding skin deformation data in advance in compressed form effectively saves storage space. Obtaining the target animation through similarity matching and weighted fusion reduces the interference of redundant data with low similarity. Pre-storing the already iterated preset skeleton animation data and the corresponding skin deformation data effectively reduces the occasional iteration errors that arise when the target animation is generated in real time by a skinning algorithm, improves the matching accuracy between the current skeleton animation data and the target preset skeleton animation data, and thus effectively improves the accuracy of the finally generated target animation.
In one embodiment, the target preset bone animation data determination module 904 is further configured to: calculate a corresponding bone data similarity based on the bone pose data and the bone velocity data in the current bone animation data and on the preset bone animation data, wherein the bone pose data comprise a bone displacement amount and a bone rotation amount, and the bone velocity data comprise a bone rotation speed and a bone displacement speed; acquire a preset threshold range of the bone data similarity; and obtain the target preset bone animation data according to the preset threshold range of the bone data similarity and the bone data similarity.
In one embodiment, the target preset bone animation data determining module 904 is further configured to obtain a displacement difference factor according to a difference value between a bone displacement of the current bone animation data and a bone displacement of the preset bone animation data; obtaining a displacement speed difference factor according to a difference value between the bone displacement speed of the current bone animation data and the bone displacement speed of preset bone animation data; obtaining a rotation amount difference factor according to a difference value between the bone rotation amount of the current bone animation data and the bone rotation amount of preset bone animation data; obtaining a rotation speed difference factor according to a difference value between the bone rotation speed of the current bone animation data and the bone rotation speed of preset bone animation data; and obtaining the bone data similarity based on the fusion of the displacement difference factor, the displacement speed difference factor, the rotation amount difference factor and the rotation speed difference factor.
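The text leaves the exact fusion of the four difference factors open. One plausible, purely hypothetical reading is a weighted sum of the per-dimension differences mapped through a decaying function so that larger differences give a lower similarity; the factor weights and the exponential mapping below are assumptions, not the patent's formula.

```python
import numpy as np

def bone_data_similarity(current, preset, factor_weights=(0.25, 0.25, 0.25, 0.25)):
    """Fuse displacement, displacement-speed, rotation and rotation-speed
    difference factors into a single similarity score in (0, 1].

    current / preset: dicts with keys 'displacement', 'displacement_speed',
                      'rotation' and 'rotation_speed', each a NumPy array.
    The equal factor weights and the exponential mapping are illustrative
    assumptions; only the four difference factors come from the text.
    """
    keys = ('displacement', 'displacement_speed', 'rotation', 'rotation_speed')
    # One difference factor per dimension of the bone data
    factors = [np.linalg.norm(np.asarray(current[k]) - np.asarray(preset[k])) for k in keys]
    fused = sum(w * f for w, f in zip(factor_weights, factors))  # weighted fusion
    return float(np.exp(-fused))  # larger fused difference -> lower similarity
```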
In one embodiment, the skin deformation fusion weight generation module 906 is further configured to obtain a target bone data similarity, where the target bone data similarity is a bone data similarity between the target preset bone animation data and the current bone animation data; obtaining a similarity fusion factor based on the similarity fusion of the target bone data; and fusing the similarity of the target bone data with the similarity fusion factor to obtain the skin deformation fusion weight.
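One simple way to realize a similarity fusion factor and the resulting skin deformation fusion weights is to normalize the target bone data similarities so that the weights sum to one; this is an assumed reading for illustration, not the patent's exact construction.

```python
import numpy as np

def skin_deformation_fusion_weights(target_similarities, eps=1e-12):
    """Turn target bone data similarities into fusion weights that sum to 1.

    target_similarities: (K,) similarities between the current bone animation
                         data and each target preset bone animation sequence.
    Using the sum of the similarities as the fusion factor is an assumption
    made for this sketch.
    """
    s = np.asarray(target_similarities, dtype=float)
    fusion_factor = s.sum() + eps       # similarity fusion factor
    return s / fusion_factor            # per-sequence skin deformation fusion weights
```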
In one embodiment, the skin deformation parameter obtaining module 908 is further configured to: obtain an animated character skeleton model; calculate the skinning weights of the vertices of the animation skin model relative to the animated character skeleton model, wherein the skinning weights are used for representing the relative positional relationship of the vertices of the animation skin model with respect to the animated character skeleton model; fuse the skinning weights and the position information of the vertices of the animation skin model to obtain an animation skinning model; and obtain the skin deformation parameters corresponding to the target preset bone animation data according to the skin deformation data corresponding to the target preset bone animation data and the animation skinning model.
In one embodiment, the skin deformation parameter obtaining module 908 is further configured to obtain an animated character skeleton model, wherein the animated character skeleton model includes a main skeleton model and a modified skeleton model; acquiring corrected motion parameters of a corrected bone model, wherein the corrected motion parameters comprise a corrected bone rotation matrix and a corrected bone translation vector; calculating a correction skinning weight of the vertex of the animation skin model relative to the correction bone model, wherein the correction skinning weight is used for representing the relative position relationship of the vertex of the animation skin model relative to the correction bone model; calculating the skinning weight of the vertex of the animation skin model relative to the main skeleton model, wherein the skinning weight is used for representing the relative position relation of the vertex of the animation skin model relative to the main skeleton model; and fusing skin weight, position information of the vertex of the animation skin model, skin deformation parameters corresponding to the target preset skeleton animation data, corrected skin weight and corrected motion parameters to obtain target skin deformation data.
In one embodiment, the skin deformation parameter obtaining module 908 is further configured to: obtain a skin correction term by fusing the corrected bone rotation matrix, the corrected bone translation vector, the position information of the vertices of the animation skin model and the corrected skinning weight; obtain a skin resolving term by fusing the skinning weight, the position information of the vertices of the animation skin model and the skin deformation parameters corresponding to the target preset bone animation data; and fuse the skin resolving term and the skin correction term to obtain the target skin deformation data.
For the specific definition of the animation editing apparatus, reference may be made to the above definition of the animation editing method, which is not described herein again. The modules in the animation editing device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an animation editing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of the present disclosure.
The above-mentioned embodiments only express several embodiments of the present application, and their description is specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.
Claims (10)
1. A method for animation editing, the method comprising:
acquiring current skeleton animation data, an animation skin model and preset skeleton animation data;
acquiring target preset bone animation data with the bone data similarity of the current bone animation data within a preset threshold range from the preset bone animation data, wherein the bone data similarity is used for representing the similarity of the current bone animation data and the preset bone animation data;
obtaining skin deformation fusion weight according to the bone data similarity of the current bone animation data and the target preset bone animation data;
obtaining an animation role skeleton model;
calculating skinning weights of the vertexes of the animated skin model relative to the animated character skeleton model, wherein the skinning weights are used for representing the relative position relation of the vertexes of the animated skin model relative to the animated character skeleton model;
fusing the skin weight and the position information of the vertex of the animation skin model to obtain an animation skinning model;
obtaining a skin deformation parameter corresponding to the target preset bone animation data according to the skin deformation data corresponding to the target preset bone animation data and the animation skinning model;
obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertex of the animation skin model;
and fusing based on the target skin deformation data and the skin deformation fusion weight to obtain the target animation corresponding to the current skeleton animation data.
2. The method according to claim 1, wherein the obtaining target preset bone animation data with a bone data similarity with the current bone animation data within a preset threshold range from the preset bone animation data comprises:
calculating to obtain corresponding bone data similarity based on bone pose data, bone speed data and the preset bone animation data in the current bone animation data, wherein the bone pose data comprise bone displacement and bone rotation, and the bone speed data comprise bone rotation speed and bone displacement speed;
acquiring a preset threshold range of the bone data similarity;
and obtaining target preset bone animation data according to the preset threshold range of the bone data similarity and the bone data similarity.
3. The method of claim 2, wherein the calculating a corresponding bone data similarity based on the bone pose data and the bone velocity data in the current bone animation data and the preset bone animation data comprises:
obtaining a displacement difference factor according to a difference value between the bone displacement of the current bone animation data and the bone displacement of the preset bone animation data;
obtaining a displacement speed difference factor according to a difference value between the bone displacement speed of the current bone animation data and the bone displacement speed of the preset bone animation data;
obtaining a rotation amount difference factor according to a difference value between the bone rotation amount of the current bone animation data and the bone rotation amount of the preset bone animation data;
obtaining a rotation speed difference factor according to the difference value between the bone rotation speed of the current bone animation data and the bone rotation speed of the preset bone animation data;
and obtaining the bone data similarity based on the fusion of the displacement difference factor, the displacement speed difference factor, the rotation amount difference factor and the rotation speed difference factor.
4. The method according to claim 1, wherein the obtaining a skin deformation fusion weight according to the bone data similarity between the current bone animation data and the target preset bone animation data comprises:
acquiring target bone data similarity, wherein the target bone data similarity is the bone data similarity of the target preset bone animation data and the current bone animation data;
obtaining a similarity fusion factor based on the similarity fusion of the target bone data;
and fusing the similarity of the target bone data with the similarity fusion factor to obtain the skin deformation fusion weight.
5. The method according to claim 1, wherein obtaining the skin deformation parameter corresponding to the target preset bone animation data according to the skin deformation data corresponding to the target preset bone animation data and the animation skinning model comprises:
performing difference calculation on the corresponding skin deformation data and the animation skinning model, and then weighting and fusing to obtain a reverse skinning algorithm expression;
calculating a bone rotation matrix and a bone translation vector corresponding to the minimum value of the reverse skinning algorithm expression;
and taking the corresponding bone rotation matrix and the corresponding bone translation vector as skin deformation parameters.
6. The method of claim 1, wherein obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertices of the animated skin model comprises:
acquiring an animation role skeleton model, wherein the animation role skeleton model comprises a main skeleton model and a correction skeleton model;
acquiring corrected motion parameters of the corrected bone model, wherein the corrected motion parameters comprise a corrected bone rotation matrix and a corrected bone translation vector;
calculating correction skinning weights of the vertexes of the animation skin model relative to the correction bone model, wherein the correction skinning weights are used for representing the relative position relation of the vertexes of the animation skin model relative to the correction bone model;
calculating skinning weights of the vertexes of the animated skin model relative to the main skeleton model, wherein the skinning weights are used for representing the relative position relation of the vertexes of the animated skin model relative to the main skeleton model;
and fusing the skin weight, the position information of the vertex of the animation skin model, the skin deformation parameter corresponding to the target preset skeleton animation data, the corrected skin weight and the corrected motion parameter to obtain target skin deformation data.
7. The method according to claim 6, wherein the fusing the skin weight, the position information of the vertex of the animated skin model, the skin deformation parameter corresponding to the target preset bone animation data, the corrected skin weight and the corrected motion parameter to obtain the target skin deformation data comprises:
fusing to obtain a skin correction term based on the corrected bone rotation matrix, the corrected bone translation vector, the position information of the vertex of the animation skin model and the corrected skin weight;
obtaining a skin resolving term by fusing the skin weight, the position information of the vertex of the animation skin model and the skin deformation parameters corresponding to the target preset skeleton animation data;
and fusing the skin resolving term and the skin correction term to obtain target skin deformation data.
8. An animation editing apparatus, characterized in that the apparatus comprises:
the data acquisition module is used for acquiring current bone animation data, an animation skin model and preset bone animation data;
the target preset bone animation data determining module is used for acquiring target preset bone animation data, the bone data similarity of which with the current bone animation data is within a preset threshold range, from the preset bone animation data, and the bone data similarity is used for representing the similarity degree of the current bone animation data with the preset bone animation data;
the skin deformation fusion weight generation module is used for obtaining skin deformation fusion weight according to the bone data similarity of the current bone animation data and the target preset bone animation data;
the skin deformation parameter acquisition module is used for acquiring an animation role skeleton model; calculating skinning weights of the vertexes of the animated skin model relative to the animated character skeleton model, wherein the skinning weights are used for representing the relative position relation of the vertexes of the animated skin model relative to the animated character skeleton model; fusing the skin weight and the position information of the vertex of the animation skin model to obtain an animation skinning model; obtaining a skin deformation parameter corresponding to the target preset bone animation data according to the skin deformation data corresponding to the target preset bone animation data and the animation skinning model;
the target skin deformation data generation module is used for obtaining target skin deformation data according to the skin deformation parameters and the position information of the vertex of the animation skin model;
and the target animation generation module is used for fusing based on the target skin deformation data and the skin deformation fusion weight to obtain a target animation corresponding to the current skeleton animation data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210984049.XA CN115049767B (en) | 2022-08-17 | 2022-08-17 | Animation editing method and device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210984049.XA CN115049767B (en) | 2022-08-17 | 2022-08-17 | Animation editing method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115049767A CN115049767A (en) | 2022-09-13 |
CN115049767B true CN115049767B (en) | 2022-11-04 |
Family
ID=83168415
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210984049.XA Active CN115049767B (en) | 2022-08-17 | 2022-08-17 | Animation editing method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115049767B (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10262447B2 (en) * | 2016-09-30 | 2019-04-16 | Disney Enterprises, Inc. | Systems and methods for virtual entity animation |
CN112598773A (en) * | 2020-12-31 | 2021-04-02 | 珠海金山网络游戏科技有限公司 | Method and device for realizing skeleton skin animation |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6487083B1 (en) * | 2018-03-07 | 2019-03-20 | 株式会社スクウェア・エニックス | Skinning decomposition acceleration method and skinning decomposition acceleration program considering locality of weight map |
CN111383309A (en) * | 2020-03-06 | 2020-07-07 | 腾讯科技(深圳)有限公司 | Skeleton animation driving method, device and storage medium |
CN112354186A (en) * | 2020-11-10 | 2021-02-12 | 网易(杭州)网络有限公司 | Game animation model control method, device, electronic equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
Bian, Shaojun et al., "Automatic Generation of Dynamic Skin Deformation for Animated Characters", Symmetry, vol. 10, no. 4, 31 March 2018, pp. 1-15 *
Xia Kaijian et al., "Improved bone skinning algorithm for simulating skin deformation" (in Chinese), Computer Applications and Software, vol. 26, no. 12, 31 December 2009, pp. 174-176 *
Also Published As
Publication number | Publication date |
---|---|
CN115049767A (en) | 2022-09-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CP02 | Change in the address of a patent holder | |
Address after: C2C, Building C2, TCL science park, No. 1001, Zhongshan Garden Road, Shuguang Community, Xili Street, Nanshan District, Shenzhen, Guangdong 518051 Patentee after: Shenzhen Zesen Software Technology Co.,Ltd. Address before: 518051 5th floor, building 10a, Shenzhen Bay Science Park, Gaoxin South nine road, Nanshan District, Shenzhen City, Guangdong Province Patentee before: Shenzhen Zesen Software Technology Co.,Ltd. |