CN112509098A - Animation image generation method and device and electronic equipment - Google Patents

Animation image generation method and device and electronic equipment

Info

Publication number
CN112509098A
Authority
CN
China
Prior art keywords
node
pose
target node
vertex
position data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011372618.2A
Other languages
Chinese (zh)
Other versions
CN112509098B (en)
Inventor
彭昊天
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011372618.2A
Publication of CN112509098A
Application granted
Publication of CN112509098B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005 Tree description, e.g. octree, quadtree
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an animation image generation method and device and an electronic device, and relates to artificial intelligence fields such as computer vision, deep learning and augmented reality technologies. The specific implementation scheme is as follows: acquiring first position data of a first vertex of a patch model of a target object, and acquiring second position data of a second vertex of a mesh skin model in a skin skeleton model that has the same topology as the patch model, as well as first local poses of M nodes of the skeleton model in the skin skeleton model, wherein M is a positive integer greater than 1; determining, for each node in order from the root node downward through the tree-like hierarchical structure of the M nodes, a second local pose of the node based on the first position data, the second position data and the first local pose of the node; and adjusting the poses of the M nodes based on their second local poses to generate an animated image of the target object. The technology of the application solves the problem that generating an animated image is difficult, and simplifies the generation of animated images.

Description

Animation image generation method and device and electronic equipment
Technical Field
The application relates to the field of artificial intelligence, in particular to the technical field of computer vision, deep learning and augmented reality, and specifically relates to an animation image generation method and device and electronic equipment.
Background
The existing animation image generation technology is generally implemented based on character-expression BlendShapes: each final model is synthesized as a linear weighting of a plurality of BlendShape coefficients with their corresponding BlendShape models. For light weight, a skin skeleton model can instead be used to generate the animated image.
In the related art, generating an animated image based on a skinned skeleton model generally includes manually adjusting a skeleton driving coefficient of the skinned skeleton model to realize skeleton deformation, so as to generate an animated image of a target shape.
Disclosure of Invention
The disclosure provides an animation image generation method and device and electronic equipment.
According to a first aspect of the present disclosure, there is provided an animated character generating method including:
acquiring first position data of a first vertex of a patch model of a target object, and acquiring second position data of a second vertex of a mesh skin model in a skin skeleton model that has the same topology as the patch model, as well as first local poses of M nodes of the skeleton model in the skin skeleton model, wherein M is a positive integer greater than 1;
determining, for each node in order from the root node downward through the tree-shaped hierarchical structure of the M nodes, a second local pose of the node based on the first position data, the second position data and the first local pose of the node;
adjusting the poses of the M nodes based on the second local poses of the M nodes to generate an animated image of the target object.
According to a second aspect of the present disclosure, there is provided an animated character generating apparatus including:
the first acquisition module is used for acquiring first position data of a first vertex of a patch model of a target object;
the second acquisition module is used for acquiring second position data of a second vertex of a grid skin model in a skin skeleton model which has the same topology as the patch model and first local poses of M nodes of the skeleton model in the skin skeleton model, wherein M is a positive integer greater than 1;
a determining module, configured to determine, for each node in order from the root node downward through the tree-like hierarchical structure of the M nodes, a second local pose of the node based on the first position data, the second position data, and the first local pose of the node;
an adjustment module to adjust the poses of the M nodes based on the second local poses of the M nodes to generate an animated image of the target object.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform any one of the methods of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product for performing any one of the methods of the first aspect when the computer program product is run on an electronic device.
According to the technology of the application, the problem that the difficulty of generating the animation image is high in the animation image generation technology is solved, and the difficulty of generating the animation image is simplified.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic flow chart diagram of an animated character generation method according to a first embodiment of the present application;
FIG. 2 is a schematic illustration of a display of a skinned bone model;
FIG. 3 is a schematic representation of a tree hierarchy of nodes in a skeletal model;
FIG. 4 is a schematic view of a hierarchical relationship of nodes;
FIG. 5 is a schematic illustration of the effect of nodes at different levels on the extent of the skin;
FIG. 6 is a schematic view of a traversal of various nodes;
FIG. 7 is a schematic diagram of the change of the pose change control vertex position of the node;
FIG. 8 is a schematic structural diagram of an animated character generating apparatus according to a second embodiment of the present application;
fig. 9 is a block diagram of an electronic device for implementing the animated character generating method according to the embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
First embodiment
As shown in fig. 1, the present application provides an animated character generating method, comprising the steps of:
step S101: the method comprises the steps of obtaining first position data of a first vertex of a patch model of a target object, obtaining second position data of a second vertex of a grid skin model in a skin skeleton model which has the same topology with the patch model, and obtaining first local poses of M nodes of the skeleton model in the skin skeleton model, wherein M is a positive integer larger than 1.
In this embodiment, the animation image generation method relates to artificial intelligence technologies such as computer vision, deep learning and augmented reality, and can be applied to scenes in which personalized three-dimensional animated images are generated from images. The method may be executed by an electronic device, which may be a server or a terminal, and is not specifically limited herein.
The target object may be an image, the image may include image data of the target object, and the target object may be a cartoon image such as a cartoon character or a cartoon animal, or may be a non-cartoon image, which is not limited herein.
The patch model of the target object is a model obtained by modeling the target object based on an image that includes image data of the target object; the patch model specifically models the appearance shape of the target object. It is usually represented by connected polygons, for example connected triangles, in which case it may be referred to as a three-dimensional patch model; each vertex of the polygons is a first vertex of the patch model.
The skinned bone model may be a pre-stored object model that includes a bone model and a mesh skinned model, as shown in fig. 2.
The skeleton model refers to a skeleton structure formed by connecting "bones" in an object model according to a certain hierarchy, and the hierarchy among the bones describes the structure of the object, as shown in fig. 3, the skeleton model is formed by a tree-shaped hierarchical structure of a plurality of nodes. And the skeletal model is not visible when the object model is presented to the user after modeling is complete. As shown in fig. 2, the nodes of the bone model in the skinned bone model are not visible.
The mesh skinning model refers to the skin of an object, is a deformable mesh which changes under the influence of bones, and is usually represented by connected polygons, and each vertex of the skinning in the mesh skinning model is attached to a bone in the bone model, so that each node of the bone model is covered by a skinning area formed by the polygons. As shown in fig. 2, the model represented by a plurality of connected triangles in the skinned bone model is a mesh skinning model, and each vertex of the triangle is a second vertex of the skinning bone model.
The vertex in the mesh skin model is controlled by the node in the skeleton model, and the position of the vertex can be changed, so that the mesh skin model can be deformed, the whole appearance is deformed, and the animation image of the target object is generated.
A vertex in the mesh skin model is controlled by one or more nodes according to skinning weights, and the skinning weights of the nodes controlling a single vertex sum to 1; that is, the position transformation of a vertex is the combined result of its controlling nodes. The appearance of the mesh skin model is deformed by controlling the transformation of the vertex positions through the nodes.
Also, one node may control a plurality of vertices. When a node is transformed, that is, when its pose is adjusted, the transformation acts on each vertex within the node's skinning range according to the skinning weights.
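As a rough illustration of the skinning weights described above, the following sketch (hypothetical names and transforms, not taken from the patent) blends the positions produced by each controlling node, with the weights summing to 1:

```python
# Illustrative linear-blend-skinning sketch: one vertex, several controlling
# nodes, skinning weights that must sum to 1.

def blend_vertex(vertex, node_transforms, weights):
    """Blend the vertex positions produced by each controlling node.

    vertex:          (x, y, z) rest position of the vertex
    node_transforms: functions mapping a point to its position under that
                     node's current pose (hypothetical stand-ins for poses)
    weights:         skinning weights of the controlling nodes; sum to 1
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "skin weights must sum to 1"
    blended = [0.0, 0.0, 0.0]
    for transform, w in zip(node_transforms, weights):
        moved = transform(vertex)
        for axis in range(3):
            blended[axis] += w * moved[axis]
    return tuple(blended)

# Two hypothetical nodes: one translates the vertex by +2 on x, one holds it.
move_x = lambda p: (p[0] + 2.0, p[1], p[2])
stay = lambda p: p
print(blend_vertex((1.0, 0.0, 0.0), [move_x, stay], [0.5, 0.5]))
# (2.0, 0.0, 0.0)
```

A vertex half-controlled by each node ends up halfway between the two transformed positions, which is the "combined action of the control nodes" described above.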
That the patch model and the skinned bone model have the same topology means that the two models have the same number of vertices, that the relative positions of the vertices of the skinned bone model are consistent with the relative positions of the vertices of the patch model, and that the connection order of the vertices is likewise consistent between the two models.
The patch model can be characterized by the vertex sequence and the vertex index information of the first vertex, and the appearance shape of the target object can be constructed according to the vertex sequence and the vertex index information of the first vertex. The vertex sequence of the first vertices includes first position data of the respective first vertices, and the vertex index information includes point sequence numbers of the respective first vertices. The first position data may be coordinate data of a first vertex in a three-dimensional coordinate system created at the time of modeling.
The target object is modeled based on an image including image data of the target object to obtain a vertex sequence and vertex index information characterizing a patch model, so that first position data of a first vertex of the patch model of the target object can be obtained based on the vertex sequence of the patch model.
The skin skeleton model is a pre-stored object model, and the skeleton model can be characterized by a stored node sequence and the link relation of the nodes. Each node in the node sequence stores its own local pose, which represents the relative position relationship between the node and its parent node. The global pose of a parent node can be transmitted layer by layer, starting from the nodes below the root node, so that the global pose of each node, that is, its absolute position, is finally obtained.
For example, starting from the root node: the local pose of the root node is its global pose; a child node of the root node can calculate its own global pose from its local pose and the root node's global pose; the global pose calculation of the other nodes is similar and is not repeated here.
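The layer-by-layer propagation can be sketched as follows for translation-only local poses (a simplifying assumption to keep the example small; real poses also carry rotation and scale), with hypothetical node names:

```python
# Sketch of global-pose propagation: each node's global pose is its local
# pose composed with the global pose of its parent, propagated from the root.

def propagate_globals(tree, local_poses, root, parent_global=(0.0, 0.0, 0.0)):
    """tree maps a node name to its children; local_poses maps a node name
    to its local translation relative to its parent (translation-only)."""
    lx, ly, lz = local_poses[root]
    px, py, pz = parent_global
    globals_ = {root: (px + lx, py + ly, pz + lz)}  # root: local == global
    for child in tree.get(root, []):
        globals_.update(propagate_globals(tree, local_poses, child, globals_[root]))
    return globals_

tree = {"Root": ["E"], "E": ["F"]}
local = {"Root": (0.0, 0.0, 0.0), "E": (1.0, 0.0, 0.0), "F": (0.0, 2.0, 0.0)}
print(propagate_globals(tree, local, "Root")["F"])
# (1.0, 2.0, 0.0)
```

The global pose of "F" accumulates the offsets of every ancestor, mirroring how the parent's global pose is "transmitted layer by layer" in the text.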
The address of the next node is stored in the link relation of the node, and the node of the next node of the node can be obtained through the link relation of the node.
In addition, the poses (whether local or global) of the nodes mentioned in the embodiments of the present application are rigid poses, that is, poses in a three-dimensional orthogonal spatial coordinate system, which can be represented by 9 numerical values: 3 translation values along the orthogonal coordinate axes x, y and z, 3 Euler rotation values, and 3 scaling values.
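As an illustrative sketch, the 9 values can be packed into a 4x4 homogeneous matrix. The rotation order used here (Z, then Y, then X) and the composition order translation, then rotation, then scale are assumptions, since the text does not specify them:

```python
import math

# Sketch: turn the 9-value rigid pose (3 translations, 3 Euler rotations,
# 3 scales) into a single 4x4 homogeneous matrix.

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0, 0.0], [0.0, c, -s, 0.0], [0.0, s, c, 0.0], [0.0, 0.0, 0.0, 1.0]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s, 0.0], [0.0, 1.0, 0.0, 0.0], [-s, 0.0, c, 0.0], [0.0, 0.0, 0.0, 1.0]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0, 0.0], [s, c, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def pose_to_matrix(translation, euler_xyz, scale):
    tx, ty, tz = translation
    T = [[1.0, 0.0, 0.0, tx], [0.0, 1.0, 0.0, ty], [0.0, 0.0, 1.0, tz], [0.0, 0.0, 0.0, 1.0]]
    R = matmul(matmul(rot_z(euler_xyz[2]), rot_y(euler_xyz[1])), rot_x(euler_xyz[0]))
    sx, sy, sz = scale
    S = [[sx, 0.0, 0.0, 0.0], [0.0, sy, 0.0, 0.0], [0.0, 0.0, sz, 0.0], [0.0, 0.0, 0.0, 1.0]]
    return matmul(matmul(T, R), S)  # translate after rotate after scale

m = pose_to_matrix((1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
print(m[0][3], m[1][3], m[2][3])
# 1.0 2.0 3.0
```

With identity rotation and unit scale, the last column of the matrix is simply the translation, which is a quick sanity check on the packing.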
The mesh skin model can be characterized by the vertex sequence and the vertex index information of the second vertex, and the appearance shape of the object can be constructed according to the vertex sequence and the vertex index information of the second vertex. The vertex sequence of the second vertices includes second position data of the respective second vertices, and the vertex index information includes point sequence numbers of the respective second vertices. The second position data may be coordinate data of a second vertex in a three-dimensional coordinate system created at the time of modeling.
The second position data of the second vertices and the first local poses of the M nodes are acquired using the pre-stored node sequence of the skeleton model in the skin skeleton model having the same topology as the patch model and the pre-stored vertex sequence of the mesh skin model. The M nodes may be all the nodes in the skeleton model, and the second vertices may likewise be all the vertices in the mesh skin model.
Step S102: for each node, in order from the root node downward through the tree-shaped hierarchical structure of the M nodes, a second local pose of the node is determined based on the first position data, the second position data and the first local pose of the node.
The tree-like hierarchical structure of the M nodes means that the M nodes form a plurality of levels: starting from the root node, whose level is the highest, the levels descend in order, as shown in fig. 4. A node at the next level is a child node of a node at the previous level: Node_Root is the root node, Node_E is a child node of Node_Root, Node_F is a child node of Node_E, Node_G is a child node of Node_F, and Node_H is a child node of Node_G. Correspondingly, a node at the previous level is the parent node of a node at the next level, e.g. Node_F is the parent node of Node_G, and so on.
For the M nodes, the purpose of this step is to solve the bone driving coefficient of each node based on the first position data of the first vertex of the patch model to adjust the poses of the M nodes. Correspondingly, the positions of the second vertexes in the mesh skin model are driven to change through the pose change of the M nodes, so that the appearance shape of the mesh skin model is changed, and an animation image similar to the appearance shape of the patch model is generated.
The bone driving coefficient of a node can be its second local pose. The second local pose represents the pose to which the node needs to be adjusted; with it, the second vertices within the node's skinning range can be adjusted to positions close to the first position data of the first vertices, so that the mesh skin model deforms to an appearance shape similar to that of the patch model.
Because a node's pose is influenced by its parent node, when the pose of the parent node changes, the pose of the node changes accordingly, and correspondingly the skin corresponding to the node is affected. Therefore, nodes at different levels influence different skinning ranges.
Referring to fig. 5, fig. 5 is a schematic diagram of the influence of nodes at different levels on the skinning range. Taking the tree-like hierarchical structure of fig. 4 as a reference, fig. 5 shows, from left to right, the influence of Node_Root, Node_E, Node_F, Node_G and Node_H on the skinning range, where the range in dashed box 501 represents the skinning range of the node. It can be seen that the higher the level of a node, the larger the skinning range it affects: the root node affects the whole skin of the head in the mesh skin model, i.e. it may affect the positions of the corresponding second vertices in the whole skin, while the other nodes only affect the skinning range corresponding to the nose on the head.
In addition, since a second vertex in the mesh skin model is controlled by one or more nodes, and one node can control a plurality of vertices, different orders of calculating the second local poses of the M nodes can produce different results.
For example, if the second local pose of Node_G is calculated first and the second local pose of Node_F is calculated afterwards, then, since the pose of Node_G is influenced by Node_F, the positions of the second vertices within the skinning range of Node_G will change again with the change of Node_F, so the appearance shape of the mesh skin model cannot be controlled.
Therefore, the calculation order of the nodes may be set as follows: using a breadth-first algorithm, the nodes are traversed sequentially from the root node downward through the tree-like hierarchical structure of the M nodes, and the second local pose of each node is determined based on the first position data of the first vertex, the second position data of the second vertex and the first local pose of the node.
Referring to fig. 6, fig. 6 is a schematic diagram of traversal of each node, as shown in fig. 6, the traversal starts from the root node (the level of the root node is the first level), and the second level, the third level, the fourth level and the fifth level are sequentially traversed.
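The traversal order above can be sketched with a standard breadth-first walk over a hypothetical hierarchy (the node names are illustrative, not from the patent figures):

```python
from collections import deque

# Breadth-first traversal: nodes are visited level by level starting from
# the root, so every parent is processed before any of its children.

def bfs_order(tree, root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, []))  # children join the back of the queue
    return order

tree = {"Root": ["A", "E"], "A": ["B"], "E": ["F"], "F": ["G"]}
print(bfs_order(tree, "Root"))
# ['Root', 'A', 'E', 'B', 'F', 'G']
```

Processing in this order guarantees the property the text relies on: a node's second local pose is never computed before its parent's.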
Higher-level nodes affect a greater number and range of second vertices, resulting in more stable and slower transformations, while lower-level nodes affect fewer second vertices, resulting in more specific and faster transformations. When the difference between the appearance shapes of the patch model and the same-topology skin skeleton model is large, a more robust result can be obtained by traversing from high levels to low levels.
Step S103: adjusting the poses of the M nodes based on the second local poses of the M nodes to generate an animated image of the target object.
In this step, the second local poses of the M nodes are the bone driving coefficients of the skin bone model, and the poses of the M nodes can be adjusted by the second local poses of the M nodes, so that the poses of the nodes are adjusted to the poses corresponding to the second local poses.
Correspondingly, the position of the second vertex in the mesh skin model can be controlled to change, and the changed position is determined by the initial local pose of the node, namely the first local pose, and the adjusted local pose, namely the second local pose, so that the animation image of the target object can be generated, and the appearance shape of the animation image is similar to the appearance shape of the patch model.
In this embodiment, by obtaining a patch model of a target object, each node is sequentially traversed from the root node to the bottom according to the tree-like hierarchical structure of the nodes in the skeleton model in the skin skeleton model. And for each node, calculating a bone driving coefficient of each node based on first position data of a first vertex in the patch model, second position data of a second vertex in the mesh skin model in the skin bone model and the first local pose of each node, so that the appearance shape of the skin bone model can be driven to deform based on the bone driving coefficients of all nodes. Therefore, the animation image with the appearance shape similar to that of the patch model can be automatically generated based on the skin skeleton model, and the generation difficulty of the animation image is simplified.
Moreover, by transmitting only the bone driving coefficients, the appearance shape of the skin skeleton model can be driven to deform and the animated image can be generated quickly. The network transmission traffic is therefore small, the appearance of the skin skeleton model can be driven in a very lightweight way, and the size of the application package generated for the animated image can be reduced, which is favorable for deployment on mobile terminals.
Optionally, the second local pose of a node is the local pose at which the position-distance difference between the second vertex and the first vertex reaches the minimum among the position-distance differences corresponding to the node.
In this embodiment, the second local pose of the node represents the local pose of the node after the pose adjustment, and during the adjustment process, the position of the second vertex in the mesh skin model can be controlled to change, and the changed position is related to the second local pose of the node.
In order to make the appearance shape of the skinned skeleton model closely match that of the patch model, the second local pose of the node needs to be such that the positions of the second vertices in the mesh skin model can be adjusted so that the position-distance difference between the second vertices and the first vertices is minimized. The local pose at which this difference reaches its minimum among the position-distance differences corresponding to the node, i.e. the second local pose of the node, is then the optimal bone driving coefficient of the node.
In a specific implementation process, the optimal bone driving coefficient of the node can be solved through a minimum extremum function, as shown in formula (1):

pose* = argmin_pose sum_{i=1}^{N} || VertexA'_i - VertexB_i ||    (1)

wherein VertexA'_i is the adjusted position data of the i-th second vertex in the mesh skin model after the pose of the node is transformed according to the preset local pose, VertexB_i is the first position data of the i-th first vertex in the patch model, i is the serial number of the vertex, and N is the number of first vertices.
The position data of the second vertex after adjustment can be obtained by calculating the second position data of the second vertex, the first local pose of the node and the preset local pose. During initialization, the preset local pose can be a default bone driving coefficient, and for each node, an initialized bone driving coefficient can be set by default.
The function argmin () is the preset local pose obtained when the difference between the position distance between the second vertex in the mesh skin model and the first vertex in the patch model is the minimum, and at this time, the preset local pose is the second local pose of the node, that is, the optimal bone driving coefficient of the node.
In this embodiment, the optimal bone driving coefficient of each node is solved so that the position-distance difference between the adjusted second vertices and the first vertices is minimized. In this way, the skinned bone model can be driven to deform based on the optimal bone driving coefficients of the M nodes, generating an animated image whose appearance shape is very similar to that of the patch model.
Optionally, the step S102 specifically includes:
determining a first global pose of a first target node based on a first local pose of the first target node and a first local pose of a second target node, the first target node being any one of the M nodes except the root node, the second target node being a node of a target hierarchy of the M nodes, the target hierarchy being higher than a hierarchy of the first target node;
determining a second global pose of a parent node of the first target node based on a second local pose of the parent node;
determining a second local pose of the first target node based on the second global pose of the parent node and the first global pose of the first target node.
In this embodiment, the first target node is any one of the M nodes except the root node, and the determination manner of the second local pose of the node is similar for any one of the M nodes except the root node, and here, the first target node is taken as an example for description.
As can be seen from the above description, each node in the node sequence stores its own local pose, which represents the relative position relationship between the node and its parent node; the global pose of a parent node can be transmitted layer by layer, starting from the nodes below the root node, so that the global pose of each node is finally obtained.
Specifically, the global pose of the parent node may be transmitted layer by layer based on the first local pose of the high-level node, the global pose of the parent node of the first target node is obtained first, and then the first global pose of the first target node is determined based on the global pose of the parent node and the first local pose of the first target node.
In a specific implementation, the global pose and the local pose can each be represented by a rigid pose matrix, a 4x4 matrix that encodes a rigid pose in three-dimensional space. A pose and a rigid pose matrix can be converted into each other, and the first global pose of the first target node can be obtained by multiplying the rigid pose matrix representing the global pose of the parent node of the first target node by the rigid pose matrix representing the first local pose of the first target node.
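The matrix multiplication described above can be sketched as follows, using translation-only 4x4 rigid pose matrices to keep the example small (an assumption; real rigid pose matrices also encode rotation and scale):

```python
# Sketch: the first global pose of a node is the parent's 4x4 global pose
# matrix multiplied by the node's 4x4 first local pose matrix.

def matmul4(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

parent_global = translation(1.0, 0.0, 0.0)  # parent already moved +1 on x
child_local = translation(0.0, 2.0, 0.0)    # child sits +2 on y from parent
child_global = matmul4(parent_global, child_local)
print(child_global[0][3], child_global[1][3])
# 1.0 2.0
```

For pure translations the product simply accumulates the offsets, which matches the layer-by-layer propagation described earlier in the description.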
It should be noted that the first global pose of the first target node is the initial pose of the first target node.
The second local poses of the nodes are calculated sequentially from the root node downward according to the tree-like hierarchical structure of the M nodes, so the parent node of the first target node is traversed before the first target node, i.e. the second local pose of the parent node of the first target node has already been obtained.
In order to make the position-distance difference between the second vertices and the first vertices smaller with each traversal, the second local pose of the parent node of the first target node can be applied to the skin skeleton model, so that the second vertices influenced by that parent node in the mesh skin model are adjusted to the corresponding positions and the skin skeleton model is driven to deform; the second local poses of subsequent lower-level nodes can then be determined based on the second local pose of the parent node.
Specifically, after the second local poses of the nodes at the higher levels are obtained, they can be transmitted layer by layer to obtain the global pose of the node one level above the parent node of the first target node; the second global pose of the parent node can then be determined based on the second local pose of the parent node of the first target node and that higher-level global pose.
Then, a second local pose of the first target node may be determined based on the second global pose of the parent node and the first global pose of the first target node, and a specific implementation manner is described in detail in the next embodiment.
In this embodiment, after determining the second local pose of the parent node of the first target node, the second local pose of the parent node of the first target node may be applied to the skinned skeleton model, so that a second vertex, which is affected by the parent node of the first target node, in the mesh skinned model is adjusted to a corresponding position, and the skinned skeleton model is driven to deform. In this way, the second local pose of the subsequent low-level nodes, such as the first target node, can be determined based on the second local pose of the parent node, so that the second local pose of each node can be recursively determined layer by layer, and the calculation difficulty of the second local pose of the node is reduced.
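The layer-by-layer recursion described above relies on visiting every parent before its children. A minimal breadth-first traversal sketch (the names and data layout are illustrative assumptions, not from the patent):

```python
from collections import deque

def traverse_top_down(children: dict, root) -> list:
    """Return node ids level by level from the root, guaranteeing that each
    parent is visited (and its second local pose solved) before its children."""
    queue = deque([root])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(children.get(node, ()))
    return order
```

Solving the second local poses in this order makes the parent's second global pose available by the time each child node is processed.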
Optionally, the determining a second local pose of the first target node based on the second global pose of the parent node and the first global pose of the first target node includes:
determining a second global pose of the first target node based on the second global pose of the parent node and a preset local pose of the first target node;
determining a pose transformation matrix of the first target node based on a first global pose and a second global pose of the first target node;
determining third position data of the second vertex based on the pose transformation matrix and the second position data of the second vertex;
updating the preset local pose based on the first position data and the third position data to obtain a second local pose of the first target node.
In this embodiment, during the initial calculation, default preset local poses may be set for all nodes, and the second global pose of the first target node may be determined based on the second global pose of the parent node and the preset local pose of the first target node, as shown in formula (2).
currentNode_j^Global = parentNode_j^Global · localTRS_j (2)

In formula (2), j represents the serial number of a node, which may be the first target node; currentNode_j^Global is the rigid pose matrix representing the second global pose of the first target node; parentNode_j^Global is the rigid pose matrix representing the second global pose of the parent node of the first target node; and localTRS_j is the rigid pose matrix representing the preset local pose of the first target node.
The preset local pose localTRS_j is the unknown to be solved; that is, the localTRS_j of all nodes are solved so that the value of the objective in formula (1), the total position distance difference Σ_i ‖VertexB_i − VertexA′_i‖ between corresponding first and second vertices, is minimal.
In order to drive the deformation of the skinned skeleton model to generate an animated image similar to the appearance shape of the patch model, the pose of the first target node needs to be transformed from a first global pose to a second global pose, i.e. the second global pose of the first target node is the target of the pose transformation.
Based on the first global pose and the second global pose of the first target node, the pose transformation matrix of the first target node can be determined, as shown in formula (3).
Deform_j = currentNode_j^Global · (initNode_j^Global)^(−1) (3)

Wherein (initNode_j^Global)^(−1) is the inverse of the rigid pose matrix representing the first global pose of the first target node, and Deform_j is the pose transformation matrix of the first target node. The pose transformation matrix is a 4×4 matrix that also contains pose-related information; it is the matrix that, through multiplication, carries one rigid pose onto another rigid pose.
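Formula (3) can be sketched directly with a matrix inverse. A hedged numpy illustration (function and argument names are assumptions):

```python
import numpy as np

def pose_transform_matrix(init_global: np.ndarray, current_global: np.ndarray) -> np.ndarray:
    """Deform_j = currentNode_j^Global @ inverse(initNode_j^Global): the matrix
    that carries the first global pose onto the second global pose."""
    return current_global @ np.linalg.inv(init_global)
```

By construction, `pose_transform_matrix(A, B) @ A` equals `B`, which is exactly the "one rigid pose reaches another after multiplication" property described above.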
When the first target node is transformed to another pose by the pose transformation matrix, the positions of the second vertices within the skinning range of the first target node are controlled to change accordingly, and the appearance shape of the skinned skeleton model is determined by the positions of all of its vertices.
Referring to fig. 7, fig. 7 is a schematic diagram of how a change in a node's pose controls the positions of vertices. As shown in fig. 7, the left diagram shows the initial pose 701 of node Node_G and the initial positions of the second vertices 702 within its skinning range; when the pose changes, the positions of the second vertices within the skinning range change accordingly, as shown in the right diagram.
In addition, because a node's pose is influenced by its parent node, the second vertices of the lower levels are also affected when the pose of the parent node is transformed. A given second vertex is therefore controlled by multiple nodes, and the total skinning weight of the controlling nodes is 1.
The third position data of the second vertex may be calculated as formula (4).

VertexA′_i = (Σ_j weight_ij · Deform_j) · VertexA_i (4)

Wherein VertexA′_i is the third position data of the second vertex, i.e. the position of the second vertex after adjustment under the rigid transformation of node j; VertexA_i is the initial position, i.e. the position corresponding to the second position data; and weight_ij is the skinning weight of node j on second vertex i, where for any second vertex Σ_j weight_ij = 1.

As can be seen from formula (4), the third position data of each second vertex is obtained by applying the weighted sum of the products of the skinning weights and the pose transformation matrices of the relevant nodes to the initial second position data.
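Formula (4) is the standard linear-blend-skinning update. A minimal sketch, assuming each vertex stores its per-node weights alongside the nodes' pose transformation matrices (illustrative names):

```python
import numpy as np

def blend_vertex(initial_position, weights, deform_matrices):
    """Third position = (sum_j weight_ij * Deform_j) applied to the homogeneous
    initial (second) position; one vertex's weights sum to 1."""
    blended = sum(w * D for w, D in zip(weights, deform_matrices))
    homogeneous = np.append(np.asarray(initial_position, dtype=float), 1.0)
    return (blended @ homogeneous)[:3]
```

For example, a vertex weighted 0.5/0.5 between an identity transform and a translation by (2, 0, 0) moves halfway, to (1, 0, 0).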
Then, the preset local pose is updated based on the first position data and the third position data until the total position distance difference Σ_i ‖VertexB_i − VertexA′_i‖ reaches its minimum; the preset local pose at that minimum is the second local pose of the first target node.
In this embodiment, the pose transformation matrix of the first target node is determined based on the second global pose of the parent node of the first target node and the first global pose of the first target node, and then the third position data of the second vertex is determined based on the pose transformation matrix. In this way, the preset local pose can be updated based on the first position data and the third position data, so that the second local pose of the first target node when the distance difference between the first position data and the third position data is minimized can be solved.
Optionally, the updating the preset local pose based on the first position data and the third position data to obtain a second local pose of the first target node includes:
determining a positional distance difference between the second vertex and the first vertex based on the first positional data and third positional data;
updating the preset local pose based on the position distance difference value to obtain a plurality of position distance difference values corresponding to the first target node; and determining the preset local pose corresponding to the minimum position distance difference as a second local pose of the first target node under the condition that the position distance difference reaches the minimum position distance difference in the plurality of position distance differences corresponding to the first target node.
In this embodiment, the position distance difference between the second vertex and the first vertex can be calculated as Σ_i ‖VertexB_i − VertexA′_i‖, and, based on formula (1), the preset local pose of the first target node at which this position distance difference reaches its minimum value can be calculated; that preset local pose is the solved second local pose of the first target node. Therefore, by comparing the position distances between the position-adjusted second vertices and the first vertices, the optimal bone driving coefficient of each node can be solved so as to drive the skinned skeleton model to deform into the appearance shape most similar to the patch model.
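The objective being minimized can be sketched as the summed Euclidean distance between corresponding first vertices (patch model) and adjusted second vertices. This is an assumed, minimal formulation of the residual, not the patent's exact solver:

```python
import numpy as np

def total_position_distance(first_positions: np.ndarray, third_positions: np.ndarray) -> float:
    """Sum over vertices of ||VertexB_i - VertexA'_i||: the quantity the
    preset local pose is iteratively updated to minimize."""
    diff = np.asarray(first_positions, dtype=float) - np.asarray(third_positions, dtype=float)
    return float(np.linalg.norm(diff, axis=1).sum())
```

An optimizer would repeatedly perturb the preset local pose, recompute the third position data via the skinning formula, and keep the pose with the smallest value of this residual.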
Optionally, the number of the second vertices includes a plurality of second vertices, and the determining third position data of the second vertices based on the pose transformation matrix and the second position data of the second vertices includes:
for a plurality of second vertexes within the skin range of the first target node and the third target node, determining third position data of each second vertex based on the pose transformation matrix, the skin weight of each second vertex by the first target node and the third target node and second position data of each second vertex, wherein the third target node is a child node of the first target node;
the updating the preset local pose based on the first position data and the third position data to obtain a second local pose of the first target node includes:
updating the preset local pose based on the first position data of the first vertex corresponding to the plurality of second vertices and the third position data of the plurality of second vertices to obtain a second local pose of the first target node.
In this embodiment, because a node's pose is affected by its parent node, the vertices within the skinning ranges of its child nodes are affected when the parent node's pose changes, while the vertices within the skinning ranges of nodes at levels above the parent node are affected little. Thus, for a given node, typically only the second vertices within its own skinning range and within the skinning ranges of its lower-level nodes are affected; therefore, the third position data can be determined for all of the second vertices within the skinning ranges of the first target node and of the nodes at lower levels relative to the first target node.
Then, the preset local pose may be updated based on the third position data of the second vertices and the corresponding first position data of the first vertices, so as to obtain the second local pose of the first target node, thereby reducing the amount of computation and optimizing the computation resources.
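Restricting the update to the vertices actually influenced by the first target node and its child nodes can be sketched as a simple index-set union (the names and data layout are assumptions for illustration):

```python
def affected_vertex_indices(node, children: dict, skinned_vertices: dict) -> list:
    """Indices of the second vertices within the skinning range of `node`
    or of any of its child nodes; only these need new third position data."""
    relevant = {node} | set(children.get(node, ()))
    return sorted({i for n in relevant for i in skinned_vertices.get(n, ())})
```

Skipping all other vertices is what reduces the amount of computation per node, as the paragraph above describes.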
Second embodiment
As shown in fig. 8, the present application provides an animated character generating device 800 comprising:
a first obtaining module 801, configured to obtain first position data of a first vertex of a patch model of a target object;
a second obtaining module 802, configured to obtain second position data of a second vertex of a mesh skin model in a skin bone model having the same topology as the patch model and first local poses of M nodes of the bone model in the skin bone model, where M is a positive integer greater than 1;
a determining module 803, configured to determine, for each node in order from the root node downward according to the tree-like hierarchical structure of the M nodes, a second local pose of the node based on the first position data, the second position data, and the first local pose of the node, respectively;
an adjusting module 804, configured to adjust the poses of the M nodes based on the second local poses of the M nodes to generate an animated image of the target object.
Optionally, the second local pose of the node is a local pose at which the position distance difference between the second vertex and the first vertex reaches a minimum value among the position distance differences corresponding to the node.
Optionally, wherein the determining module 803 includes:
a first determination unit configured to determine a first global pose of a first target node based on a first local pose of the first target node and a first local pose of a second target node, the first target node being any one of the M nodes except the root node, the second target node being a node of a target hierarchy among the M nodes, the target hierarchy being higher than a hierarchy of the first target node;
a second determination unit configured to determine a second global pose of a parent node of the first target node based on a second local pose of the parent node;
a third determining unit, configured to determine a second local pose of the first target node based on the second global pose of the parent node and the first global pose of the first target node.
Optionally, the third determining unit includes:
a first determining subunit, configured to determine a second global pose of the first target node based on a second global pose of the parent node and a preset local pose of the first target node;
a second determining subunit, configured to determine a pose transformation matrix of the first target node based on the first global pose and the second global pose of the first target node;
a third determining subunit, configured to determine third position data of the second vertex based on the pose transformation matrix and the second position data of the second vertex;
and the updating subunit is configured to update the preset local pose based on the first position data and the third position data, so as to obtain a second local pose of the first target node.
Optionally, the updating subunit is specifically configured to determine, based on the first location data and the third location data, a location distance difference between the second vertex and the first vertex; updating the preset local pose based on the position distance difference value to obtain a plurality of position distance difference values corresponding to the first target node; and determining the preset local pose corresponding to the minimum position distance difference as a second local pose of the first target node under the condition that the position distance difference reaches the minimum position distance difference in the plurality of position distance differences corresponding to the first target node.
Optionally, the number of the second vertices includes a plurality of second vertices, and the third determining subunit is specifically configured to determine, for a plurality of second vertices within a skinning range of the first target node and the third target node, third position data of each second vertex based on the pose transformation matrix, the skinning weight of each second vertex by the first target node and the third target node, and the second position data of each second vertex, where the third target node is a child node of the first target node;
the updating subunit is specifically configured to update the preset local pose based on the first position data of the first vertex and the third position data of the second vertices, where the first position data of the first vertex corresponds to the second vertices, so as to obtain a second local pose of the first target node.
The animated character generation apparatus 800 provided in the present application can implement each process implemented by the embodiments of the animated character generation method and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 9 is a block diagram of an electronic device according to an animated character generation method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 9, the electronic apparatus includes: one or more processors 901, memory 902, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 9 illustrates an example of one processor 901.
Memory 902 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the animated character generating method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the animated character generating method provided by the present application.
The memory 902, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the animation character generation method in the embodiment of the present application (for example, the first obtaining module 801, the second obtaining module 802, the determining module 803, and the adjusting module 804 shown in fig. 8). The processor 901 executes various functional applications of the server and data processing, i.e., implements the animation image generation method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 902.
The memory 902 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of the electronic device according to the method of the embodiment of the present application, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 902 may optionally include a memory remotely disposed from the processor 901, and these remote memories may be connected to the electronic device of the animated character generating method through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of the embodiment of the application may further include: an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903 and the output device 904 may be connected by a bus or other means, and fig. 9 illustrates the connection by a bus as an example.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the method of embodiments of the present application, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or other input device. The output devices 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in the cloud computing service system that addresses the defects of difficult management and weak service expansibility in traditional physical host and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
In this embodiment, a patch model of a target object is obtained, and each node is traversed in order downward from the root node according to the tree-like hierarchical structure of the nodes of the skeleton model in the skinned skeleton model. For each node, a bone driving coefficient is calculated based on the first position data of the first vertices in the patch model, the second position data of the second vertices in the mesh skin model of the skinned skeleton model, and the first local pose of the node, so that the appearance shape of the skinned skeleton model can be driven to deform based on the bone driving coefficients of all nodes. Therefore, an animated character whose appearance shape is similar to that of the patch model can be generated automatically based on the skinned skeleton model, simplifying the generation of the animated character. Accordingly, the technical solution of the embodiments of the present application well solves the problem that generating an animated character is difficult in existing animated character generation techniques.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; the present application is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (15)

1. An animated character generation method comprising:
acquiring first position data of a first vertex of a patch model of a target object, and acquiring second position data of a second vertex of a grid skin model in a skin skeleton model in the same topology with the patch model and first local poses of M nodes of the skeleton model in the skin skeleton model, wherein M is a positive integer greater than 1;
determining, for each node in order from the root node downward according to the tree-like hierarchical structure of the M nodes, a second local pose of the node based on the first position data, the second position data and the first local pose of the node, respectively;
adjusting the poses of the M nodes based on the second local poses of the M nodes to generate an animated image of the target object.
2. The method according to claim 1, wherein the second local pose of the node is the local pose at which the difference in positional distance between the second vertex and the first vertex reaches the minimum of the difference in positional distance corresponding to the node.
3. The method of claim 1, wherein the determining, for each node, a second local pose of the node based on the first location data, second location data, and first local pose of the node, respectively, comprises:
determining a first global pose of a first target node based on a first local pose of the first target node and a first local pose of a second target node, the first target node being any one of the M nodes except the root node, the second target node being a node of a target hierarchy of the M nodes, the target hierarchy being higher than a hierarchy of the first target node;
determining a second global pose of a parent node of the first target node based on a second local pose of the parent node;
determining a second local pose of the first target node based on the second global pose of the parent node and the first global pose of the first target node.
4. The method of claim 3, wherein the determining a second local pose of the first target node based on the second global pose of the parent node and the first global pose of the first target node comprises:
determining a second global pose of the first target node based on the second global pose of the parent node and a preset local pose of the first target node;
determining a pose transformation matrix of the first target node based on a first global pose and a second global pose of the first target node;
determining third position data of the second vertex based on the pose transformation matrix and the second position data of the second vertex;
updating the preset local pose based on the first position data and the third position data to obtain a second local pose of the first target node.
5. The method of claim 4, wherein the updating the preset local pose based on the first and third position data to obtain a second local pose of the first target node comprises:
determining a positional distance difference between the second vertex and the first vertex based on the first positional data and third positional data;
updating the preset local pose based on the position distance difference value to obtain a plurality of position distance difference values corresponding to the first target node; and determining the preset local pose corresponding to the minimum position distance difference as a second local pose of the first target node under the condition that the position distance difference reaches the minimum position distance difference in the plurality of position distance differences corresponding to the first target node.
6. The method of claim 4, wherein the number of second vertices includes a plurality, the determining third location data for the second vertices based on the pose transformation matrix and the second location data for the second vertices comprising:
for a plurality of second vertexes within the skin range of the first target node and the third target node, determining third position data of each second vertex based on the pose transformation matrix, the skin weight of each second vertex by the first target node and the third target node and second position data of each second vertex, wherein the third target node is a child node of the first target node;
the updating the preset local pose based on the first position data and the third position data to obtain a second local pose of the first target node includes:
updating the preset local pose based on the first position data of the first vertex corresponding to the plurality of second vertices and the third position data of the plurality of second vertices to obtain a second local pose of the first target node.
7. An animated character generation apparatus comprising:
the first acquisition module is used for acquiring first position data of a first vertex of a patch model of a target object;
the second acquisition module is used for acquiring second position data of a second vertex of a grid skin model in a skin skeleton model which has the same topology as the patch model and first local poses of M nodes of the skeleton model in the skin skeleton model, wherein M is a positive integer greater than 1;
a determining module, configured to determine, for each node in order from the root node downward according to the tree-like hierarchical structure of the M nodes, a second local pose of the node based on the first position data, the second position data, and the first local pose of the node;
an adjustment module to adjust the poses of the M nodes based on the second local poses of the M nodes to generate an animated image of the target object.
8. The apparatus according to claim 7, wherein the second local pose of the node is a local pose at which a difference in positional distance between the second vertex and the first vertex reaches a minimum value among the difference in positional distance corresponding to the node.
9. The apparatus of claim 7, wherein the means for determining comprises:
a first determination unit configured to determine a first global pose of a first target node based on a first local pose of the first target node and a first local pose of a second target node, the first target node being any one of the M nodes except the root node, the second target node being a node of a target hierarchy among the M nodes, the target hierarchy being higher than a hierarchy of the first target node;
a second determination unit configured to determine a second global pose of a parent node of the first target node based on a second local pose of the parent node;
a third determining unit, configured to determine a second local pose of the first target node based on the second global pose of the parent node and the first global pose of the first target node.
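The hierarchy traversal underlying claims 7 and 9 — a node's global pose is the composition of its ancestors' local poses, computed root-first down the tree — is standard forward kinematics. A minimal sketch follows; the function name and the flat `parent` array layout are illustrative, not taken from the patent:

```python
import numpy as np

def local_to_global(local_poses, parent):
    """Compose 4x4 local pose matrices down the tree to get global poses.

    local_poses: list of 4x4 numpy arrays, one per node.
    parent: parent[i] is the index of node i's parent (-1 for the root).
    Nodes are assumed ordered root-first, so every parent is processed
    before its children.
    """
    global_poses = [None] * len(local_poses)
    for i, pose in enumerate(local_poses):
        if parent[i] < 0:
            # Root node: its global pose equals its local pose.
            global_poses[i] = pose.copy()
        else:
            # Child node: parent's global pose composed with own local pose.
            global_poses[i] = global_poses[parent[i]] @ pose
    return global_poses
```

With translation-only local poses, a child one unit from a parent that is itself one unit from the origin ends up two units out, which is the composition the claim describes.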
10. The apparatus of claim 9, wherein the third determining unit comprises:
a first determining subunit, configured to determine a second global pose of the first target node based on a second global pose of the parent node and a preset local pose of the first target node;
a second determining subunit, configured to determine a pose transformation matrix of the first target node based on the first global pose and the second global pose of the first target node;
a third determining subunit, configured to determine third position data of the second vertex based on the pose transformation matrix and the second position data of the second vertex;
an updating subunit configured to update the preset local pose based on the first position data and the third position data, to obtain the second local pose of the first target node.
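The chain in claim 10 — derive a pose transformation matrix from the node's old and new global poses, then move skin vertices with it — can be sketched as below. The matrix form `G2 · G1⁻¹` is one common realization; the patent does not prescribe it, and the names are illustrative:

```python
import numpy as np

def pose_transform_matrix(first_global, second_global):
    # Matrix mapping a point posed under the first global pose to its
    # position under the second global pose: M = G2 @ inv(G1).
    return second_global @ np.linalg.inv(first_global)

def transform_vertex(matrix, vertex):
    # Apply a 4x4 transform to a 3D point via homogeneous coordinates.
    v = np.append(vertex, 1.0)
    return (matrix @ v)[:3]
```

When the first global pose is the identity, the transformation matrix reduces to the second global pose itself, so a vertex simply picks up the new pose's translation.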
11. The apparatus according to claim 10, wherein the updating subunit is specifically configured to: determine, based on the first position data and the third position data, a position distance difference between the second vertex and the first vertex; update the preset local pose based on the position distance difference, to obtain a plurality of position distance differences corresponding to the first target node; and, in a case that a position distance difference reaches the minimum value among the plurality of position distance differences corresponding to the first target node, determine the preset local pose corresponding to the minimum position distance difference as the second local pose of the first target node.
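The selection in claim 11 — keep the candidate local pose whose transformed skin vertices land closest to the patch vertices — can be sketched as a brute-force minimization over candidates. This is a simplification: the patent does not specify how candidate poses are generated or searched, and all names here are illustrative:

```python
import numpy as np

def best_local_pose(candidates, skin_vertices, patch_vertices, apply_pose):
    """Return the candidate pose minimizing the summed squared distance
    between the posed skin vertices and the target patch vertices."""
    best, best_err = None, float("inf")
    for pose in candidates:
        moved = apply_pose(pose, skin_vertices)          # third position data
        err = float(np.sum((moved - patch_vertices) ** 2))  # distance difference
        if err < best_err:
            best, best_err = pose, err
    return best, best_err
```

Any parameterization of `pose` works as long as `apply_pose` knows how to move the vertices with it, which mirrors the claim's "update the preset local pose" loop.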
12. The apparatus according to claim 10, wherein there are a plurality of second vertices, and the third determining subunit is specifically configured to determine, for the plurality of second vertices within the skinning range of the first target node and a third target node, third position data of each second vertex based on the pose transformation matrix, the skinning weights of the first target node and the third target node for each second vertex, and the second position data of each second vertex, the third target node being a child node of the first target node;
the updating subunit is specifically configured to update the preset local pose based on the first position data of the first vertices corresponding to the plurality of second vertices and the third position data of the plurality of second vertices, to obtain the second local pose of the first target node.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product which, when run on an electronic device, performs the method of any one of claims 1 to 6.
CN202011372618.2A 2020-11-30 2020-11-30 Animation image generation method and device and electronic equipment Active CN112509098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011372618.2A CN112509098B (en) 2020-11-30 2020-11-30 Animation image generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112509098A true CN112509098A (en) 2021-03-16
CN112509098B CN112509098B (en) 2024-02-13

Family

ID=74967765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011372618.2A Active CN112509098B (en) 2020-11-30 2020-11-30 Animation image generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112509098B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130107003A1 (en) * 2011-10-31 2013-05-02 Electronics And Telecommunications Research Institute Apparatus and method for reconstructing outward appearance of dynamic object and automatically skinning dynamic object
CN103530897A (en) * 2013-09-30 2014-01-22 华为软件技术有限公司 Movement redirection processing method and device
CN104021584A (en) * 2014-06-25 2014-09-03 无锡梵天信息技术股份有限公司 Implementation method of skinned skeletal animation
CN111223171A (en) * 2020-01-14 2020-06-02 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111738935A (en) * 2020-05-15 2020-10-02 完美世界(北京)软件科技发展有限公司 Ghost rendering method and device, storage medium and electronic device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FEDERICA BOGO et al.: "FAUST: Dataset and Evaluation for 3D Mesh Registration", 2014 IEEE Conference on Computer Vision and Pattern Recognition *
DING Peng; JIA Yuele; ZHANG Jing; LUO Dongfang: "Design and Implementation of Skeletal Skinning Animation", Technology and Market, no. 10
WU Weihe; HAO Aimin; ZHAO Yongtao; WAN Qiaohui; LI Shuai: "A Method for Extracting Human Motion Skeletons and Automatically Generating Animation", Journal of Computer Research and Development, no. 07

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022043244A (en) * 2021-03-24 2022-03-15 北京百度網訊科技有限公司 Slider processing method and apparatus used for avatar, electronic apparatus, storage medium, and computer program
US11842457B2 (en) 2021-03-24 2023-12-12 Beijing Baidu Netcom Science Technology Co., Ltd. Method for processing slider for virtual character, electronic device, and storage medium
CN117095086A (en) * 2023-10-18 2023-11-21 腾讯科技(深圳)有限公司 Animation processing method, device, equipment and storage medium
CN117095086B (en) * 2023-10-18 2024-02-09 腾讯科技(深圳)有限公司 Animation processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112509098B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
JP7227292B2 (en) Virtual avatar generation method and device, electronic device, storage medium and computer program
US20210383605A1 (en) Driving method and apparatus of an avatar, device and medium
CN112509099B (en) Avatar driving method, apparatus, device and storage medium
US20210312685A1 (en) Method for synthesizing figure of virtual object, electronic device, and storage medium
CN113643412B (en) Virtual image generation method and device, electronic equipment and storage medium
WO2022143178A1 (en) Motion retargeting method and apparatus, electronic device, and storage medium
CN112819971B (en) Method, device, equipment and medium for generating virtual image
CN112862933B (en) Method, apparatus, device and storage medium for optimizing model
CN112330805B (en) Face 3D model generation method, device, equipment and readable storage medium
CN112509098B (en) Animation image generation method and device and electronic equipment
CN113240778B (en) Method, device, electronic equipment and storage medium for generating virtual image
CN111368137A (en) Video generation method and device, electronic equipment and readable storage medium
CN111739167B (en) 3D human head reconstruction method, device, equipment and medium
CN111968203B (en) Animation driving method, device, electronic equipment and storage medium
CN111340905B (en) Image stylization method, device, equipment and medium
CN111754431A (en) Image area replacement method, device, equipment and storage medium
CN112562043B (en) Image processing method and device and electronic equipment
CN111768467A (en) Image filling method, device, equipment and storage medium
CN112562048A (en) Control method, device and equipment of three-dimensional model and storage medium
CN111833391A (en) Method and device for estimating image depth information
CN112562047B (en) Control method, device, equipment and storage medium for three-dimensional model
US9734616B1 (en) Tetrahedral volumes from segmented bounding boxes of a subdivision
CN117788653A (en) Animation generation method, animation generation system, and computer-readable medium
CN114882156A (en) Animation generation method and device, electronic equipment and storage medium
Wenhui et al. Design of flight system based on Direct3D

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant