CN113658305A - Skinning method, device, and apparatus for a three-dimensional model, and computer-readable storage medium

Skinning method, device, and apparatus for a three-dimensional model, and computer-readable storage medium

Info

Publication number
CN113658305A
Authority
CN
China
Prior art keywords
three-dimensional model
skeleton
skinning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110814927.9A
Other languages
Chinese (zh)
Inventor
马光辉
贾西亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202110814927.9A priority Critical patent/CN113658305A/en
Publication of CN113658305A publication Critical patent/CN113658305A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a skinning method, device, apparatus, and computer-readable storage medium for a three-dimensional model. The skinning method for the three-dimensional model comprises the following steps: generating a corresponding three-dimensional skeleton according to a pre-established three-dimensional model; normalizing the three-dimensional model and the three-dimensional skeleton; voxelizing the three-dimensional model to obtain a voxelized model; determining weight coefficients of the three-dimensional skeleton based on the voxelized model; and optimizing the influence range and the weight coefficients of the three-dimensional skeleton to complete the skinning of the three-dimensional model. With this scheme, the skinning of the three-dimensional model can be performed automatically, quickly, and accurately.

Description

Skinning method, device, and apparatus for a three-dimensional model, and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a skinning method, device, and apparatus for a three-dimensional model, and a computer-readable storage medium.
Background
Since the outbreak of the epidemic, three-dimensional virtual digital humans have gradually gained favor across industries. Virtual customer service, which reduces person-to-person contact while improving service quality and teaching quality in contactless scenarios, has become a focus of attention in many fields. The traditional clothing industry is likewise exploring how to combine virtual digital humans with its own service content to create distinctive virtual images and high-quality service experiences. The reproducibility and customizability of virtual digital humans greatly improve service level and service experience, simplify service workflows, and further reduce service costs. In the future, digital humans can serve as a versatile carrier that, combined with artificial intelligence and virtual reality technology, brings more personalized and intelligent service content to the industry. Creating a digital avatar involves complex steps, such as modeling the three-dimensional avatar, manually binding the three-dimensional skeleton, animating the three-dimensional character, and planning and rendering the scene. Skeleton binding and animation of the three-dimensional avatar are indispensable links in the production of a virtual digital human, and considerable manpower and material resources must be invested to ensure the binding effect.
Because it is simple and occupies little memory, three-dimensional skeleton binding is widely used in fields such as games and virtual reality. Many modeling software packages on the market (e.g., Maya, Blender) support binding three-dimensional skeletons. However, an ordinary user must spend considerable time learning such software before being able to bind a skeleton, which raises the production threshold for virtual digital humans. Even professional artists need a great deal of time and effort to process complex three-dimensional models.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a skinning method, device, apparatus, and computer-readable storage medium for a three-dimensional model, which can skin a three-dimensional model automatically, quickly, and accurately.
In order to solve the above problem, a first aspect of the present application provides a skinning method for a three-dimensional model, comprising: generating a corresponding three-dimensional skeleton according to a pre-established three-dimensional model; normalizing the three-dimensional model and the three-dimensional skeleton; voxelizing the three-dimensional model to obtain a voxelized model; determining weight coefficients of the three-dimensional skeleton based on the voxelized model; and optimizing the influence range and the weight coefficients of the three-dimensional skeleton to complete the skinning of the three-dimensional model.
In order to solve the above problem, a second aspect of the present application provides a skinning apparatus for a three-dimensional model, comprising: a skeleton generation module, configured to generate a corresponding three-dimensional skeleton according to a pre-established three-dimensional model; a normalization module, configured to normalize the three-dimensional model and the three-dimensional skeleton; a voxelization module, configured to voxelize the three-dimensional model to obtain a voxelized model; a weight determination module, configured to determine weight coefficients of the three-dimensional skeleton based on the voxelized model; and an optimization processing module, configured to optimize the influence range and the weight coefficients of the three-dimensional skeleton to complete the skinning of the three-dimensional model.
In order to solve the above problem, a third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, and the processor is configured to execute program instructions stored in the memory to implement the skinning method for the three-dimensional model of the first aspect.
In order to solve the above problem, a fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the skinning method of the three-dimensional model of the first aspect described above.
The invention has the following beneficial effects. Unlike the prior art, the present application generates a corresponding three-dimensional skeleton from a pre-established three-dimensional model, normalizes the three-dimensional model and the three-dimensional skeleton, voxelizes the three-dimensional model to obtain a voxelized model, determines the weight coefficients of the three-dimensional skeleton based on the voxelized model, and then optimizes the influence range and the weight coefficients of the three-dimensional skeleton to complete the skinning of the three-dimensional model. By voxelizing the three-dimensional model, the positional relationship between the skeleton points and the model vertices can be determined from the voxelized model, the weight coefficients of the three-dimensional skeleton can be determined, and the three-dimensional model can be quickly bound to the three-dimensional skeleton. A skinning result is thus produced automatically and quickly for the three-dimensional model to be bound, and the skinning of complex three-dimensional models can also be handled.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a skinning method for a three-dimensional model according to the present application;
FIG. 2 is a flowchart illustrating an embodiment of step S11 in FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of step S12 in FIG. 1;
FIG. 4 is a flowchart illustrating an embodiment of step S13 in FIG. 1;
FIG. 5 is a flowchart illustrating an embodiment of step S14 in FIG. 1;
FIG. 6 is a flowchart illustrating an embodiment of step S15 in FIG. 1;
FIGS. 7a to 7g are schematic diagrams illustrating an effect of an application scenario of the skinning method of the three-dimensional model according to the present application;
FIG. 8 is a frame diagram of an embodiment of a skinning device of the three-dimensional model of the present application;
FIG. 9 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 10 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between related objects, indicating that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" herein generally indicates that the related objects before and after it are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a skinning method for a three-dimensional model according to the present application. Specifically, the skinning method for the three-dimensional model in this embodiment may include the following steps:
step S11: and generating a corresponding three-dimensional skeleton according to the pre-established three-dimensional model.
During the creation of a virtual character, a three-dimensional model of the virtual character is created, typically as a Mesh data structure. The pre-established three-dimensional model corresponds to the virtual character and mainly describes its appearance and external structure, such as the clothing of a humanoid character or the outer skin of an animal character. In animation, it is often desirable for the three-dimensional model of a virtual character to deform well in different poses. The corresponding three-dimensional skeleton refers to data describing the bone structure of the virtual character; it consists of bone points and the bones connecting them, and represents the internal supporting structure of the virtual character, i.e., its internal skeleton.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an embodiment of step S11 in fig. 1. In an embodiment, the step S11 may specifically include:
step S111: and correcting the position and the orientation of the three-dimensional model.
Step S112: and generating the corresponding three-dimensional bone by using a three-dimensional bone detection method based on deep learning or a manual adding mode.
Step S113: and determining the parent bone point of each bone point in the three-dimensional bone based on the adjacent relationship of the bones, thereby determining the position relationship of all the bone pairs.
A skeleton detection model can be constructed with a deep-learning-based three-dimensional skeleton detection method; for example, a skeleton detection model trained with the deep learning framework TensorFlow on the basis of an OpenPose model can effectively identify the skeleton key points of the three-dimensional model and quickly detect the positional relationships among them. Therefore, given the pre-established three-dimensional model of the virtual character, the corresponding three-dimensional skeleton can be generated by first correcting the position and orientation of the three-dimensional model and then applying the skeleton detection model built with the deep-learning-based three-dimensional skeleton detection method. Based on the adjacency of the bones, the parent bone point of each bone point in the three-dimensional skeleton can be determined, and thereby the positional relationship of all bone pairs. For example, one bone point of the three-dimensional skeleton may be selected as the root bone point, and the spatial coordinates of the remaining bone points may then be computed in turn from the spatial coordinates of the root bone point, the distance between each pair of adjacent bone points, the spatial direction vector between each pair of adjacent bone points, and the parent-child relationships, thereby determining the positional relationship of all bone pairs.
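As an illustration of this coordinate propagation, the following sketch (Python with NumPy) reconstructs bone point positions from a chosen root outward, given parent indices, bone lengths, and unit direction vectors. The function and variable names and the data layout are hypothetical assumptions for illustration; the patent itself does not prescribe a concrete implementation.

import numpy as np

def joint_depth(parents, i):
    # Number of ancestors between joint i and the root.
    d = 0
    while parents[i] != -1:
        i = parents[i]
        d += 1
    return d

def compute_joint_positions(parents, lengths, directions, root_position):
    # parents[i]    : index of the parent joint of joint i (-1 for the root)
    # lengths[i]    : distance between joint i and its parent
    # directions[i] : unit vector pointing from the parent joint to joint i
    n = len(parents)
    positions = np.zeros((n, 3))
    # Process joints so that every parent is placed before its children.
    order = sorted(range(n), key=lambda i: joint_depth(parents, i))
    for i in order:
        if parents[i] == -1:
            positions[i] = root_position
        else:
            positions[i] = positions[parents[i]] + lengths[i] * directions[i]
    return positions

# Purely illustrative 3-joint chain (root -> spine -> head).
parents = [-1, 0, 1]
lengths = np.array([0.0, 0.3, 0.25])
directions = np.array([[0, 0, 0], [0, 1, 0], [0, 1, 0]], dtype=float)
print(compute_joint_positions(parents, lengths, directions, np.array([0.0, 0.9, 0.0])))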
It is understood that in other embodiments, the corresponding three-dimensional skeleton may be generated by manual addition based on the three-dimensional model of the virtual character.
Step S12: normalizing the three-dimensional model and the three-dimensional skeleton.
Normalizing the three-dimensional model and the three-dimensional skeleton maps their heights into the range 0 to 1, so that the skinning method is applicable to virtual characters of any height, which enhances the generality of the skinning method for three-dimensional models.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an embodiment of step S12 in fig. 1. In an embodiment, the step S12 may specifically include:
step S121: and calculating a bounding box of the three-dimensional model and determining a zooming translation parameter.
Step S122: and carrying out normalization processing on the three-dimensional model and the corresponding three-dimensional skeleton by using the scaling translation parameters.
A complex geometric object can be approximately replaced by a slightly larger geometry with simpler characteristics (i.e., a bounding box); common bounding volumes include the AABB bounding box, the bounding sphere, and the oriented bounding box (OBB). The AABB bounding box, for example, is the smallest hexahedron that contains the three-dimensional model and whose faces are parallel to the coordinate axes. Therefore, after the three-dimensional model of the virtual character is determined, its bounding box can be computed. Since the heights of the three-dimensional model and the three-dimensional skeleton need to be mapped into the range 0 to 1, the corresponding scaling and translation parameters can be determined from the computed bounding box, and the three-dimensional model and the corresponding three-dimensional skeleton are then normalized with these scaling and translation parameters.
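A minimal sketch of this normalization step, assuming the model is given as an N x 3 vertex array and the skeleton as an M x 3 array of bone point coordinates, and assuming the vertical extent corresponds to the Y axis (both assumptions, not stated in the patent):

import numpy as np

def normalize_model_and_skeleton(vertices, joints):
    # Map the model's axis-aligned bounding box so its height lies in [0, 1]
    # and apply the same scale and translation to the skeleton joints.
    vertices = np.asarray(vertices, dtype=float)
    joints = np.asarray(joints, dtype=float)
    box_min = vertices.min(axis=0)
    box_max = vertices.max(axis=0)
    # Uniform scale chosen from the vertical (Y) extent of the bounding box.
    scale = 1.0 / (box_max[1] - box_min[1])
    translation = -box_min              # move the box corner to the origin
    norm_vertices = (vertices + translation) * scale
    norm_joints = (joints + translation) * scale
    return norm_vertices, norm_joints, scale, translation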
Step S13: voxelizing the three-dimensional model to obtain a voxelized model.
Voxelization is the transformation of the geometric representation of an object into the voxel representation closest to that object; the voxelized model contains not only the surface information of the model but also its interior. Voxelizing the three-dimensional model simplifies the model and yields a uniform grid, which is useful, for example, when solving for geodesics on the model. As for how the voxelized model is generated, the three-dimensional model may be voxelized with a 2D-CNN encoding network, a 3D convolutional LSTM network, or a 3D decoding neural network.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating an embodiment of step S13 in fig. 1. In an embodiment, the step S13 may specifically include:
step S131: and calculating a model view projection matrix of the three-dimensional model in a plurality of different directions.
Step S132: and acquiring the depth images in the plurality of different directions by utilizing a frame buffer technology based on the model view projection matrix.
Step S133: determining a voxel range of the three-dimensional model in the several different directions using the depth image.
Step S134: and performing voxelization on the three-dimensional model by combining the voxel ranges in the plurality of different directions to obtain a voxelized model.
Specifically, based on the bounding box of the three-dimensional model, model-view-projection matrices of the three-dimensional model in the front, back, up, down, left, and right directions can be calculated, and the vertices can be transformed from the local coordinate system into the canonical cube through these matrices. The model-view-projection matrix is the product of the projection matrix, the view matrix, and the model matrix: the model matrix transforms a vertex from the local coordinate system into the world coordinate system, the view matrix transforms it from the world coordinate system into the view coordinate system, and the projection matrix transforms it from the view coordinate system into the canonical cube.
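For illustration, the following sketch composes an orthographic model-view-projection matrix in the usual OpenGL convention and applies it to one homogeneous vertex. The camera placement and the clipping values are illustrative assumptions rather than the patent's exact construction; in practice they would be derived from the bounding box of the normalized model.

import numpy as np

def orthographic(l, r, b, t, n, f):
    # Orthographic projection mapping the box [l,r]x[b,t]x[n,f] to the canonical cube [-1,1]^3.
    return np.array([
        [2/(r-l), 0,       0,         -(r+l)/(r-l)],
        [0,       2/(t-b), 0,         -(t+b)/(t-b)],
        [0,       0,       -2/(f-n),  -(f+n)/(f-n)],
        [0,       0,       0,          1.0],
    ])

def look_at(eye, target, up):
    # View matrix that moves the camera at `eye`, looking at `target`, to the origin.
    fwd = target - eye; fwd = fwd / np.linalg.norm(fwd)
    side = np.cross(fwd, up); side = side / np.linalg.norm(side)
    upv = np.cross(side, fwd)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = side, upv, -fwd
    view[:3, 3] = -view[:3, :3] @ eye
    return view

model = np.eye(4)                                   # local -> world (identity here)
view = look_at(np.array([0., 0.5, 2.]), np.array([0., 0.5, 0.]), np.array([0., 1., 0.]))
proj = orthographic(-1, 1, -1, 1, 0.1, 5.0)
mvp = proj @ view @ model
vertex = np.array([0.2, 0.7, 0.0, 1.0])             # homogeneous model-space vertex
print(mvp @ vertex)                                  # coordinates in the canonical cube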
With respect to frame buffering: the graphics drawn on the screen are composed of pixels, and each pixel carries a fixed color and possibly other per-pixel information, such as depth. Therefore, when drawing, data must be stored for every pixel in memory; the memory region that stores the data of all pixels is called a buffer. Different buffers may store a different number of bits per pixel, but within a given buffer every pixel is assigned the same number of bits. All the buffers in the system are collectively referred to as the frame buffer, and these buffers can be used for color storage, hidden-surface removal, scene anti-aliasing, and stencil operations. The OpenGL frame buffer consists of a color buffer, a depth buffer, a stencil buffer, and an accumulation buffer. The depth buffer holds the depth value of each pixel, where depth is usually measured as the distance from the viewpoint to the object, so that pixels with larger depth values are replaced by pixels with smaller depth values, i.e., distant objects are occluded by nearby ones.
Because the coordinates of a vertex in the canonical cube can be obtained directly by multiplying the vertex coordinates by the model-view-projection matrix, the depth images of the three-dimensional model in the front, back, up, down, left, and right directions can be acquired with frame buffering based on the model-view-projection matrices. Using the depth images in these six directions, the voxel ranges of the three-dimensional model in the front, back, up, down, left, and right directions can be determined. The voxel coordinates corresponding to each vertex of the three-dimensional model are then determined by combining the voxel ranges in the six directions, and the three-dimensional model is voxelized to obtain the voxelized model.
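One possible way to realize this step is sketched below. It assumes that, for each of the three axes, the pair of opposite depth images has already been converted into per-column first and last occupied voxel indices (the hypothetical near and far maps); a voxel is kept only if it lies inside the surface interval along every axis.

import numpy as np

def fill_voxels_from_depth_ranges(near, far, res):
    # near[a], far[a] : (res x res) integer maps for axis a in {0:x, 1:y, 2:z},
    # giving, for each column of voxels along that axis, the first and last
    # occupied voxel index seen from the two opposite viewing directions
    # (derived beforehand from the depth images; -1 marks an empty column).
    inside = np.ones((res, res, res), dtype=bool)
    idx = np.arange(res)
    # Axis 0 (x): columns indexed by (y, z).
    x = idx[:, None, None]
    inside &= (near[0][None, :, :] >= 0) & (x >= near[0][None, :, :]) & (x <= far[0][None, :, :])
    # Axis 1 (y): columns indexed by (x, z).
    y = idx[None, :, None]
    inside &= (near[1][:, None, :] >= 0) & (y >= near[1][:, None, :]) & (y <= far[1][:, None, :])
    # Axis 2 (z): columns indexed by (x, y).
    z = idx[None, None, :]
    inside &= (near[2][:, :, None] >= 0) & (z >= near[2][:, :, None]) & (z <= far[2][:, :, None])
    return inside

Intersecting the intervals from the three axis pairs is a common approximation for solid voxelization from six views; deep concavities hidden from all six directions may be over-filled, which is usually acceptable when the grid is only used to measure interior distances.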
Step S14: determining weight coefficients for the three-dimensional bone based on the voxelized model.
By converting the three-dimensional model into a voxelized model, the distance between a vertex of the three-dimensional model and a bone can be measured by the distance between voxels, so the weight coefficients of the three-dimensional skeleton can be determined based on the voxelized model.
Specifically, referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of step S14 in fig. 1. The step S14 may specifically include:
step S141: determining the position of each vertex of the three-dimensional model in the voxelized model.
Step S142: determining the distance between each vertex of the three-dimensional model and the three-dimensional skeleton in the voxelized model according to the positional relationship of all bone pairs and the adjacency between voxels in the voxelized model.
Step S143: determining the weight coefficients of the three-dimensional skeleton according to the distance between each vertex of the three-dimensional model and the three-dimensional skeleton in the voxelized model.
It can be understood that existing skinning methods for three-dimensional models determine the distance feature between a vertex and a bone by computing the Euclidean distance between the vertex of the three-dimensional model and the bone, from which the skinning weight between the vertex and the three-dimensional skeleton is obtained. However, in practice, on curved surfaces the Euclidean distance cannot accurately represent the distance between a bone and a vertex of the three-dimensional model, which makes the final skinning weights inaccurate. For example, when a three-dimensional clothed human body model stands naturally, the bone of the elbow joint may lie close to the area under the ribs. The Euclidean distance between the elbow joint and the vertices of that area is then small, so the resulting skinning weights between those vertices and the elbow joint are large; that is, the vertices under the ribs are strongly influenced by the elbow joint, whereas in reality they should be influenced only slightly or not at all. When the model then moves with the skeleton, abnormalities such as severe deformation and interpenetration of patches may occur, reducing the accuracy of the skinning weights.
By converting the three-dimensional model into the voxelized model, the position of each vertex of the three-dimensional model in the voxelized model can be determined. Further, since the positions of the bones and bone points of the three-dimensional skeleton in the voxelized model can be determined from the positional relationship of all bone pairs, the distance between each vertex of the three-dimensional model and the three-dimensional skeleton in the voxelized model can be determined from the position of each vertex in the voxelized model and the positions of the bones and bone points in the voxelized model. Because this distance is constrained by the adjacency between voxels in the voxelized model, the distance between any vertex and any bone point must be computed along the voxels. Therefore, the weight coefficients of the three-dimensional skeleton determined from the distances between the vertices of the three-dimensional model and the three-dimensional skeleton in the voxelized model are more accurate.
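The following sketch illustrates the idea of measuring vertex-to-bone distances along the occupied voxels (a breadth-first search through the voxel grid) and turning them into normalized weight coefficients. The inverse-distance weighting and the per-vertex normalization are common heuristics assumed here for illustration; the patent does not fix a particular formula.

import numpy as np
from collections import deque

def voxel_distances(inside, seeds):
    # Distance (in voxel steps) from the seed voxels (e.g. voxels crossed by one
    # bone) to every occupied voxel, walking only through the model interior.
    dist = np.full(inside.shape, np.inf)
    queue = deque()
    for s in seeds:
        if inside[s]:
            dist[s] = 0.0
            queue.append(s)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < inside.shape[i] for i in range(3)) and inside[n] and dist[n] == np.inf:
                dist[n] = dist[(x, y, z)] + 1.0
                queue.append(n)
    return dist

def skin_weights(vertex_voxels, bone_seed_lists, inside, eps=1e-6):
    # Per-vertex weight for each bone: inverse of the voxel-path distance,
    # normalized so the weights of one vertex sum to 1.
    dists = np.stack([voxel_distances(inside, seeds)
                      for seeds in bone_seed_lists])          # (bones, X, Y, Z)
    per_vertex = np.array([dists[:, x, y, z] for x, y, z in vertex_voxels])
    raw = 1.0 / (per_vertex + eps)                            # unreachable bones get ~0
    return raw / np.maximum(raw.sum(axis=1, keepdims=True), eps)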
Step S15: optimizing the influence range and the weight coefficients of the three-dimensional skeleton to complete the skinning of the three-dimensional model.
It is understood that bone points have a hierarchical parent-child relationship in which a parent bone point controls its child bone points. Consequently, some parent bone points can affect the motion of a region they do not directly control, because a child bone point they control affects that region and the influence of the child is propagated through the parent. The weight coefficient of the three-dimensional skeleton expresses the degree to which a bone point influences a vertex of the three-dimensional model: the greater the influence, the more the position of the vertex changes when the position of the bone point changes. Therefore, by adjusting the influence range and the weight coefficients of the three-dimensional skeleton, the motions of different parts of the virtual character can be adjusted flexibly, achieving richer animation effects.
In this scheme, a corresponding three-dimensional skeleton is generated from the pre-established three-dimensional model; the three-dimensional model and the three-dimensional skeleton are normalized; the three-dimensional model is voxelized to obtain a voxelized model; the weight coefficients of the three-dimensional skeleton are determined based on the voxelized model; and the influence range and the weight coefficients of the three-dimensional skeleton are optimized to complete the skinning of the three-dimensional model. By voxelizing the three-dimensional model, the positional relationship between the skeleton points and the model vertices can be determined from the voxelized model, the weight coefficients of the three-dimensional skeleton can be determined, and the three-dimensional model can be quickly bound to the three-dimensional skeleton. A skinning result is thus produced automatically and quickly for the three-dimensional model to be bound, and skinning of complex three-dimensional models can also be handled.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating an embodiment of step S15 in fig. 1. In an embodiment, the step S15 may specifically include:
step S151: and defining the influence range of the three-dimensional bone.
The influence range of a bone is adjusted so that the size and position of the bone match the motion region of the three-dimensional model that the bone controls. Specifically, step S151 may include: determining all connected vertices according to the topological connection relationship between the vertices of the three-dimensional model pre-bound to each bone and the voxelized model; then computing the shortest distance between the connected vertices and merging those whose shortest distance is smaller than a preset distance threshold, obtaining several connected vertex sets; and finally retaining the largest connected vertex set and clearing the weight coefficients of all vertices in the other connected vertex sets. In this way, nearby connected vertices are merged, only the connected vertex set with the largest influence range is retained, and the weight coefficients of all vertices in the smaller sets are cleared, which determines the final influence range of the bone and keeps the retained bone topology optimal.
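A simplified sketch of this pruning idea follows: it finds the connected patches of vertices pre-bound to one bone by flood-filling over the mesh adjacency, keeps the largest patch, and clears the bone's weights on the rest. The merging of patches that lie within a preset distance threshold is omitted for brevity, and the data layout (adjacency dictionary, per-vertex weight matrix) is an assumption.

import numpy as np
from collections import deque

def prune_bone_influence(bound_vertices, adjacency, weights, bone):
    # bound_vertices : iterable of vertex indices pre-bound to `bone`
    # adjacency      : dict mapping vertex index -> list of neighbouring vertex indices
    # weights        : (num_vertices x num_bones) array, modified in place
    bound = set(bound_vertices)
    seen = set()
    components = []
    for v in bound:
        if v in seen:
            continue
        # Flood fill restricted to the pre-bound vertices.
        comp, queue = [], deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adjacency.get(u, []):
                if w in bound and w not in seen:
                    seen.add(w)
                    queue.append(w)
        components.append(comp)
    if not components:
        return
    components.sort(key=len, reverse=True)
    for comp in components[1:]:               # everything but the largest patch
        weights[np.array(comp), bone] = 0.0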
Step S152: smoothing the weight coefficients of the three-dimensional skeleton based on the geodesics between each vertex of the three-dimensional model and the three-dimensional skeleton in the voxelized model.
The geodesic distance between each vertex of the three-dimensional model in the voxelized model and the three-dimensional skeleton is computed, the degree to which each bone influences the deformation of each vertex is then computed from that geodesic distance, and the weight coefficients of the three-dimensional skeleton are smoothed based on this influence, which improves the smoothness of the skinning.
Specifically, step S152 may include: determining, based on the geodesics between the vertices of the three-dimensional model and the three-dimensional skeleton in the voxelized model, the positional relationship between the vertices whose weights were not cleared and the vertices whose weights were cleared; computing the weight coefficients of the vertices whose weights were not cleared from the weight coefficients of the nearest and adjacent vertices; and smoothing all vertices with these weight coefficients based on the topological connection relationship between the vertices of the three-dimensional model pre-bound to each bone and the voxelized model. In other words, by computing the geodesic distance between each vertex of the three-dimensional model and the three-dimensional skeleton in the voxelized model, the positional relationship between the vertices whose weights were not cleared and those whose weights were cleared can be determined; the weight coefficient of a vertex whose weight was not cleared, i.e., the influence weight derived from the geodesic distance, is computed from the weight coefficients of its nearest and adjacent vertices; and finally, based on the topological connection relationship between the vertices pre-bound to each bone and the voxelized model, the weight coefficients of all vertices obtained in step S14 are smoothed with the influence weights derived from the geodesic distances, improving the smoothness of the skinning. A sketch of a generic smoothing pass is given below.
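The sketch performs a Laplacian-style averaging of the per-vertex weight rows over the mesh adjacency and re-normalizes them. The number of iterations and the blending factor are illustrative parameters, and this is a generic smoothing scheme standing in for the patent's geodesic-weighted rule, not its exact formulation.

import numpy as np

def smooth_weights(weights, adjacency, iterations=3, alpha=0.5):
    # Each vertex's weight row is pulled toward the average of its neighbours,
    # then re-normalized so every row still sums to 1.
    weights = np.asarray(weights, dtype=float).copy()
    for _ in range(iterations):
        new = weights.copy()
        for v, neighbours in adjacency.items():
            if not neighbours:
                continue
            avg = weights[neighbours].mean(axis=0)
            new[v] = (1.0 - alpha) * weights[v] + alpha * avg
        row_sums = new.sum(axis=1, keepdims=True)
        weights = new / np.maximum(row_sums, 1e-8)
    return weights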
Step S153: performing the skinning operation using the smoothed weight coefficients of the three-dimensional skeleton and binding the three-dimensional model to the three-dimensional skeleton.
After the weight coefficients of the three-dimensional skeleton are determined, the three-dimensional model of the virtual character can be bound to the three-dimensional skeleton to complete the skinning of the virtual character. Specifically, the weight coefficients of the three-dimensional skeleton represent the degree to which each bone point influences each vertex. For each vertex, the bone point with the largest influence can be selected as its binding point; when the skinning operation is executed, each vertex corresponds to one binding point, and binding the three-dimensional model to the three-dimensional skeleton completes the skinning of the virtual character.
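A minimal sketch of this binding step, assuming a per-vertex weight matrix and one 4x4 transform per bone (both hypothetical inputs): each vertex is assigned the bone with the largest weight and then moved rigidly by that bone's transform.

import numpy as np

def bind_vertices(weights):
    # For each vertex, pick the bone with the largest weight as its binding bone.
    return np.argmax(weights, axis=1)

def deform(vertices, binding, bone_transforms):
    # Rigidly move every vertex by the 4x4 transform of its binding bone.
    vertices = np.asarray(vertices, dtype=float)
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    out = np.empty_like(vertices)
    for i, b in enumerate(binding):
        out[i] = (bone_transforms[b] @ homo[i])[:3]
    return out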
Referring to fig. 7a to 7g, fig. 7a to 7g are schematic diagrams of an application scenario of the skinning method for a three-dimensional model of the present application. Fig. 7a shows the three-dimensional model of a virtual character; from this model the three-dimensional skeleton shown in fig. 7b can be generated. The three-dimensional skeleton and the three-dimensional model are then bound according to the skinning method of the present application, giving the fused result shown in fig. 7c. As shown in fig. 7d to 7g, the virtual character after the skinning operation exhibits good deformation in different poses, and a good display effect is also achieved for a virtual character wearing multiple layers of close-fitting clothing.
Referring to fig. 8, fig. 8 is a schematic frame diagram of an embodiment of a skinning apparatus for a three-dimensional model of the present application. The skinning apparatus 80 for a three-dimensional model comprises: a skeleton generation module 800, configured to generate a corresponding three-dimensional skeleton according to a pre-established three-dimensional model; a normalization module 802, configured to normalize the three-dimensional model and the three-dimensional skeleton; a voxelization module 804, configured to voxelize the three-dimensional model to obtain a voxelized model; a weight determination module 806, configured to determine weight coefficients of the three-dimensional skeleton based on the voxelized model; and an optimization processing module 808, configured to optimize the influence range and the weight coefficients of the three-dimensional skeleton to complete the skinning of the three-dimensional model.
In some embodiments, the bone generation module 800 performs the step of generating a corresponding three-dimensional bone from a pre-established three-dimensional model, comprising: rectifying the position and orientation of the three-dimensional model; generating a corresponding three-dimensional skeleton by using a three-dimensional skeleton detection method based on deep learning or a manual adding mode; and determining the parent bone point of each bone point in the three-dimensional bone based on the adjacent relationship of the bones, thereby determining the position relationship of all the bone pairs.
In some embodiments, the normalization module 802 performs the step of normalizing the three-dimensional model and the three-dimensional skeleton, including: calculating a bounding box of the three-dimensional model and determining scaling and translation parameters; and normalizing the three-dimensional model and the corresponding three-dimensional skeleton using the scaling and translation parameters.
In some embodiments, the voxelization module 804 performs voxelization on the three-dimensional model to obtain a voxelized model, including: calculating model-view-projection matrices of the three-dimensional model in several different directions; acquiring depth images in the several different directions using frame buffering, based on the model-view-projection matrices; determining the voxel ranges of the three-dimensional model in the several different directions using the depth images; and voxelizing the three-dimensional model by combining the voxel ranges in the several different directions to obtain the voxelized model.
In some embodiments, weight determination module 806 performs the step of determining weight coefficients for the three-dimensional bone based on the voxelized model, including: determining the positions of the vertexes of the three-dimensional model in the voxelized model; determining the distance between each vertex of the three-dimensional model and the three-dimensional bone in the voxelized model according to the position relation of all the bone pairs and the adjacent relation between the voxels in the voxelized model; and determining the weight coefficient of the three-dimensional bone according to the distance between each vertex of the three-dimensional model and the three-dimensional bone in the voxelized model.
In some embodiments, the optimization module 808 performs an optimization process on the influence range and the weight coefficient of the three-dimensional bone to complete the skinning of the three-dimensional model, including: defining the influence range of the three-dimensional bone; based on geodesic lines between each vertex of the three-dimensional model and the three-dimensional skeleton in the voxelized model, smoothing the weight coefficient of the three-dimensional skeleton; and performing skinning operation by using the weight coefficient of the smoothed three-dimensional skeleton, and binding the three-dimensional model to the three-dimensional skeleton.
In some embodiments, the optimization module 808 performs the step of delineating the three-dimensional bone's area of influence, including: determining all connected vertexes according to the topological connection relation between the vertexes of the three-dimensional model and the voxelized model, which are pre-bound to all bones; calculating the shortest distance between all the connected vertexes, and combining all the connected vertexes with the shortest distance smaller than a preset distance threshold value to obtain a plurality of divided connected vertex sets; and reserving the maximum connected vertex set, and clearing the weight coefficients of all the vertices of other connected vertex sets.
In some embodiments, the optimization module 808 performs the step of smoothing the weight coefficients of the three-dimensional skeleton based on the geodesics between the vertices of the three-dimensional model and the three-dimensional skeleton in the voxelized model, including: determining, based on those geodesics, the positional relationship between the vertices whose weights were not cleared and the vertices whose weights were cleared; computing the weight coefficients of the vertices whose weights were not cleared from the weight coefficients of the nearest and adjacent vertices; and smoothing all vertices with these weight coefficients based on the topological connection relationship between the vertices of the three-dimensional model pre-bound to each bone and the voxelized model.
Referring to fig. 9, fig. 9 is a schematic diagram of a frame of an embodiment of an electronic device according to the present application. The electronic device 90 comprises a memory 901 and a processor 902 coupled to each other, and the processor 902 is configured to execute program instructions stored in the memory 901 to implement the steps of any one of the embodiments of the skinning method for three-dimensional models described above. In one particular implementation scenario, the electronic device 90 may include, but is not limited to: microcomputer, server.
In particular, the processor 902 is configured to control itself and the memory 901 to implement the steps of any of the above-described embodiments of the skinning method for a three-dimensional model. The processor 902 may also be referred to as a CPU (Central Processing Unit). The processor 902 may be an integrated circuit chip having signal processing capabilities. The processor 902 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. In addition, the processor 902 may be implemented jointly by multiple integrated circuit chips.
Referring to fig. 10, fig. 10 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 100 stores program instructions 1000 executable by a processor, the program instructions 1000 for implementing the steps of any one of the above-described embodiments of the skinning method for a three-dimensional model.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (11)

1. A method for skinning a three-dimensional model, the method comprising:
generating a corresponding three-dimensional skeleton according to a pre-established three-dimensional model;
normalizing the three-dimensional model and the three-dimensional skeleton;
carrying out voxelization processing on the three-dimensional model to obtain a voxelization model;
determining weight coefficients for the three-dimensional bone based on the voxelized model;
and optimizing the influence range and the weight coefficients of the three-dimensional skeleton to complete the skinning of the three-dimensional model.
2. The method for skinning a three-dimensional model according to claim 1, wherein said generating a corresponding three-dimensional bone from a pre-established three-dimensional model comprises:
rectifying the position and orientation of the three-dimensional model;
generating a corresponding three-dimensional skeleton by using a three-dimensional skeleton detection method based on deep learning or a manual adding mode;
and determining the parent bone point of each bone point in the three-dimensional bone based on the adjacent relationship of the bones, thereby determining the position relationship of all the bone pairs.
3. The method for skinning a three-dimensional model according to claim 1, wherein said normalizing said three-dimensional model and said three-dimensional skeleton comprises:
calculating a bounding box of the three-dimensional model, and determining scaling and translation parameters;
and normalizing the three-dimensional model and the corresponding three-dimensional skeleton using the scaling and translation parameters.
4. The method for skinning a three-dimensional model according to claim 1, wherein the voxel processing of the three-dimensional model to obtain a voxel model comprises:
calculating model view projection matrixes of the three-dimensional model in a plurality of different directions;
based on the model view projection matrices, acquiring depth images in the plurality of different directions by using frame buffering;
determining a voxel range of the three-dimensional model in the plurality of different directions using the depth image;
and performing voxelization on the three-dimensional model by combining the voxel ranges in the plurality of different directions to obtain a voxelized model.
5. The method for skinning a three-dimensional model according to claim 1, wherein said determining weight coefficients of said three-dimensional bone based on said voxelized model comprises:
determining the positions of the vertexes of the three-dimensional model in the voxelized model;
determining the distance between each vertex of the three-dimensional model and the three-dimensional bone in the voxelized model according to the position relation of all the bone pairs and the adjacent relation between the voxels in the voxelized model;
and determining the weight coefficient of the three-dimensional bone according to the distance between each vertex of the three-dimensional model and the three-dimensional bone in the voxelized model.
6. The skinning method of the three-dimensional model according to claim 1, wherein the optimizing the influence range and the weight coefficient of the three-dimensional skeleton to complete the skinning of the three-dimensional model comprises:
defining the influence range of the three-dimensional bone;
based on geodesic lines between each vertex of the three-dimensional model and the three-dimensional skeleton in the voxelized model, smoothing the weight coefficient of the three-dimensional skeleton;
and performing skinning operation by using the weight coefficient of the smoothed three-dimensional skeleton, and binding the three-dimensional model to the three-dimensional skeleton.
7. The method for skinning a three-dimensional model according to claim 6, wherein said defining a range of influence of said three-dimensional bone comprises:
determining all connected vertexes according to the topological connection relation between the vertexes of the three-dimensional model and the voxelized model, which are pre-bound to all bones;
calculating the shortest distance between all the connected vertexes, and combining all the connected vertexes with the shortest distance smaller than a preset distance threshold value to obtain a plurality of divided connected vertex sets;
and reserving the maximum connected vertex set, and clearing the weight coefficients of all the vertices of other connected vertex sets.
8. The method for skinning a three-dimensional model according to claim 7, wherein smoothing the weight coefficients of the three-dimensional skeleton based on a geodesic between each vertex of the three-dimensional model and the three-dimensional skeleton in the voxelized model comprises:
determining the positional relationship between the vertices whose weights were not cleared and the vertices whose weights were cleared, based on the geodesics between the vertices of the three-dimensional model and the three-dimensional skeleton in the voxelized model;
calculating the weight coefficients of the vertices whose weights were not cleared by using the weight coefficients of the nearest and adjacent vertices;
and smoothing all the vertexes by using the weight coefficients based on the topological connection relation between the vertexes of the three-dimensional model and the voxelized model, which are pre-bound to all the bones.
9. A skinning apparatus for a three-dimensional model, comprising:
the skeleton generation module is used for generating a corresponding three-dimensional skeleton according to a pre-established three-dimensional model;
a normalization module for normalizing the three-dimensional model and the three-dimensional bone;
the voxelization module is used for voxelizing the three-dimensional model to obtain a voxelization model;
a weight determination module to determine weight coefficients for the three-dimensional bone based on the voxelized model;
and the optimization processing module is used for optimizing the influence range and the weight coefficients of the three-dimensional skeleton to complete the skinning of the three-dimensional model.
10. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the skinning method of the three-dimensional model of any of claims 1-8.
11. A computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the skinning method of the three-dimensional model of any of claims 1 to 8.
CN202110814927.9A 2021-07-19 2021-07-19 Skinning method, skinning device, skinning equipment and computer-readable storage medium for three-dimensional model Pending CN113658305A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110814927.9A CN113658305A (en) 2021-07-19 2021-07-19 Skinning method, skinning device, skinning equipment and computer-readable storage medium for three-dimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110814927.9A CN113658305A (en) 2021-07-19 2021-07-19 Skinning method, skinning device, skinning equipment and computer-readable storage medium for three-dimensional model

Publications (1)

Publication Number Publication Date
CN113658305A (en) 2021-11-16

Family

ID=78477497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110814927.9A Pending CN113658305A (en) 2021-07-19 2021-07-19 Skinning method, skinning device, skinning equipment and computer-readable storage medium for three-dimensional model

Country Status (1)

Country Link
CN (1) CN113658305A (en)

Similar Documents

Publication Publication Date Title
US9305387B2 (en) Real time generation of animation-ready 3D character models
CN110807836B (en) Three-dimensional face model generation method, device, equipment and medium
US8797328B2 (en) Automatic generation of 3D character animation from 3D meshes
US9978175B2 (en) Real time concurrent design of shape, texture, and motion for 3D character animation
Stoll et al. Fast articulated motion tracking using a sums of gaussians body model
Wang et al. Feature based 3D garment design through 2D sketches
US10489956B2 (en) Robust attribute transfer for character animation
WO2010060113A1 (en) Real time generation of animation-ready 3d character models
JP7294788B2 (en) Classification of 2D images according to the type of 3D placement
CN111382618B (en) Illumination detection method, device, equipment and storage medium for face image
JP4463597B2 (en) 3D drawing model generation method and program thereof
CN116863044A (en) Face model generation method and device, electronic equipment and readable storage medium
Starck et al. Animated statues
CN113658305A (en) Skinning method, skinning device, skinning equipment and computer-readable storage medium for three-dimensional model
KR101634461B1 (en) Apparatus and Method for generating facial wrinkles generation using sketch
Villa-Uriol et al. Automatic creation of three-dimensional avatars
Erdem A new method for generating 3-D face models for personalized user interaction
US20240096016A1 (en) System And Method For Virtual Object Asset Generation
CN116912433B (en) Three-dimensional model skeleton binding method, device, equipment and storage medium
US20230196702A1 (en) Object Deformation with Bindings and Deformers Interpolated from Key Poses
US20240029358A1 (en) System and method for reconstructing 3d garment model from an image
CN117876492A (en) Posture adjustment method, device, equipment and storage medium
Bakerman Creating 3D human character mesh prototypes from a single front-view sketch
CN118115663A (en) Face reconstruction method and device and electronic equipment
Khoo et al. Automated Body Structure Extraction from Arbitrary 3D Mesh

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination