CN116362133A - Framework-based two-phase flow network method for predicting static deformation of cloth in target posture - Google Patents

Framework-based two-phase flow network method for predicting static deformation of cloth in target posture

Info

Publication number
CN116362133A
Authority
CN
China
Prior art keywords: grid, character, skeleton, under, residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310349759.XA
Other languages
Chinese (zh)
Inventor
李玉迪
唐敏
陈潇瑞
童若锋
安柏霖
杨双才
李垚
寇启龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN202310349759.XA
Publication of CN116362133A
Legal status: Pending

Classifications

    • G06F30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06F2113/12: Cloth
    • G06F2119/14: Force analysis or force optimisation, e.g. static or dynamic forces
    • Y02T90/00: Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation


Abstract

The invention discloses a skeleton-based two-phase flow network method for predicting the static deformation of cloth under a target posture. The method can process any three-dimensional character with a skeleton structure. Specifically, the clothing mesh in the template state and the skeleton joint transformation matrix of the three-dimensional character under the target posture are input into the two branches of a two-phase flow network: a mesh residual stream and a skeleton residual stream. The skeleton residual stream learns a smooth residual component relative to the standard-pose garment mesh, and the mesh residual stream learns a detail residual component relative to the standard-pose garment mesh. The two residual streams are weighted and added to the garment mesh template under the standard posture, and a skinning operation then yields the deformation of the target garment. The invention can be widely applied to scenes with high real-time requirements, such as virtual fitting and games, while capturing the realism characteristic of physics-based simulation.

Description

Framework-based two-phase flow network method for predicting static deformation of cloth in target posture
Technical Field
The invention relates to the technical field of cloth simulation in flexible-body motion simulation, and in particular to a skeleton-based two-phase flow network method for predicting the static deformation of cloth under a target posture.
Background
Cloth animation is an important problem in computer graphics with very wide applications, including video games, special effects, and virtual fitting. Cloth simulation has long been considered a challenging task due to the complexity of cloth modeling and the irregular appearance of cloth deformation. In addition, many applications require interactive performance on commodity hardware (including mobile devices), which places high demands on the real-time performance of cloth simulation. Extensive studies in the literature address these difficulties and can be broadly divided into two categories according to their goals: physics-based simulation and real-time cloth deformation techniques. Physics-based simulation aims at realistic, high-quality cloth deformation results; because it targets simulation fidelity, it requires a large amount of computation time and has poor real-time performance. Real-time cloth deformation techniques aim at highly responsive dynamic deformation results; because they target real-time performance, the accuracy of the simulated cloth is limited and the realism of the simulation is poor.
To pursue both higher simulation realism and real-time performance, related works have used GPU hardware acceleration to optimize physics-based cloth simulation. Using single or multiple GPUs accelerates the simulation while preserving its fidelity, but the speedup is limited by GPU hardware performance, and GPU algorithms hit an acceleration bottleneck that is difficult to break through to obtain still faster simulation. In addition, real-time cloth simulation algorithms based on GPU hardware can only be deployed on platforms with a GPU, and are difficult to apply in other scenarios without one, such as mobile devices.
To pursue both higher simulation realism and real-time performance, other research starts from real-time deformation techniques: on the premise of keeping their high responsiveness, additional constraints are added to obtain higher-precision cloth deformation results and thereby improve realism. However, these methods improved from real-time deformation techniques often contain insufficient physical information, and their results rarely reach the physical realism of physics-based simulation.
To pursue both higher simulation realism and real-time performance, researchers have recently tried to combine physics-based simulation with deep learning, aiming to compute highly realistic cloth deformation in real time by exploiting the physical realism of physics-based simulation and the speed of deep learning. Such methods generally take cloth deformations obtained by physics-based simulation as a data set and build a neural network architecture that is trained on this deformation data set until convergence; the converged network captures the nonlinearity present in cloth deformation in the physics-based simulation results. At run time, the neural network predicts cloth deformation in real time, and the predicted deformation implicitly contains the physics-based deformation properties. Moreover, a deep learning network does not depend on GPU hardware and can be conveniently deployed on platforms without a GPU, such as mobile devices, giving it a wider application range than GPU-based acceleration algorithms.
Although methods combining physics-based simulation and deep learning all train a neural network, the trained networks play different roles in the overall structure: some aim to replace one or more parts of the whole cloth deformation pipeline with a neural network, while others directly predict cloth deformation in an end-to-end manner. These various neural network methods still have the following limitations:
(1) Traditional methods construct an SMPL-style model of the clothing based on the SMPL parameterized human body model. Such methods are limited by the SMPL parameters and are difficult to deploy directly on other body models; moreover, for loose clothing it is often difficult to construct an SMPL-style garment model directly, which greatly reduces the realism of the final deformation;
(2) Existing methods adopt graph convolution architectures to extract features from mesh data and then perform subsequent operations on the extracted features, limited by the mesh data type; most adopt network structures such as MLPs for the subsequent steps. Architectures such as MLPs lead to overly large trained model parameters, which slows down inference;
(3) Existing methods are built on the human skeleton structure and are difficult to apply directly to arbitrary character data with skeleton structures, such as skeletal characters like fish or cats; corresponding research on the deformation of clothing worn by arbitrary skeletal characters is lacking.
Disclosure of Invention
To overcome the problems of the prior art, the invention provides a skeleton-based two-phase flow network method for predicting the static deformation of cloth under a target posture. The aim is to build a skinning model of the clothing worn on a character based on the skeleton structure of the input character: the input skeleton transformation matrix and mesh data of the target character pass through a skeleton residual stream and a mesh residual stream respectively to obtain a smooth residual component and a detail residual component; these are added to the garment mesh under the standard posture, and skinning then yields the garment deformation under the specific posture. The invention can process not only human characters with skeleton structures but also other skeletal characters, such as non-human animal characters like fish and cats. In fact, the invention can predict the deformation of the clothing worn by any character that possesses a skeletal structure. The types of clothing that can be predicted include not only tight-fitting clothing but also loose clothing.
To achieve the above functions, according to an aspect of the present disclosure, there is provided a skeleton-based two-phase flow network method for predicting the static deformation of cloth under a target posture, including:
constructing a two-phase flow network comprising two branches, a skeleton residual stream and a mesh residual stream, specifically: inputting the character skeleton transformation matrix under the target posture into the skeleton residual stream branch to obtain a smooth residual component relative to the garment mesh under the standard posture;
applying a skinning operation to the character skeleton under the target posture to obtain the character mesh under the target posture, forming graph structure data from the character mesh and the worn garment mesh under the standard posture by means of a nearest-point index, and inputting the graph structure data into the mesh residual stream branch to obtain a detail residual component relative to the garment mesh under the standard posture;
and adding the smooth residual component and the detail residual component obtained from the skeleton residual branch and the mesh residual branch to the garment mesh under the standard posture, and obtaining the deformed garment mesh under the target posture after LBS skinning according to the skinning weights of the garment vertices.
The characters in the technical scheme comprise human characters containing skeleton structures, and also comprise any other characters containing skeleton structures, such as fish characters containing skeleton structures, kitten characters containing skeleton structures and the like. The number of skeletons of the character is not particularly limited.
The types of clothes in the above-described technical scheme include a slimming type of clothes, and also a loose type of clothes, and there is no particular limitation on the shape of the clothes. For a non-human skeletal character, the type of clothing it is wearing may be its exclusive type of clothing, such as pet clothing.
The transformation matrix of each skeleton joint in the above technical scheme can be a 4×4 matrix; the transformation matrices of all skeleton joints of the character are flattened and concatenated into a one-dimensional vector, which is input into the subsequent skeleton residual stream branch.
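A minimal NumPy sketch of this flatten-and-concatenate step (the function name and shapes are illustrative, not from the patent):

```python
import numpy as np

def flatten_joint_transforms(transforms):
    """Flatten a stack of per-joint 4x4 transformation matrices into the
    one-dimensional input vector of the skeleton residual stream branch."""
    transforms = np.asarray(transforms, dtype=np.float64)
    assert transforms.ndim == 3 and transforms.shape[1:] == (4, 4)
    return transforms.reshape(-1)  # concatenate the flattened matrices

# Example: a character with two joints, both at the identity transform.
gamma_vec = flatten_joint_transforms(np.stack([np.eye(4), np.eye(4)]))
print(gamma_vec.shape)  # (32,)
```

For a character with k joints, the resulting vector has 16k entries.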
The specific structure of the skeleton residual stream branch in the above technical scheme can be composed of the following components:
a Pose encoder (Pose encoding) module, aimed at extracting features of the character skeleton transformation matrix as a pose vector, which may specifically consist of an MLP;
a trainable mesh group module, for training a group of mesh matrices that are subsequently fused with the pose vector to obtain the smooth residual component;
in this technical scheme, the pose vector in the skeleton residual stream branch is used as weights for a weighted summation over the mesh matrices in the trainable mesh group, yielding the smooth residual component.
According to the technical scheme, the character skeleton performs an LBS skinning operation using the skinning weights of its bound vertices, thereby obtaining the character mesh data under the target posture.
The nearest-point index in the above technical solution means that, for each vertex on the garment mesh under the standard posture, the closest vertex on the character mesh under the standard posture is found, and the index of that nearest character-mesh vertex is stored. The nearest-point index only needs to be computed once at the start of the program, and the index values remain unchanged thereafter.
In the above technical solution, the graph structure data formed using the nearest-point index is obtained as follows: according to the index of the nearest character-mesh vertex found for each garment-mesh vertex, the spatial position coordinates of those vertices are looked up on the character mesh under the target posture. The spatial position coordinates of the vertices on the character mesh and the topology information of the garment mesh are used to construct new graph structure data. The topology of the garment mesh is not changed anywhere in the neural network architecture.
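A minimal NumPy sketch of the nearest-point index and the gathering of posed body positions (brute-force search instead of the patent's KD-tree; names and shapes are illustrative):

```python
import numpy as np

def nearest_point_index(garment_verts, body_verts):
    """For each garment vertex (standard pose), return the index of the
    closest character-mesh vertex. Computed once; a KD-tree would give the
    same indices faster."""
    d = np.linalg.norm(garment_verts[:, None, :] - body_verts[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

def gather_posed_positions(index, posed_body_verts):
    """Look up the posed positions of the indexed body vertices; together
    with the unchanged garment topology they form the graph input of the
    mesh residual stream."""
    return posed_body_verts[index]

garment = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
body_rest = np.array([[0.1, 0.0, 0.0], [0.9, 0.0, 0.0], [5.0, 0.0, 0.0]])
idx = nearest_point_index(garment, body_rest)        # computed once at startup
body_posed = body_rest + np.array([0.0, 1.0, 0.0])   # character moved to target pose
feats = gather_posed_positions(idx, body_posed)
print(idx)  # [0 1]
```

The index `idx` stays fixed while `body_posed` changes with every target posture.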
The specific structure of the mesh residual stream branch in the above technical scheme is as follows:
a Graph Encoder layer, for extracting features from the new graph structure data constructed above;
a regularization (LayerNorm) layer, intended to normalize the output of the graph encoder layer to speed up the convergence of the network;
an activation function (PReLU) layer, for adding nonlinearity to the mesh residual stream branch to increase its expressive capacity;
a node-level MLP layer, aimed at processing the features extracted from the graph structure data and adding more nonlinearity to the features at each graph node;
a trainable mesh set, intended to optimize a group of mesh matrices that are weighted-summed with the output of the preceding node-level MLP layer.
In the above technical solution, the smooth residual component and the detail residual component are added to the garment mesh under the standard posture. The vertex weights used in the subsequent LBS skinning operation are initialized, at the start of network training, to the skinning weights of the nearest points on the character mesh; these vertex weights are trainable during network training and are optimized together with the loss function.
Compared with existing methods that combine physics-based simulation and deep learning, the beneficial effects of the invention are:
(1) The invention does not depend on parameterized models like human SMPL; it directly processes characters with skeleton structures and, compared with SMPL-based methods, can also process various non-human skeletal characters, such as skeletal fish and cats.
(2) The invention can handle multiple clothing types: both tight-fitting and loose clothing for human characters, as well as irregularly shaped clothing specific to non-human characters.
(3) The invention can construct skinning models for many different types of clothing, and the two-phase flow structure of smooth residual and detail residual branches captures detail features such as wrinkles on the deformed garment mesh more accurately.
(4) The invention exploits the skinning structure of the character while constructing a similar skinning structure for the clothing; compared with other methods it achieves better results with fewer parameters, so the model is smaller and more real-time.
Compared with existing physics-based simulation methods, the beneficial effects of the invention are:
(1) The invention avoids the acceleration bottleneck of GPU-accelerated algorithms and achieves high real-time performance while not depending on GPU hardware; it can run on both GPU and CPU, and can be conveniently deployed on platforms without a GPU, such as mobile devices.
(2) The invention can process garment meshes at multiple resolutions; for higher-resolution garment meshes, where physics-based simulation slows down noticeably, the invention shows no degradation in real-time performance.
Drawings
FIG. 1 is a schematic diagram of the overall process of the present invention.
Fig. 2 is a schematic flow diagram of the skeleton-based two-phase flow network method for predicting the static deformation of cloth under a target posture according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a grid residual stream structure according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made more fully hereinafter with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The skeleton-based two-phase flow network method for predicting the static deformation of cloth under a target posture can construct a skinning model for the clothing worn on any character with a skeleton structure; the applicable skeletal characters are not limited to humanoids, but also include animals such as fish and kittens. The invention constructs the clothing skinning model from character skeleton information through a two-phase flow branch architecture, and can capture both the smooth and the detail characteristics of clothing deformation under the target posture.
According to a standard skeleton skinning model, the mesh of a skeletal character in a target posture is expressed as follows:
M_B(γ) = W(T_B, J, γ, W_B)
wherein γ represents the joint transformation matrices of the character skeleton, M_B(γ) represents the mesh of the character under the target posture, T_B represents the template mesh of the character under the standard posture, J represents the skeleton structure of the character, W_B represents the skinning weight matrix of the vertices, and W(·) represents the skinning function.
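A minimal NumPy sketch of the skinning function W(·) as linear blend skinning (shapes and the helper name are illustrative):

```python
import numpy as np

def lbs(template_verts, joint_transforms, skin_weights):
    """Linear blend skinning: each vertex is moved by the weighted sum of
    its bound joints' 4x4 transforms (a sketch of the skinning function W).

    template_verts:   (n, 3) rest-pose vertex positions
    joint_transforms: (k, 4, 4) per-joint transforms (gamma)
    skin_weights:     (n, k) per-vertex weights, rows summing to 1
    """
    n = template_verts.shape[0]
    homo = np.hstack([template_verts, np.ones((n, 1))])                 # (n, 4)
    # Blend the transforms per vertex, then apply to the homogeneous vertex.
    blended = np.einsum('nk,kij->nij', skin_weights, joint_transforms)  # (n, 4, 4)
    posed = np.einsum('nij,nj->ni', blended, homo)
    return posed[:, :3]

# One joint translating everything by +1 in z.
T = np.eye(4); T[2, 3] = 1.0
verts = np.zeros((2, 3))
out = lbs(verts, T[None], np.ones((2, 1)))
print(out)  # [[0. 0. 1.], [0. 0. 1.]]
```

The same routine applies to both the character mesh and, later, the garment mesh.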
In particular, the parameterized skinning model SMPL captures the dynamics of human soft tissue using shape and pose parameters composed from a set of orthonormal principal components. The model is specifically expressed as
M_B(β, θ) = W(T_B(β, θ), J(β), θ, W_B)
T_B(β, θ) = T_B + B_S(β) + B_P(θ)
wherein β and θ respectively represent the shape coefficients of the SMPL human body and the pose vector containing the joint transformation information; J(β) represents the spatial position coordinates of the skeleton and is a function of the shape coefficients β; T_B(β, θ) represents the human body template mesh under the standard posture and is a function of the shape coefficients β and the pose vector θ; B_S(β) represents the shape blend offset and B_P(θ) represents the pose blend offset.
Similarly, the present invention builds a skinning model for character clothing, formulated as follows:
M_C(γ) = W(T_C(γ), J, γ, W_C)
T_C(γ) = T_C + Δ_S(γ) + Δ_M(γ)
wherein M_C(γ) represents the clothing mesh worn on the character in the target posture; T_C represents the template mesh of the clothing worn by the skeletal character in the standard posture; W_C represents the skinning weight matrix of the garment; for the skinning function W(·), the LBS(·) skinning method is used in the present invention; T_C(γ) represents the corrected clothing mesh in the standard posture; Δ_S(γ) represents the skeleton residual stream and Δ_M(γ) represents the mesh residual stream.
Overall, the network architecture proposed by the present invention is formulated as follows:
M_C(γ) = LBS(T_C + Δ_S(γ) + Δ_M(γ), J, γ, W_C)
wherein LBS(·) represents the skinning-based network architecture.
Specifically, as shown in fig. 2, for the skeleton residual stream branch, the joint transformation matrices of the target pose are input, and a pose vector P ∈ R^m is obtained through the Pose encoder (Pose encoding) module, where m represents the dimension of the vector. The formula is as follows:
P = Φ(γ)
where Φ(·) represents the MLP-based pose encoder (Pose encoding) network.
As a further technical solution, the goal of the skeleton residual stream branch is to learn, for each pair of a character and the clothing it wears, a set of mesh residual matrices D = {B_1, B_2, B_3, …, B_m}. Each matrix B_j, where j ∈ {1, 2, …, m}, is an n×3 matrix of trainable parameters b_00, …, b_02, …, b_n0, …, b_n2, where n represents the number of vertices of the template garment mesh.
As a further technical solution, the pose vector P is fused as weights with the mesh residual matrix set D to obtain the smooth residual Δ_S(γ), formulated as follows:
Δ_S(γ) = Σ_{j=1}^{m} P_j B_j
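The weighted-sum fusion Δ_S(γ) = Σ_j P_j B_j can be sketched in NumPy as follows (dimensions and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 5                   # pose-vector dimension m, garment vertex count n
P = rng.random(m)             # pose vector produced by the pose encoder
D = rng.random((m, n, 3))     # trainable set of m mesh residual matrices (n x 3)

# Smooth residual: weighted sum of the mesh matrices, with P as the weights.
delta_S = np.einsum('j,jnc->nc', P, D)
print(delta_S.shape)  # (5, 3)
```

The result is one per-vertex 3D offset field, added later to the standard-pose garment template.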
on the other hand, for the grid residual flow branch, the skeleton transformation matrix gamma of the character under the target posture is subjected to skin operation to obtain the grid of the character under the target posture before being input into the grid residual flow.
As a further technical solution, the method constructs a KD-tree over the character mesh and the clothing mesh under the standard posture, and then uses the tree to find, for each vertex on the clothing mesh, the index I_C of the closest vertex on the character mesh. From the character mesh under the target posture and the index I_C obtained above, the vertices V = {v_i}, i = 1, …, n on the character mesh are obtained. To strengthen the effect of the mesh residual stream, the invention constructs a reference graph G = (V, E), wherein e_{i,j} = <v_i, v_j> ∈ E represents the connecting edge between two vertices on the garment mesh. The graph G can be represented by an adjacency matrix A, where A_{i,j} = 1 if e_{i,j} ∈ E, and A_{i,j} = 0 otherwise. x_i ∈ X represents the features of each vertex v_i, such as spatial position coordinates, color, etc.
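A small sketch of the adjacency-matrix representation of the reference graph (edge list and sizes are illustrative):

```python
import numpy as np

def adjacency_from_edges(num_verts, edges):
    """Build the symmetric adjacency matrix A of the reference graph:
    A[i, j] = 1 if (i, j) is an edge of the garment mesh, else 0."""
    A = np.zeros((num_verts, num_verts), dtype=np.float64)
    for i, j in edges:
        A[i, j] = 1.0
        A[j, i] = 1.0
    return A

# A triangle: vertices 0-1-2 all connected.
A = adjacency_from_edges(3, [(0, 1), (1, 2), (2, 0)])
print(A)
```

Because the garment topology never changes, A is built once and reused for every pose.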
As a further technical solution, as shown in fig. 3, the specific structure of the mesh residual stream branch is composed as follows:
a Graph Encoder (Graph Encoder) layer for extracting features of the Graph data;
a regularization (LayerNorm) layer for neural network training acceleration;
an activation function (PReLU) layer for enhancing the expression ability of the neural network;
node level MLP for learning grid fusion coefficients;
a trainable grid set for composing the detail residual component.
Wherein the Graph Encoder layer, used for extracting features from the graph data structure, is formulated as follows:
Z^(l+1) = f(Z^(l), A | W^(l))
wherein Z^(l) ∈ R^{n×m} represents the input of the graph encoder layer (m features for each of n vertices), Z^(l+1) represents the output of the graph encoder layer, and W^(l) represents the network parameters. Specifically, the function f(Z^(l), A | W^(l)) is expressed as follows:
f(Z^(l), A | W^(l)) = σ(D̂^(-1/2) Â D̂^(-1/2) Z^(l) W^(l))
Â = A + I
D̂_{ii} = Σ_j Â_{i,j}
wherein I represents the identity matrix of the same size as A, Â is the adjacency matrix with self-loops, D̂ is its degree matrix, and σ(·) is the activation function.
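A minimal NumPy sketch of one such propagation step, with ReLU standing in for the activation σ (the patent's branch uses PReLU and LayerNorm around it; shapes are illustrative):

```python
import numpy as np

def gcn_layer(Z, A, W):
    """One graph-encoder layer: symmetric-normalized propagation
    sigma(D^-1/2 (A + I) D^-1/2 Z W), with ReLU standing in for sigma."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))      # D^-1/2 diagonal
    norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return np.maximum(norm @ Z @ W, 0.0)

# Triangle graph, 2 input features -> 4 output features.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
Z = np.ones((3, 2))
W = np.ones((2, 4)) * 0.5
out = gcn_layer(Z, A, W)
print(out.shape)  # (3, 4)
```

Stacking this layer three to five times, as the patent suggests, corresponds to repeated calls with the same A and per-layer weights.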
Specifically, as shown in fig. 3, the graph encoder layer, regularization layer and activation function (PReLU) layer may be cycled sequentially three to five times to fully extract the features of the graph structure data.
Specifically, the node-level MLP is formulated as follows:
Q = MLP(Z)
wherein Z is the one-dimensional vector obtained by flattening the extracted graph-structure features, and Q = {Q_1, Q_2, Q_3, …, Q_k} is the output vector, where k represents the dimension of the vector and matches the number of matrices in the trainable mesh set.
Specifically, the trainable mesh set is denoted as M = {M_1, M_2, M_3, …, M_k}. Each matrix M_j, where j ∈ {1, 2, …, k}, is an n×3 matrix of trainable parameters m_00, …, m_02, …, m_n0, …, m_n2, where n represents the number of vertices of the template garment mesh.
As a further technical solution, the detail residual Δ_M(γ) is formulated as follows:
Δ_M(γ) = Σ_{j=1}^{k} Q_j M_j
Written as a whole, the detail residual component is formulated as follows:
Δ_M(γ) = Ψ(G)
where Ψ(·) represents the mesh residual stream branch and G is the reference graph constructed above.
As a further technical solution, in order to improve the finally predicted deformation results for different types of clothing, the invention learns a skinning weight residual ΔW_C for different clothing types. ΔW_C is a matrix of trainable network parameters w_00, …, w_nk, where k is the number of character joints and n the number of garment vertices.
As a further technical solution, the fused skinning weight matrix is expressed as follows:
W_C = W̄_C + ΔW_C
wherein W_C represents the new garment mesh skinning weights after fusion, and W̄_C represents the initial clothing skinning weights obtained from the character mesh skinning weights by the KD-tree method.
Overall, the pose embedding function Φ(γ) and the trainable set D are trained as the skeleton residual stream branch network to obtain the smooth residual, and the mesh residual stream branch network Ψ(·) is trained to obtain the detail residual.
The standard posture of the character and the clothing described in the invention is the T pose; other postures are obtained by skinning the meshes under the T pose.
The training data used in the invention is obtained by a physics-based simulation method. The physics-based simulation requires the character mesh and clothing mesh under the T pose and the character motion-sequence meshes of the target postures; the clothing deformation meshes under the target postures are obtained after simulation.
To evaluate the ability of the invention to handle complex characters and clothing, the skeletal characters used include not only humanoid characters but also non-humanoid characters such as monsters, dolphins, and cats. The skeleton of the monster character is similar to that of a human character, while the dolphin and cat skeletons differ considerably: the dolphin skeleton has no hand or foot joint structure, and the cat skeleton has no hand joints but has joint structures for four legs instead.
The method provided by the invention predicts the static deformation of the clothes under the target gesture, and the simulation data used for training is statically balanced for a period of time under each gesture when being obtained, so that the influence of dynamic factors in the simulation process on the deformation result is eliminated.
The proposed method can handle various types of garments. When generating training data, it is necessary not only to design slim-fitting and loose-fitting garments for humanoid characters, but also to design dedicated garment types for non-humanoid characters.
The loss function used in the network training process is:

$$\mathcal{L} = \frac{1}{b}\sum_{k=1}^{b}\frac{1}{N}\sum_{j=1}^{N}\left\|\hat{x}_j^{(k)} - x_j^{(k)}\right\|_2$$

wherein $b$ is the batch size during training, $\hat{x}_j^{(k)}$ denotes the spatial position coordinates of node $j$ on the predicted deformed cloth mesh, $x_j^{(k)}$ denotes the spatial position coordinates of node $j$ on the ground truth, $N$ denotes the number of nodes on the mesh, and $\|\cdot\|_2$ denotes the $L_2$ distance.
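The per-vertex position loss above can be computed in a few lines; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def vertex_l2_loss(pred, gt):
    """Mean per-vertex L2 distance between predicted and ground-truth
    garment meshes; pred and gt have shape (batch, n_vertices, 3)."""
    dists = np.linalg.norm(pred - gt, axis=-1)  # (batch, n_vertices)
    return dists.mean()
```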
The deformed cloth mesh predicted by the network may penetrate the character mesh. These penetrations are eliminated by post-processing; specifically, the penetration error between the predicted cloth mesh and the character mesh is minimized:

$$\min_{\{v_i\}} \sum_{v_i \in V_{pene}} \max\left(0,\; \epsilon - \bar{n}_i^{\top}\left(v_i - \bar{p}_i\right)\right)$$

wherein $V_{pene}$ denotes the set of penetrating vertices on the network-predicted deformed garment mesh; for each penetrating vertex $v_i$, $\bar{p}_i$ is its nearest point on the character mesh and $\bar{n}_i$ is the normal vector at that nearest point; $\epsilon$ denotes the small distance by which the penetrating vertex is pulled outside the character mesh.
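The post-processing step can be approximated with a direct projection instead of an iterative optimization: push each penetrating vertex to its nearest body point plus a small offset along the normal. This is a hedged sketch under simplifying assumptions (nearest-vertex queries stand in for nearest-point-on-surface, and the projection replaces the minimization described above):

```python
import numpy as np
from scipy.spatial import cKDTree

def resolve_penetrations(cloth_verts, body_verts, body_normals, eps=1e-3):
    """Push penetrating cloth vertices just outside the character mesh.
    A vertex is treated as penetrating when it lies on the negative side
    of the normal at its nearest body vertex; it is moved to that vertex
    plus a small offset eps along the normal."""
    tree = cKDTree(body_verts)
    _, idx = tree.query(cloth_verts)
    p, n = body_verts[idx], body_normals[idx]
    signed = np.einsum('ij,ij->i', cloth_verts - p, n)  # signed distance along normal
    pene = signed < 0
    out = cloth_verts.copy()
    out[pene] = p[pene] + eps * n[pene]
    return out
```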
For the deformed garment mesh predicted by the network, the invention measures the quality of the prediction with the following criteria:

$$\epsilon_{dist} = \frac{1}{N}\sum_{i=1}^{N}\left\|\hat{x}_i - x_i\right\|_2$$

$$\epsilon_{norm} = \frac{1}{N}\sum_{i=1}^{N}\left\|\hat{n}_i - n_i\right\|_2$$

wherein $\hat{x}_i$ and $x_i$ denote the spatial position coordinates of node $i$ on the predicted deformed cloth and the ground-truth cloth, respectively; $\hat{n}_i$ and $n_i$ denote the corresponding normal vectors; $N$ denotes the number of nodes on the cloth mesh. $\epsilon_{dist}$ is the spatial-position evaluation metric between the predicted garment mesh and the ground truth, and $\epsilon_{norm}$ is the normal-vector evaluation metric. The two metrics cover two aspects: $\epsilon_{dist}$ measures the deformation of the vertices on the predicted deformed garment mesh, and $\epsilon_{norm}$ measures the surface bending of the predicted deformed cloth mesh.
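Both metrics are simple per-vertex averages; a minimal numpy sketch (the exact form of the normal metric in the original table image is not recoverable, so the L2 difference of unit normals is assumed here):

```python
import numpy as np

def eval_metrics(pred_verts, gt_verts, pred_normals, gt_normals):
    """Evaluation metrics sketched from the definitions above:
    eps_dist - mean per-vertex L2 distance to the ground truth;
    eps_norm - mean per-vertex L2 deviation between unit normals
               (assumed form). All inputs have shape (N, 3)."""
    eps_dist = np.linalg.norm(pred_verts - gt_verts, axis=1).mean()
    eps_norm = np.linalg.norm(pred_normals - gt_normals, axis=1).mean()
    return eps_dist, eps_norm
```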
TABLE 1
(Running time and parameter size of the proposed network model; the original table image is not reproduced here.)
The running time and parameter size of the network model proposed by the invention are shown in Table 1 above. By exploiting the character's skeletal structure, the invention avoids the excessive parameter counts and model sizes of previous learning-based garment deformation prediction methods; for example, the N-Cloth method (Li Y, Tang M, Yang Y, et al. "N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks". Computer Graphics Forum, 2022, 41(2): 547-558) has a model size of 928.8 MB and a running time of 3.33E-2 s. Meanwhile, the proposed method avoids the enormous computation required by traditional physics-based cloth simulation; for example, the ARCSim method (Narain R, Samii A, O'Brien J F. "Adaptive anisotropic remeshing for cloth simulation". ACM Transactions on Graphics (TOG), 2012, 31(6): 1-10) has a simulation running time of 3.45 s.
It should be noted that the proposed skeleton-based two-phase flow network method for static cloth deformation under a target pose still has the following limitations:
(1) The invention targets characters with skeletons; for a mesh without a skeletal structure, a skeleton must first be constructed.
(2) The proposed method predicts the static deformation of the garment under the target pose and cannot predict the influence of dynamic deformation between poses on the garment.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may be modified, or some or all of their technical features may be replaced with equivalents, without causing the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A skeleton-based two-phase flow network method for predicting static cloth deformation under a target pose, characterized by comprising: constructing a two-phase flow network comprising two network branches, a skeleton residual flow and a mesh residual flow, specifically:
inputting the character skeleton transformation matrix under the target pose into the skeleton residual flow branch to obtain the smooth residual component relative to the garment mesh under the standard pose;
applying the skinning operation to the character skeleton under the target pose to obtain the character mesh under the target pose; forming graph-structured data using nearest-point indices computed from the character mesh and the worn garment mesh under the standard pose, and inputting the graph-structured data into the mesh residual flow branch to obtain the detail residual component relative to the garment mesh under the standard pose;
adding the obtained smooth residual component and detail residual component to the garment mesh under the standard pose, and obtaining the deformed mesh of the garment under the target pose via LBS skinning according to the garment vertex skinning weights.
2. The skeleton-based two-phase flow network method for predicting static cloth deformation under a target pose according to claim 1, wherein the skeleton residual flow branch comprises a pose encoder (Pose Encoder) module and a trainable mesh group module; the pose encoder module takes the joint transformation matrix of the target character as input to obtain an implicit pose vector, and the implicit pose vector is used as weight coefficients in a weighted summation over a group of mesh matrices trained by the trainable mesh group module, yielding the smooth residual component relative to the garment mesh under the standard pose.
3. The skeleton-based two-phase flow network method for predicting static cloth deformation under a target pose according to claim 1, wherein the mesh residual flow branch comprises a graph encoder (Graph Encoder) layer, a regularization (LayerNorm) layer, an activation function (PReLU) layer, a node-level MLP module, and a trainable mesh group; the input of the mesh residual flow branch is graph-structured data, whose node attributes and topological connections are fed to the graph encoder layer for feature extraction; the regularization layer normalizes the output of the graph encoder layer to accelerate network convergence; after activation by the PReLU layer, the node-level MLP module compresses the graph features into an implicit vector, and the weighted summation of this vector with a group of mesh matrices trained by the trainable mesh group yields the detail residual component relative to the garment mesh under the standard pose.
4. The skeleton-based two-phase flow network method for predicting static cloth deformation under a target pose according to claim 3, wherein the graph-structured data are constructed as follows:
for each vertex on the garment mesh under the standard pose, find the closest vertex on the character mesh under the standard pose and store its index on the character mesh; use this index to find the spatial position coordinates of the vertex on the character mesh under the target pose; construct new graph data from the spatial positions of the character mesh vertices and the topology information of the garment mesh, i.e.

$$\mathcal{G} = (V, E)$$

wherein $V = \{v_i \mid i = 1, \ldots, n\}$ denotes the vertices of the character mesh; $e_{i,j} = \langle v_i, v_j \rangle \in E$ denotes the connecting edge between two vertices on the garment mesh; the graph $\mathcal{G}$ can be represented by an adjacency matrix $A$, where $A_{i,j} = 1$ if $e_{i,j} \in E$ and $A_{i,j} = 0$ otherwise; $x_i \in X$ denotes the feature attribute of each vertex $v_i$.
5. The skeleton-based two-phase flow network method for predicting static cloth deformation under a target pose according to claim 4, wherein the graph encoder (Graph Encoder) layer for extracting features from the graph data structure is formulated as follows:

$$Z^{(l+1)} = f\left(Z^{(l)}, A \mid W^{(l)}\right)$$

wherein $Z^{(l)}$ denotes the input of the graph encoder layer, with $Z^{(0)} = X \in \mathbb{R}^{n \times m}$ ($m$ features of $n$ vertices); $Z^{(l+1)}$ denotes the output of the graph encoder layer; $W^{(l)}$ denotes the network parameters. Specifically, the function $f(Z^{(l)}, A \mid W^{(l)})$ is expressed as follows:

$$f\left(Z^{(l)}, A \mid W^{(l)}\right) = \sigma\left(\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} Z^{(l)} W^{(l)}\right)$$

$$\hat{A} = A + I$$

$$\hat{D}_{i,i} = \sum_{j} \hat{A}_{i,j}$$

wherein $I$ denotes the identity matrix of the same size as $A$, and $\sigma$ denotes the activation function.
6. The skeleton-based two-phase flow network method for predicting static cloth deformation under a target pose according to claim 1, wherein the garment vertex skinning weights in the LBS skinning operation are trainable network parameters, formed by adding the skinning weights of the character mesh vertices obtained by indexing and a garment skinning increment consisting of trainable network parameters.
7. The skeleton-based two-phase flow network method for predicting static cloth deformation under a target pose according to claim 1, wherein the loss function in the network training process is the vertex position coordinate loss between the training data and the predicted data, and penetrations in the prediction result are eliminated by post-processing.
8. The skeleton-based two-phase flow network method for predicting static cloth deformation under a target pose according to claim 7, wherein the post-processing specifically comprises minimizing the penetration error between the predicted garment mesh and the character mesh:

$$\min_{\{v_i\}} \sum_{v_i \in V_{pene}} \max\left(0,\; \epsilon - \bar{n}_i^{\top}\left(v_i - \bar{p}_i\right)\right)$$

wherein $V_{pene}$ denotes the set of penetrating vertices on the network-predicted deformed garment mesh; for each penetrating vertex $v_i$, $\bar{p}_i$ is its nearest point on the character mesh and $\bar{n}_i$ is the normal vector at that nearest point; $\epsilon$ denotes the small distance by which the penetrating vertex is pulled outside the character mesh.
CN202310349759.XA 2023-04-04 2023-04-04 Framework-based two-phase flow network method for predicting static deformation of cloth in target posture Pending CN116362133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310349759.XA CN116362133A (en) 2023-04-04 2023-04-04 Framework-based two-phase flow network method for predicting static deformation of cloth in target posture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310349759.XA CN116362133A (en) 2023-04-04 2023-04-04 Framework-based two-phase flow network method for predicting static deformation of cloth in target posture

Publications (1)

Publication Number Publication Date
CN116362133A true CN116362133A (en) 2023-06-30

Family

ID=86907707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310349759.XA Pending CN116362133A (en) 2023-04-04 2023-04-04 Framework-based two-phase flow network method for predicting static deformation of cloth in target posture

Country Status (1)

Country Link
CN (1) CN116362133A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116543093A (en) * 2023-07-04 2023-08-04 腾讯科技(深圳)有限公司 Flexible object rendering method, device, computer equipment and storage medium
CN116543093B (en) * 2023-07-04 2024-04-02 腾讯科技(深圳)有限公司 Flexible object rendering method, device, computer equipment and storage medium
CN117152327A (en) * 2023-10-31 2023-12-01 腾讯科技(深圳)有限公司 Parameter adjusting method and related device
CN117152327B (en) * 2023-10-31 2024-02-09 腾讯科技(深圳)有限公司 Parameter adjusting method and related device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination