CN115272539A - Clothing data processing method based on virtual scene, storage medium and related equipment - Google Patents

Clothing data processing method based on virtual scene, storage medium and related equipment

Info

Publication number
CN115272539A
Authority
CN
China
Prior art keywords
data set
clothing
preset
virtual character
bone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210855346.4A
Other languages
Chinese (zh)
Inventor
张旭辉
胡路
张科峰
侯杰
秦嘉
赵亦飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210855346.4A priority Critical patent/CN115272539A/en
Publication of CN115272539A publication Critical patent/CN115272539A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a clothing data processing method, a storage medium and related equipment based on a virtual scene, wherein the method comprises the following steps: extracting a preset bone deformation data set and a preset clothing vertex data set of a virtual character model acquired in advance under different postures; determining a bone dress weight matrix according to the preset bone deformation data set and the preset dress vertex data set; driving the virtual character model to obtain a real-time bone deformation data set and a real-time clothing vertex data set of the virtual character model in the current posture; obtaining a first bone data set according to the preset bone deformation data set and the real-time bone deformation data set; and determining the clothing cloth state of the virtual character model under the current posture according to the real-time clothing vertex data set, the first skeleton data set and the skeleton clothing weight matrix.

Description

Clothing data processing method based on virtual scene, storage medium and related equipment
Technical Field
The present application relates to the field of computer graphics technologies, and in particular, to a method for processing clothing data based on a virtual scene, a storage medium, and a related device.
Background
With the vigorous development of physical simulation animation technology, graphics hardware computing technology has also been improving continuously. For game development, the clothing cloth simulation of virtual characters is a very important technical link.
Disclosure of Invention
In view of this, an object of the present application is to provide a clothing data processing method, a storage medium and a related device based on a virtual scene, so that clothing cloth data obtained by offline calculation can be applied to the skinning process during game running.
Based on the above purpose, the present application provides a clothing data processing method based on a virtual scene, including:
extracting a preset bone deformation data set and a preset clothing vertex data set of a virtual character model acquired in advance under different postures;
determining a bone dress weight matrix according to the preset bone deformation data set and the preset dress vertex data set;
driving the virtual character model to obtain a real-time bone deformation data set and a real-time clothing vertex data set of the virtual character model in the current posture;
obtaining a first bone data set according to the preset bone deformation data set and the real-time bone deformation data set;
and determining the clothing cloth state of the virtual character model under the current posture according to the real-time clothing vertex data set, the first skeleton data set and the skeleton clothing weight matrix.
Optionally, the extracting a preset bone deformation data set and a preset clothing vertex data set of the virtual character model acquired in advance in different postures further includes:
acquiring motion data of the virtual character model under different postures, and driving the virtual character model to perform morphological transformation according to the motion data to obtain a morphological transformation result of the virtual character model;
and setting an interpolation extraction node, and extracting preset skeleton deformation data and a preset clothing vertex data set of the virtual character model in the form transformation result according to the interpolation extraction node.
Optionally, the extracting a preset bone deformation data set and a preset clothing vertex data set of the virtual character model acquired in advance in different postures further includes:
acquiring a static three-dimensional model of a virtual character, a static skeleton data set of the virtual character and a static clothing data set of a target clothing;
and binding the static skeleton data set and the static clothes data set on the static three-dimensional model to obtain a virtual character model.
Optionally, the determining a bone dress weight matrix according to the preset bone deformation data set and the preset dress vertex data set further includes:
obtaining a preset bone deformation matrix according to the preset bone deformation data set; the preset skeleton deformation matrix is formed by skeleton deformation quaternions corresponding to all skeletons of the virtual character model in different postures;
obtaining a preset clothing vertex matrix according to the preset clothing vertex data set; the preset clothing vertex matrix is formed by clothing vertex three-dimensional coordinates corresponding to all clothing vertices of the virtual character model in different postures;
calculating the axial angle distance between the data of each row and the data of other rows in the preset bone deformation matrix to obtain a first axial angle distance matrix;
and obtaining the skeleton clothing weight matrix according to the first axis angle distance matrix and the preset clothing vertex matrix.
Optionally, the obtaining a first bone data set according to the preset bone deformation data set and the real-time bone deformation data set further includes:
obtaining a real-time bone deformation matrix according to the real-time bone deformation data set; the real-time bone deformation matrix is formed by bone deformation quaternions corresponding to all bones of the virtual character model in the current posture;
calculating the axial angle distance between the data of each row in the real-time bone deformation matrix and the data of each row in the preset bone deformation matrix to obtain a second axial angle distance matrix;
and obtaining the first bone data set according to the second axial angle distance matrix and a radial basis function formula.
Optionally, the determining, according to the real-time clothing vertex data set, the first skeleton data set, and the skeleton clothing weight matrix, a clothing cloth state of the virtual character model in the current posture further includes:
performing weight extraction operation according to the first skeleton data set and the skeleton clothing weight matrix to obtain an interpolation clothing vertex data set under the current posture of the virtual character;
obtaining the comprehensive clothing vertex data set according to the real-time clothing vertex data set and the interpolation clothing vertex data set;
and determining the clothing cloth state of the virtual character model under the current posture according to the comprehensive clothing vertex data set.
Optionally, the setting of an interpolation extraction node and the extracting of the preset bone deformation data and the preset clothing vertex data set of the virtual character model in the form transformation result according to the interpolation extraction node further include:
a preset time interval is set between every two interpolation extraction nodes;
and executing simplification operation according to the preset time interval to extract preset bone deformation data and a preset clothing vertex data set of the virtual character model in the form transformation result.
Based on the above purpose, the present application provides a dress data processing device based on virtual scene, the device includes:
the off-line data acquisition module is configured to extract a preset bone deformation data set and a preset clothing vertex data set of the virtual character model acquired in advance under different postures;
an offline weight calculation module configured to determine a bone dress weight matrix from the preset bone deformation data set and the preset dress vertex data set;
a first operation module configured to drive the virtual character model to obtain a real-time bone deformation data set and a real-time clothing vertex data set of the virtual character model in a current posture;
a second operational module configured to derive a first bone data set from the preset bone deformation data set and the real-time bone deformation data set;
a third operation module configured to determine a clothing cloth state of the virtual character model in the current posture according to the real-time clothing vertex data set, the first bone data set and the bone clothing weight matrix.
In view of the foregoing, the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the method for processing the apparel data based on the virtual scene as described in any one of the above.
In view of the foregoing, the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the method for processing clothing data based on virtual scene as described in any one of the foregoing items.
From the above, it can be seen that the dress data processing method, the storage medium and the related device based on the virtual scene provided by the application obtain, in an offline stage, the preset bone deformation data and preset dress vertex data of the virtual character model, together with a bone dress weight matrix capable of representing the bone characteristics and dress vertex characteristics of the virtual character model. The large amount of computation involved in processing the dress data of the virtual character model in a game scene is thereby moved to the offline stage, saving the time needed to calculate the data at runtime.
Drawings
In order to more clearly illustrate the technical solutions in the present application or the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from them without inventive effort.
Fig. 1 is a schematic flow chart of a method for processing clothing data based on a virtual scene according to an embodiment of the application;
FIG. 2 is a schematic view of a 2D pattern and a UV coordinate system of an article of apparel provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a stitching calculation process provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a retopology process provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a clothing data processing apparatus based on a virtual scene according to an embodiment of the present application;
fig. 6 is a more specific hardware structure diagram of the electronic device according to the embodiment.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
It is to be noted that, unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this disclosure belongs. As used in this application, the terms "first," "second," and the like do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item preceding the word comprises the element or item listed after the word and its equivalent, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
As described in the background section, in practical applications such as skin display in a game scene, a virtual character model usually needs to be put in different postures to show off clothing. How to present the clothing cloth of a virtual character in a game scene smoothly and without visual discomfort, while ensuring the efficiency of the cloth simulation, is therefore a crucial issue in the game development process.
However, the applicant finds that most mainstream real-time cloth simulation solutions for games on the market face a dilemma: either the effect is good but the efficiency is low, or the efficiency is good but the effect is mediocre; efficiency and effect are hard to achieve at the same time.
In order to solve the above problems, the present application provides a clothing data processing method based on a virtual scene, a storage medium and a related device.
Hereinafter, the technical means of the present disclosure will be described in further detail with reference to specific examples.
In specific implementation, the method can be applied to, but is not limited to, the clothing data processing process in game virtual character model making, film and television production, or animation. The clothing worn by the virtual character model can be, but is not limited to being, simulated and calculated by loading the related clothing data into Marvelous Designer software. Marvelous Designer is three-dimensional clothing design software with an intuitive pattern design function; it supports fold lines and free curves, allows synchronous interactive design between 2D patterns and 3D draping, and immediately reflects any modification in the complete 3D real-time garment design interface. The motion data of the virtual character model can be obtained through MAYA software, which is three-dimensional modeling and animation software. A body-motion Alembic file for driving the virtual character's movement can be exported from MAYA and imported into Marvelous Designer software to synchronously drive the body and clothes of the virtual character model, thereby obtaining 3D clothing static data and 3D clothing dynamic data.
Referring to fig. 1, a schematic flow chart of a method for processing clothing data based on a virtual scene according to an embodiment of the present application is shown.
Step S101, extracting a preset bone deformation data set and a preset clothing vertex data set of the virtual character model acquired in advance under different postures.
In specific implementation, a virtual character model needs to be acquired first, and the virtual character model can be acquired through, but is not limited to, the following manners:
as an optional embodiment, the 2D model of the garment can be drawn according to the design drawing of the garment in Marvelous Designer software, and the stitching calculation process is executed through the software to obtain the 3D finished product of the garment. And obtaining a static clothing data set of the target clothing to be simulated according to the 3D finished clothing sample.
Referring to fig. 2, a schematic view of a 2D pattern and a UV coordinate system of an article of apparel provided in an embodiment of the present application is shown.
In one embodiment, in addition to the three-dimensional spatial coordinate system of the model (the X, Y and Z axes), the polygon mesh has a two-dimensional UV coordinate system in which the vertices of the polygon can be associated with pixels on a picture. The texture mapping of the 2D garment pattern in the UV coordinate system defines the position information of each point on the 2D pattern; these points are connected with the clothing data in 3D form, so that each point on the 2D pattern can be accurately mapped to the surface of the virtual character model, and the software performs smooth image interpolation at the gaps between the points.
Referring to fig. 3, a schematic diagram of a stitching calculation process in the embodiment of the present application is shown.
A 2D pattern of the clothes is produced from the clothing design drawing, a UV coordinate system is generated from the 2D pattern and arranged in the environment background to obtain the data required for the virtual character model, and the clothing data in 3D form is then obtained through the stitching calculation of the software.
Referring to fig. 4, a schematic diagram of a retopology process provided in an embodiment of the present application is shown.
It should be noted that the 3D clothing static data and 3D clothing dynamic data need to be retopologized to facilitate drawing and making the three-dimensional map. For the same model, quad faces are better suited to subsequent use than triangular faces, so the model made in Marvelous Designer software needs to go through one pass of retopology so that its mesh structure consists of quad faces.
Further, in a specific implementation, an Alembic file that guides the virtual character model to assume different postures of body movements may be exported in Maya software, and may include, without limitation, a cache, animation data, and the like. Further, a static three-dimensional model of the virtual character and a static skeleton data set of the virtual character matched with the static three-dimensional model are obtained, the static clothes data set of the target clothes and the static skeleton data set of the virtual character are bound to the static three-dimensional model of the virtual character through Maya software, and the virtual character model bound with the preset skeleton data and the preset clothes data can be obtained.
Furthermore, the virtual character model bound with the preset bone data and preset clothing data, together with the Alembic file, is imported into Marvelous Designer software to drive the body and clothing simulation movement, so that the virtual character model in different postures can be obtained. Finally, the preset bone deformation data set and the preset clothing vertex data set of the virtual character model in different postures can be extracted.
It should be noted that, in a specific implementation, each data in the preset bone deformation data set corresponds to each data in the preset clothing vertex data set.
The preset skeleton deformation data set comprises bone posture data and the quaternion of each bone in the virtual character model corresponding to that bone posture data. The quaternion of a bone is a representation of the bone's multi-dimensional data and is determined by the rotation of each joint of the body skeleton.
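For illustration only, the following is a minimal sketch of one common axial angle (geodesic) distance between two unit quaternions; the exact formula is not spelled out in the text, so the definition d = 2·arccos(|⟨q1, q2⟩|) used here is an assumption.

```python
import numpy as np

def axial_angle_distance(q1: np.ndarray, q2: np.ndarray) -> float:
    """Axial angle (geodesic) distance between two unit quaternions.

    Assumes the common definition d = 2 * arccos(|<q1, q2>|); the text does
    not give an explicit formula, so this choice is illustrative.
    """
    dot = min(1.0, abs(float(np.dot(q1, q2))))  # guard against rounding past 1
    return 2.0 * np.arccos(dot)                 # angle of the relative rotation
```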
The preset clothing vertex data set comprises the bone posture data and the clothing vertex data corresponding to that bone posture data; the vertex data are formed by matching the body of the virtual character model with the positions of all clothing vertices.
As an optional embodiment, the form transformation results of the virtual character model, namely posture A, posture B, …, posture N, can be obtained by acquiring the motion data of the virtual character model under different postures and driving the virtual character model to perform form transformation according to the motion data.
Further, an interpolation extraction node is set, and preset skeleton deformation data and a preset clothing vertex data set of the virtual character model in the form transformation result are extracted according to the interpolation extraction node.
It should be noted that the distance between every two interpolation extraction nodes may be, but is not limited to, 30 frames; an empty frame node spanning 30 frames may be set between every two interpolation extraction nodes, so as to ensure that the acquired postures do not repeat and that the linear interpolation requirement is met.
In some embodiments, a preset time interval may be set, for example a frame distance of 30, and delivered to the execution device in the form of an instruction. Upon receiving the extraction instruction, the execution device performs the extraction of the preset bone deformation data and the preset clothing vertex data set of the virtual character model in the morphological transformation result according to the preset time interval and the interpolation extraction nodes. For example, when the execution device receives an extraction instruction containing the preset time interval, the preset bone deformation data and preset clothing vertex data set of the virtual character model in the morphological transformation result may be obtained every 30 frames according to the arrangement order of the interpolation extraction nodes, as in the sketch below.
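A minimal sketch of this sampling step, assuming the 30-frame interval and the array layout shown (both illustrative, not specified by the text):

```python
import numpy as np

FRAME_INTERVAL = 30  # assumed preset time interval between extraction nodes

def extract_preset_samples(bone_quats, cloth_verts, interval=FRAME_INTERVAL):
    """Sample the driven animation every `interval` frames.

    bone_quats:  (num_frames, num_bones, 4) bone quaternions per frame
    cloth_verts: (num_frames, num_vertices, 3) clothing vertices per frame
    Returns the preset bone deformation data set and the preset clothing
    vertex data set used by the offline weight computation.
    """
    nodes = np.arange(0, len(bone_quats), interval)  # interpolation extraction nodes
    return bone_quats[nodes], cloth_verts[nodes]
```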
And S102, determining a skeleton clothing weight matrix according to the preset skeleton deformation data set and the preset clothing vertex data set.
Specifically, after the preset bone deformation data set and the preset clothing vertex data set are obtained, the preset bone deformation data set is converted into a preset bone deformation matrix, in which each row corresponds to one posture and contains the rotation quaternions of all bones in that posture. The preset clothing vertex data set is likewise converted into a preset clothing vertex matrix, in which each row corresponds to one posture and each column to one clothing vertex. The axial angle distance between each pair of rows of the preset bone deformation matrix is calculated and substituted into a Gaussian kernel function to obtain a Phi matrix, which can represent the correlation between the sampled postures. The transpose of the Phi matrix is multiplied by the Phi matrix, the identity matrix is added, and the result is inverted; multiplying the inverse by the transposed Phi matrix and the preset clothing vertex matrix then yields the bone clothing weight matrix.
Specifically, the preset clothing vertex data set can be visually presented in the following way:

// 3 × N dimensional data
Posture A: { {vertex 1 position}, {vertex 2 position}, …, {vertex N position} }
Posture B: { {vertex 1 position}, {vertex 2 position}, …, {vertex N position} }
Posture N: { {vertex 1 position}, {vertex 2 position}, …, {vertex N position} }

Specifically, the preset bone deformation matrix is formed by the bone deformation quaternions corresponding to all bones of the virtual character model in different postures, and can be expressed in the following way:

// 4 × n dimensional data
Posture A: { {bone 1 quaternion}, {bone 2 quaternion}, …, {bone n quaternion} }
Posture B: { {bone 1 quaternion}, {bone 2 quaternion}, …, {bone n quaternion} }
Posture N: { {bone 1 quaternion}, {bone 2 quaternion}, …, {bone n quaternion} }
Further, the axial angle distance between the data of each row in the preset bone deformation matrix and the data of other rows is calculated to obtain a first axial angle distance matrix, and the bone dress weight matrix is obtained according to the first axial angle distance matrix and the preset dress vertex matrix.
As an alternative embodiment, in the x direction of the UV coordinate system, the axial angle distance between the data of any row of the preset bone deformation matrix and the data of the i-th row is calculated, and the distance is then substituted into the Gaussian kernel function to obtain the corresponding entry of the Phi matrix:

Φ(i, j) = exp( −d(i, j)² / (2σ²) )

where d(i, j) represents the axial angle distance between rows i and j, exp(·) represents the Gaussian kernel function with kernel width σ, and Φ(i, j) is the Phi matrix entry for that pair of rows.

Further, traversing all rows of the preset bone deformation matrix yields the complete Phi matrix Φ.

The bone clothing weight matrix is:

Wx = (Φᵀ Φ + I)⁻¹ Φᵀ Dx

where Wx is the bone clothing weight matrix in the x direction of the UV coordinate system and Dx is the preset clothing vertex matrix in the x direction.
As an alternative embodiment, in the y direction of the UV coordinate system, the axial angle distance between the data of any row of the preset bone deformation matrix and the data of the j-th row is calculated and substituted into the Gaussian kernel function in the same way to obtain the Phi matrix Φ. The bone clothing weight matrix is:

Wy = (Φᵀ Φ + I)⁻¹ Φᵀ Dy

where Wy is the bone clothing weight matrix in the y direction of the UV coordinate system and Dy is the preset clothing vertex matrix in the y direction.
As an alternative embodiment, in the z direction of the UV coordinate system, the axial angle distance between the data of any row of the preset bone deformation matrix and the data of the k-th row is calculated and substituted into the Gaussian kernel function in the same way to obtain the Phi matrix Φ. The bone clothing weight matrix is:

Wz = (Φᵀ Φ + I)⁻¹ Φᵀ Dz

where Wz is the bone clothing weight matrix in the z direction of the UV coordinate system and Dz is the preset clothing vertex matrix in the z direction.
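Putting the pieces above together, a minimal NumPy sketch of the offline weight computation is given below. The per-pose distance (taken here as the mean per-bone axial angle distance), the Gaussian kernel width sigma, and the regularized least-squares form W = (ΦᵀΦ + I)⁻¹ΦᵀD are assumptions consistent with, but not fully specified by, the description above; the routine is run once per direction with the corresponding Dx, Dy and Dz.

```python
import numpy as np

def pose_distance(pose_a, pose_b):
    """Axial angle distance between two postures.

    pose_a, pose_b: (num_bones, 4) arrays of unit quaternions. The mean
    per-bone geodesic distance is an illustrative assumption; the text
    only says 'axial angle distance between rows'.
    """
    dots = np.clip(np.abs(np.einsum("ij,ij->i", pose_a, pose_b)), 0.0, 1.0)
    return float(np.mean(2.0 * np.arccos(dots)))

def bone_clothing_weights(preset_quats, preset_verts, sigma=1.0):
    """Offline bone clothing weight matrix for one direction.

    preset_quats: (num_poses, num_bones, 4) preset bone deformation data set
    preset_verts: (num_poses, num_vertices) preset clothing vertex matrix
                  in the x, y or z direction
    Returns W of shape (num_poses, num_vertices).
    """
    n = len(preset_quats)
    phi = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d = pose_distance(preset_quats[i], preset_quats[j])
            phi[i, j] = np.exp(-d * d / (2.0 * sigma * sigma))  # Gaussian kernel
    # Regularized least squares: W = (Phi^T Phi + I)^-1 Phi^T D
    return np.linalg.solve(phi.T @ phi + np.eye(n), phi.T @ preset_verts)
```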
As an alternative embodiment, the bone dress weights can be expressed as follows (rows are clothing vertices, columns are postures):

           Posture A   Posture B   Posture C   Posture D
Vertex 1     0.2         0           0.3         0.5
Vertex 2     0.85        0.2         0.4         0.6
Vertex 3     0           0.4         0.5         0.4
Vertex 4     0.1         0.3         0.7         0.1
Step S103, driving the virtual character model to obtain a real-time bone deformation data set and a real-time clothing vertex data set of the virtual character model in a target posture.
In specific implementation, the virtual character model can be driven to move in the game simulation. A suitable posture of the virtual character model can be determined according to actual requirements and taken as the target posture, and the real-time bone deformation data set and the real-time clothing vertex data set of the virtual character model in the target posture are obtained.
Step S104, obtaining a first bone data set according to the preset bone deformation data set and the real-time bone deformation data set.
As an optional embodiment, a real-time bone deformation matrix is obtained from the real-time bone deformation data set; the real-time bone deformation matrix is formed by the bone deformation quaternions corresponding to all bones of the virtual character model in the current posture. The axial angle distance between the data of the real-time bone deformation matrix and the data of each row of the preset bone deformation matrix is calculated to obtain a second axial angle distance matrix, and the first bone data set is obtained from the second axial angle distance matrix and a radial basis function formula; the length of the first bone data set equals the number of action postures in the preset bone deformation data set, as in the sketch below.
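A sketch of this runtime step under the same assumptions (pose distance and Gaussian radial basis function) as the offline sketch above:

```python
import numpy as np

def pose_distance(pose_a, pose_b):
    # Mean per-bone axial angle distance; same assumption as the offline sketch.
    dots = np.clip(np.abs(np.einsum("ij,ij->i", pose_a, pose_b)), 0.0, 1.0)
    return float(np.mean(2.0 * np.arccos(dots)))

def first_bone_data(current_quats, preset_quats, sigma=1.0):
    """Radial basis function values of the current posture against the presets.

    current_quats: (num_bones, 4) real-time bone deformation data
    preset_quats:  (num_poses, num_bones, 4) preset bone deformation data set
    Returns a vector whose length equals the number of preset action
    postures, i.e. the 'first bone data set' described above.
    """
    dists = np.array([pose_distance(current_quats, p) for p in preset_quats])
    return np.exp(-dists ** 2 / (2.0 * sigma ** 2))
```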
Step S105, determining the clothing cloth state of the virtual character model under the current posture according to the real-time clothing vertex data set, the first skeleton data set and the skeleton clothing weight matrix.
As an alternative embodiment, a weight extraction operation may be performed according to the first skeleton data set and the skeleton clothing weight matrix to obtain an interpolated clothing vertex data set in the current posture of the virtual character, further, a comprehensive clothing vertex data set is obtained according to the real-time clothing vertex data set and the interpolated clothing vertex data set, and finally, the clothing cloth state of the virtual character model in the current posture is determined according to the comprehensive clothing vertex data set.
Specifically, the first skeleton data set is multiplied by the bone clothing weight matrix obtained in the offline step to obtain the interpolated clothing vertex values at the current moment; these values are added to the real-time clothing vertex data set to obtain the comprehensive clothing vertex data set, and finally the clothing cloth state of the current virtual character is adjusted in the GPU according to the comprehensive clothing vertex data set, as sketched below.
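As a final sketch (function and parameter names are illustrative), the runtime combination reduces to one matrix product plus an addition per direction, which is what makes the per-frame GPU evaluation cheap:

```python
import numpy as np

def clothing_state(rbf_values, weight_matrix, realtime_verts):
    """Comprehensive clothing vertex data for one direction.

    rbf_values:     (num_poses,) first bone data set for the current posture
    weight_matrix:  (num_poses, num_vertices) offline bone clothing weights
                    (computed separately for x, y and z)
    realtime_verts: (num_vertices,) real-time clothing vertex coordinates
    """
    interpolated = rbf_values @ weight_matrix  # interpolated clothing vertex data set
    return realtime_verts + interpolated       # comprehensive clothing vertex data set
```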
It should be noted that, in the present application, the calculation is performed on all bone point data and clothing vertex data of the virtual character model, and all the clothing vertex data are computed in parallel in the GPU. The time taken to adjust the overall clothing state of the virtual character model is therefore the same as the time taken to adjust the clothing state of a single clothing point, which saves running time and improves the cloth simulation efficiency.
It can be seen from the above that, according to the clothing data processing method based on the virtual scene provided by the application, the preset bone deformation data and preset clothing vertex data of the virtual character model, together with the bone clothing weight matrix capable of representing the bone characteristics and clothing vertex characteristics of the virtual character model, are obtained in the offline stage. The large amount of computation involved in processing the clothing data of the virtual character model in a game scene is thereby moved to the offline stage, saving the time for resolving the data. In the running stage, the virtual character model is driven to obtain the real-time bone deformation data set and real-time clothing vertex data set of the virtual character model in the current posture, and the first bone data set is obtained from the preset bone deformation data set acquired in the offline stage and the real-time bone deformation data set acquired in the running stage. Further, the data result of the deformation process of the current virtual character model is automatically generated according to the real-time clothing vertex data set, the first bone data set and the bone clothing weight matrix, so that the clothing cloth state of the virtual character model in the current posture can be determined. The offline data and the runtime data are thus closely combined, which greatly improves the efficiency of clothing data processing in the final dressing process of the virtual character model.
Based on the same conception, the application also provides a clothing data processing device based on the virtual scene.
Referring to fig. 5, a schematic diagram of a clothing data processing apparatus based on a virtual scene according to an embodiment of the present application is shown.
The device includes:
an offline data acquisition module 501, configured to extract a preset bone deformation data set and a preset clothing vertex data set of a virtual character model acquired in advance in different postures;
an offline weight calculation module 502 configured to determine a bone dress weight matrix from the preset bone deformation data set and the preset dress vertex data set;
a first running module 503 configured to drive the virtual character model to obtain a real-time bone deformation data set and a real-time clothing vertex data set of the virtual character model in a current pose;
a second running module 504 configured to derive a first bone data set from the preset bone deformation data set and the real-time bone deformation data set;
a third running module 505 configured to determine a clothing cloth state of the virtual character model at the current pose according to the real-time clothing vertex data set, the first skeleton data set and the skeleton clothing weight matrix.
Optionally, the offline data obtaining module 501 is further configured to obtain motion data of the virtual character model in different postures, and drive the virtual character model to perform morphological transformation according to the motion data, so as to obtain a morphological transformation result of the virtual character model;
and setting an interpolation extraction node, and extracting preset skeleton deformation data and a preset clothing vertex data set of the virtual character model in the form transformation result according to the interpolation extraction node.
Optionally, the offline data obtaining module 501 is further configured to obtain a static three-dimensional model of a virtual character, a static skeleton data set of the virtual character, and a static dress data set of a target dress;
and binding the static skeleton data set and the static clothes data set on the static three-dimensional model to obtain a virtual character model.
Optionally, the offline weight calculating module 502 is further configured to obtain a preset bone deformation matrix according to the preset bone deformation data set; the preset skeleton deformation matrix is formed by skeleton deformation quaternions corresponding to all skeletons of the virtual character model in different postures;
obtaining a preset clothing vertex matrix according to the preset clothing vertex data set; the preset clothing vertex matrix is formed by clothing vertex three-dimensional coordinates corresponding to all clothing vertices of the virtual character model in different postures;
calculating the axial angle distance between the data of each row and the data of other rows in the preset bone deformation matrix to obtain a first axial angle distance matrix;
and obtaining the skeleton clothing weight matrix according to the first axis angle distance matrix and the preset clothing vertex matrix.
Optionally, the second operation module 504 is further configured to obtain a real-time bone deformation matrix according to the real-time bone deformation data set; the real-time bone deformation matrix is formed by bone deformation quaternions corresponding to all bones of the virtual character model in the current posture;
calculating the axial angle distance between the data of each row in the real-time bone deformation matrix and the data of each row in the preset bone deformation matrix to obtain a second axial angle distance matrix;
and obtaining the first bone data set according to the second axial angle distance matrix and a radial basis function formula.
Optionally, the third running module 505 is further configured to perform a weight extraction operation according to the first skeleton data set and the skeleton clothing weight matrix to obtain an interpolated clothing vertex data set in the current posture of the virtual character;
obtaining the comprehensive clothing vertex data set according to the real-time clothing vertex data set and the interpolation clothing vertex data set;
and determining the clothing cloth state of the virtual character model under the current posture according to the comprehensive clothing vertex data set.
Optionally, the offline data obtaining module 501 is further configured to set a preset time interval between every two interpolation extraction nodes;
and executing simplification operation according to the preset time interval to extract preset bone deformation data and a preset clothing vertex data set of the virtual character model in the form transformation result.
For convenience of description, the above devices are described as being divided into various modules by functions, which are described separately. Of course, the functions of the modules may be implemented in the same or multiple software and/or hardware when implementing the embodiments of the present application.
The device of the foregoing embodiment is used to implement the corresponding method for processing clothing data based on a virtual scene in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same concept, corresponding to any embodiment of the method, the application further provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the program, the method for processing the clothing data based on the virtual scene according to any embodiment of the method is implemented.
Fig. 6 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the electronic device may include: a processor 610, a memory 620, an input/output interface 630, a communication interface 640, and a bus 650. Wherein the processor 610, memory 620, input/output interface 630, and communication interface 640 are communicatively coupled to each other within the device via bus 650.
The processor 610 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present specification.
The Memory 620 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 620 may store an operating system and other application programs, and when the technical solutions provided by the embodiments of the present specification are implemented by software or firmware, the relevant program codes are stored in the memory 620 and called to be executed by the processor 610.
The input/output interface 630 is used for connecting an input/output module to realize information input and output. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 640 is used for connecting a communication module (not shown in the figure) to realize communication interaction between the device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, bluetooth and the like).
Bus 650 includes a pathway to transfer information between various components of the device, such as processor 610, memory 620, input/output interface 630, and communication interface 640.
It should be noted that although the above-mentioned devices only show the processor 610, the memory 620, the input/output interface 630, the communication interface 640 and the bus 650, in a specific implementation, the devices may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The electronic device of the foregoing embodiment is used to implement the corresponding method for processing clothing data based on a virtual scene in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same concept, corresponding to any of the above embodiments, the present application also provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the method for processing clothing data based on virtual scene as described in any of the above embodiments.
Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.

Claims (10)

1. A clothing data processing method based on virtual scenes is characterized by comprising the following steps:
extracting a preset bone deformation data set and a preset clothing vertex data set of a virtual character model acquired in advance under different postures;
determining a bone dress weight matrix according to the preset bone deformation data set and the preset dress vertex data set;
driving the virtual character model to obtain a real-time bone deformation data set and a real-time clothing vertex data set of the virtual character model in the current posture;
obtaining a first bone data set from the preset bone deformation data set and the real-time bone deformation data set;
and determining the clothing cloth state of the virtual character model under the current posture according to the real-time clothing vertex data set, the first skeleton data set and the skeleton clothing weight matrix.
2. The method according to claim 1, wherein the extracting of the preset bone deformation data set and the preset clothing vertex data set of the virtual character model acquired in advance in different postures further comprises:
acquiring motion data of the virtual character model under different postures, and driving the virtual character model to perform morphological transformation according to the motion data to obtain a morphological transformation result of the virtual character model;
and setting an interpolation extraction node, and extracting preset skeleton deformation data and a preset clothing vertex data set of the virtual character model in the form transformation result according to the interpolation extraction node.
3. The method according to claim 1, wherein the extracting of the preset bone deformation data set and the preset clothing vertex data set of the virtual character model acquired in advance in different postures further comprises:
acquiring a static three-dimensional model of a virtual character, a static skeleton data set of the virtual character and a static clothing data set of a target clothing;
and binding the static skeleton data set and the static clothes data set on the static three-dimensional model to obtain a virtual character model.
4. The method of claim 1, wherein said determining a bone dress weight matrix from said preset bone deformation data set and said preset dress vertex data set, further comprises:
obtaining a preset bone deformation matrix according to the preset bone deformation data set; the preset skeleton deformation matrix is composed of skeleton deformation quaternions corresponding to all skeletons of the virtual character model in different postures;
obtaining a preset clothing vertex matrix according to the preset clothing vertex data set; the preset clothing vertex matrix is formed by clothing vertex three-dimensional coordinates corresponding to all clothing vertices of the virtual character model in different postures;
calculating the axial angle distance between the data of each row and the data of other rows in the preset bone deformation matrix to obtain a first axial angle distance matrix;
and obtaining the skeleton clothing weight matrix according to the first axis angle distance matrix and the preset clothing vertex matrix.
5. The method of claim 4, wherein said deriving a first bone data set from said preset bone deformation data set and said real-time bone deformation data set, further comprises:
obtaining a real-time bone deformation matrix according to the real-time bone deformation data set; the real-time bone deformation matrix is formed by bone deformation quaternions corresponding to all bones of the virtual character model in the current posture;
calculating the axial angle distance between the data of each row in the real-time bone deformation matrix and the data of each row in the preset bone deformation matrix to obtain a second axial angle distance matrix;
and obtaining the first bone data set according to the second axial angle distance matrix and a radial basis function formula.
6. The method of claim 1, wherein the determining a clothing cloth state of the virtual character model at a current pose from the real-time clothing vertex data set, the first skeletal data set, and the skeletal clothing weight matrix, further comprises:
performing weight extraction operation according to the first skeleton data set and the skeleton clothing weight matrix to obtain an interpolation clothing vertex data set under the current posture of the virtual character;
obtaining the comprehensive clothing vertex data set according to the real-time clothing vertex data set and the interpolation clothing vertex data set;
and determining the clothing cloth state of the virtual character model under the current posture according to the comprehensive clothing vertex data set.
7. The method according to claim 2, wherein the setting of an interpolation extraction node according to which the preset bone deformation data and the preset clothing vertex data set of the virtual character model in the morphological transformation result are extracted further comprises:
setting a preset time interval between every two interpolation extraction nodes;
and executing simplified operation according to the preset time interval to extract preset bone deformation data and a preset clothing vertex data set of the virtual character model in the form transformation result.
8. A clothing data processing apparatus based on a virtual scene, characterized by comprising:
the off-line data acquisition module is configured to extract a preset bone deformation data set and a preset clothing vertex data set of the virtual character model acquired in advance under different postures;
an offline weight calculation module configured to determine a bone dress weight matrix from the preset bone deformation data set and the preset dress vertex data set;
a first operation module configured to drive the virtual character model to obtain a real-time bone deformation data set and a real-time clothing vertex data set of the virtual character model in a current posture;
a second operational module configured to derive a first bone data set from the preset bone deformation data set and the real-time bone deformation data set;
a third operation module configured to determine a clothing cloth state of the virtual character model at the current pose according to the real-time clothing vertex data set, the first skeleton data set, and the skeleton clothing weight matrix.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the program.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202210855346.4A 2022-07-19 2022-07-19 Clothing data processing method based on virtual scene, storage medium and related equipment Pending CN115272539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210855346.4A CN115272539A (en) 2022-07-19 2022-07-19 Clothing data processing method based on virtual scene, storage medium and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210855346.4A CN115272539A (en) 2022-07-19 2022-07-19 Clothing data processing method based on virtual scene, storage medium and related equipment

Publications (1)

Publication Number Publication Date
CN115272539A true CN115272539A (en) 2022-11-01

Family

ID=83767567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210855346.4A Pending CN115272539A (en) 2022-07-19 2022-07-19 Clothing data processing method based on virtual scene, storage medium and related equipment

Country Status (1)

Country Link
CN (1) CN115272539A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116664733A (en) * 2023-07-28 2023-08-29 腾讯科技(深圳)有限公司 Virtual garment prediction method, device, equipment and computer readable storage medium
CN116664733B (en) * 2023-07-28 2024-01-30 腾讯科技(深圳)有限公司 Virtual garment prediction method, device, equipment and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination