CN117541751A - Three-dimensional model degradation method and device - Google Patents

Three-dimensional model degradation method and device

Info

Publication number
CN117541751A
CN117541751A (application CN202410021046.5A)
Authority
CN
China
Prior art keywords
vertex
dimensional model
grid
model
mesh
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410021046.5A
Other languages
Chinese (zh)
Inventor
Chen Pei (陈沛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202410021046.5A
Publication of CN117541751A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 17/205 Re-meshing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of this specification relate to a three-dimensional model degradation method and device. The method includes the following steps: a first three-dimensional model is obtained, where the first three-dimensional model includes a first surface mesh, any vertex of the first surface mesh has corresponding material information, and the material information includes texture information. Then, mesh simplification is performed on the first surface mesh, reducing the number of vertices and edges, to obtain a second surface mesh. Next, for any second vertex on the second surface mesh, a corresponding first vertex is determined in the first surface mesh, where the first vertex has the shortest spatial distance to the second vertex among all vertices of the first surface mesh. Finally, at least the first material information of the first vertex is assigned to the second vertex to obtain a second three-dimensional model.

Description

Three-dimensional model degradation method and device
Technical Field
One or more embodiments of the present disclosure relate to the field of three-dimensional modeling, and in particular, to a method and apparatus for degrading a three-dimensional model.
Background
In recent years, with advances in the hardware capabilities and software algorithms of electronic devices, the range of applications of three-dimensional models has been expanding. For example, in the field of electronic games, game developers can create a rich game world through fine three-dimensional models. Game elements such as characters, props, and scenes can be presented in a more realistic form, providing players with a more immersive gaming experience. For another example, in the metaverse field, a user can create his or her own avatar in the metaverse using a three-dimensional model, interact with other users, and explore an infinite virtual space. Whether for virtual meetings, virtual shopping, virtual tours, or virtual socializing, three-dimensional models provide a vivid and realistic representation of the various aspects of the metaverse.
However, rendering realistic frames from high-precision models also places a heavy computational burden on hardware. Because the terminal devices used by different users vary in capability, devices with weaker hardware often cannot withstand such a large amount of computation. Therefore, a method is needed to degrade the three-dimensional model, reduce the detail information in the model, and lower the model precision, so as to meet the processing requirements of hardware with different performance levels.
Disclosure of Invention
One or more embodiments of the present disclosure describe a three-dimensional model degradation method and apparatus, which simplify and degrade a high-precision model, save the cost of three-dimensional modeling, and improve production efficiency.
In a first aspect, a three-dimensional model degradation method is provided, including:
acquiring a first three-dimensional model, wherein the first three-dimensional model comprises a first surface grid, any vertex of the first surface grid has corresponding material information, and the material information comprises texture information;
performing grid simplification on the first surface grid to reduce the number of vertexes and edges therein, so as to obtain a second surface grid;
for any second vertex on the second surface grid, determining a first vertex corresponding to the second vertex in the first surface grid, wherein the first vertex has the shortest spatial distance to the second vertex among all vertexes of the first surface grid;
and assigning at least the first material information of the first vertex to the second vertex to obtain a second three-dimensional model.
In one possible implementation, the mesh simplification of the first surface mesh includes:
performing mesh simplification on the first surface mesh using a quadric error metric (QEM) algorithm.
In one possible embodiment, the method further comprises:
receiving an indication of a grid simplification mode from a user, and adjusting corresponding parameters in the QEM algorithm based on the indication, wherein the grid simplification mode comprises at least one of the following: uniform simplification, or simplification that preserves the details of a specific part.
In one possible embodiment, the first surface mesh further comprises at least one set of BlendShape values; the method further comprises the steps of:
and receiving an indication from a user to delete one or more specific groups of BlendShape values, and deleting the one or more groups of BlendShape values from the second three-dimensional model.
In one possible embodiment, the first three-dimensional model further comprises a model skeleton, and any vertex of the first surface grid further has a corresponding skeleton weight value; assigning at least the first material information of the first vertex to the second vertex to obtain a second three-dimensional model includes:
and assigning the first material information and the first skeleton weight value of the first vertex to the second vertex to obtain a second three-dimensional model.
In one possible embodiment, the method further comprises:
an indication is received from a user to delete a particular one or more skeletal nodes, which are deleted from the second three-dimensional model.
In one possible embodiment, the spatial distance includes at least one of: euclidean distance, manhattan distance.
In a second aspect, there is provided a three-dimensional model degradation apparatus, comprising:
an obtaining unit configured to obtain a first three-dimensional model, where the first three-dimensional model includes a first surface mesh, any vertex of the first surface mesh has corresponding material information, and the material information includes texture information;
a grid simplifying unit configured to perform grid simplification on the first surface grid, and reduce the number of vertices and edges therein to obtain a second surface grid;
a vertex determining unit configured to determine, for any second vertex on the second surface mesh, a first vertex corresponding to the second vertex in the first surface mesh, wherein the first vertex has a shortest spatial distance from the second vertex among all vertices of the first surface mesh;
and the vertex assignment unit is configured to assign at least the first material information of the first vertex to the second vertex to obtain a second three-dimensional model.
In a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
In a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has executable code stored therein, and wherein the processor, when executing the executable code, implements the method of the first aspect.
The embodiments of this specification provide a three-dimensional model degradation method and device, which can degrade a three-dimensional model without changing its overall geometric shape. After the model is degraded, the material information, skeleton weights, and other information of the vertexes of the old model are mapped onto the corresponding vertexes of the new model through the mapping relationship established between the vertexes of the new and old models, so that the material information and motion information of the model are preserved. In addition, according to the embodiments of this specification, the model can be simplified uniformly or in a way that preserves the details of a specific part according to the user's requirements, or BlendShape values designated by the user can be deleted, so as to achieve a more diversified customization effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments disclosed in this specification, the drawings required for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only examples of the embodiments disclosed in this specification, and that a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 illustrates a three-dimensional model degradation schematic according to one example;
FIG. 2 illustrates a schematic view of a scenario of a three-dimensional model degradation method according to one embodiment;
FIG. 3 illustrates a flow diagram of a three-dimensional model degradation method according to one embodiment;
FIG. 4 illustrates a schematic diagram of edge collapse, according to one embodiment;
FIG. 5 illustrates a schematic diagram of the vertex mapping relationship between the new and old models, according to one embodiment;
FIG. 6 illustrates a schematic block diagram of a three-dimensional model degradation apparatus, according to one embodiment.
Detailed Description
Before describing the specific technical scheme of the embodiments of the present specification, related technical terms will be explained first.
Bone: bone (Skeleton) refers to a set of joint structures used for control and deformation in a model. Bones are made up of a series of skeletal nodes, often organized in a hierarchical fashion. By controlling the position and rotation of the skeletal nodes, the model can be controlled in pose and motion.
Surface mesh: the surface mesh is a mesh that is composed entirely of triangles, where the triangles may be referred to as triangular patches. Surface meshes are widely used in graphics and modeling to simulate the surface of complex objects such as buildings, vehicles, human bodies, teapots, etc. An example of representing the surface of an object with a surface mesh may be as shown in fig. 1 (a).
Texture: texture (Texture) is an image or pattern applied to the surface of a three-dimensional model that can impart visual detail and surface Texture to the model, such as wood grain, brick wall, and the like.
Material: A material is a set of properties applied to the surface of a model that describes its appearance and optical characteristics. A material may include information such as color, reflectivity, and refractive index to simulate the surface characteristics of different materials. The material information includes the texture information.
Covering: skin (Skinning) refers to the process of associating a character or object's skeletal system with its surface mesh in three-dimensional modeling. Skin techniques are used to achieve character animation and ensure that the skin (surface mesh) of the character deforms following the action of the skeleton.
Bone weight binding: in order to control the movement and deformation of the three-dimensional model, each vertex on the surface mesh needs to be bound to several bones, so that the deformation of the skin is controlled by the rotation of the bones.
BlendShape: BlendShape is a technique for model deformation and animation. It defines a series of preset shapes (represented by surface meshes) and fuses these shapes with a base model by linear interpolation, thereby achieving dynamic deformation of the model.
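As a minimal illustration of the linear-interpolation fusion described above (the array layout and weight values here are hypothetical examples, not part of the specification), the deformed mesh can be sketched as the base shape plus a weighted sum of per-shape offsets:

```python
import numpy as np

def apply_blendshapes(base_vertices, blendshapes, weights):
    """Fuse preset shapes with a base mesh by linear interpolation.

    base_vertices: (N, 3) array of base-mesh vertex positions.
    blendshapes:   list of (N, 3) arrays, one preset shape per entry.
    weights:       list of floats, one weight per preset shape.
    """
    result = np.array(base_vertices, dtype=float)
    for shape, w in zip(blendshapes, weights):
        # Each BlendShape contributes its offset from the base, scaled by its weight.
        result += w * (np.asarray(shape, dtype=float) - base_vertices)
    return result
```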
The following describes the scheme provided in the present specification with reference to the drawings.
As described above, images rendered from a high-precision model may place great computational pressure on hardware, and devices with weaker hardware often cannot withstand such a large amount of computation. Therefore, the three-dimensional model needs to be degraded to meet the processing requirements of hardware with different performance levels.
Some existing implementations and modeling software already provide solutions for degrading a three-dimensional model. For example, FIG. 1 shows a degradation schematic of a three-dimensional model according to one example. FIG. 1 (a) shows a high-precision model of a rabbit, in which the surface mesh has a large number of vertices and triangular patches, so that the model can closely simulate the appearance of a real rabbit. The high-precision model is degraded into the low-precision model shown in FIG. 1 (b), in which the number of vertices of the surface mesh is reduced while the appearance of the rabbit is preserved as much as possible. However, because the vertex positions of the model's surface mesh change after degradation, these solutions can only output the simplified surface mesh information, and the information attached to the vertices, such as material information and bone weight information, is lost. Designers are therefore required to reconfigure the lost information on the low-precision model, which increases production cost and reduces production efficiency.
To solve the above problem, FIG. 2 shows a schematic view of a scenario of a three-dimensional model degradation method according to one embodiment. As shown in FIG. 2, the high-precision model to be degraded on the left includes various model-related data, including surface mesh 1, material information 1, skeleton weights 1, BlendShape 1, and skeleton 1, where each vertex on surface mesh 1 has its own material information and skeleton weight information, and skeleton 1 includes a plurality of skeletal nodes. In the process of model degradation, an arbitrary mesh simplification algorithm is first used to simplify surface mesh 1 of the high-precision model, reducing the number of vertices and edges in surface mesh 1, to obtain surface mesh 2 of the low-precision model. Then, for any vertex s in surface mesh 2, the vertex d having the smallest spatial distance to it is determined among the vertices of surface mesh 1, and a mapping relationship between vertex s and vertex d is established, thereby establishing a mapping relationship between each vertex of surface mesh 2 and a vertex of surface mesh 1. Next, according to the mapping relationship, the material information and skeleton weight value of vertex d in the high-precision model are assigned to vertex s of the low-precision model, so that corresponding material information and skeleton weight values are set for each vertex of the low-precision model. In this way, while the surface mesh of the high-precision model is simplified, the material information and skeleton weight information of the model are preserved on the generated low-precision model. In some application scenarios, one or more groups of BlendShape values in the high-precision model can also be deleted according to the user's requirements and instructions; or one or more skeletal nodes in the high-precision model can be deleted according to the user's instructions, so as to further simplify the model.
The specific implementation steps of the three-dimensional model degradation method are described below in connection with specific embodiments. FIG. 3 illustrates a flow diagram of a three-dimensional model degradation method according to one embodiment; the method may be executed by any platform, server, or device cluster with computing and processing capabilities. As shown in FIG. 3, the method includes at least: step 302, acquiring a first three-dimensional model, where the first three-dimensional model includes a first surface grid, any vertex of the first surface grid has corresponding material information, and the material information includes texture information; step 304, performing grid simplification on the first surface grid and reducing the number of vertexes and edges therein to obtain a second surface grid; step 306, for any second vertex on the second surface grid, determining a first vertex corresponding to the second vertex in the first surface grid, where the first vertex has the shortest spatial distance to the second vertex among all vertexes of the first surface grid; and step 308, assigning at least the first material information of the first vertex to the second vertex to obtain a second three-dimensional model. The specific execution of these steps is described below.
First, in step 302, a first three-dimensional model is acquired, where the first three-dimensional model includes a first surface mesh, and any vertex of the first surface mesh has corresponding material information, where the material information includes texture information.
The first three-dimensional model may be a high-precision model to be degraded, and the high-precision model is provided with a first surface grid, wherein the first surface grid is provided with a plurality of vertexes and edges, and the vertexes and edges form a plurality of triangular patches.
Then, at step 304, mesh simplification is performed on the first surface mesh, and the number of vertices and edges therein is reduced, resulting in a second surface mesh.
The second surface mesh may be a surface mesh of a simplified low-precision model having fewer vertices and edges than the first surface mesh, while retaining the overall shape of the model.
In one possible implementation, the first surface mesh is simplified using the quadric error metric (QEM, Quadric Error Metrics) algorithm. The QEM algorithm calculates the cost value of collapsing each edge in the surface mesh (contracting an edge into a single point), selects the edge with the smallest cost value for collapse in each round, and then updates the cost values of the remaining edges. It should be noted that the new vertex formed after an edge collapse is not necessarily a vertex of the original surface mesh; that is, mesh simplification of the surface mesh may change vertex positions.
FIG. 4 shows a schematic diagram of edge collapse. FIG. 4 (a) shows a partial mesh of a surface mesh. Assume that in this round of collapse, the edge (v, u) formed by vertex v and vertex u has the smallest cost value when collapsed to the position of vertex v' shown in FIG. 4 (b). The edge (v, u) is then collapsed to the position of vertex v', and all vertices previously connected to vertex v or vertex u in FIG. 4 (a) are connected to the new vertex v', forming the new surface mesh shown in FIG. 4 (b). As can be seen from FIG. 4 (b), v' is not any of the vertices of the surface mesh shown in FIG. 4 (a); that is, v' is a new vertex, and mesh simplification of the surface mesh leads to changes in vertex positions.
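The following is a minimal sketch of the quadric-error computation underlying the QEM algorithm. The function names are illustrative, and collapsing an edge to its midpoint (rather than to the optimal position obtained by solving a small linear system) is a simplifying assumption, not the implementation prescribed by this specification:

```python
import numpy as np

def face_quadric(a, b, c):
    """Quadric K = p p^T for the plane of triangle (a, b, c), with p = [nx, ny, nz, d]."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)          # unit normal of the triangle
    d = -np.dot(n, a)                  # plane equation: n . x + d = 0
    p = np.append(n, d)
    return np.outer(p, p)

def vertex_quadrics(vertices, faces):
    """Each vertex accumulates the quadrics of its adjacent triangles."""
    Q = np.zeros((len(vertices), 4, 4))
    for i, j, k in faces:
        K = face_quadric(vertices[i], vertices[j], vertices[k])
        Q[i] += K
        Q[j] += K
        Q[k] += K
    return Q

def collapse_cost(Q, vertices, u, v):
    """Cost of collapsing edge (u, v) to its midpoint: h^T (Q[u] + Q[v]) h."""
    target = (vertices[u] + vertices[v]) / 2.0
    h = np.append(target, 1.0)         # homogeneous coordinates of the collapse target
    return float(h @ (Q[u] + Q[v]) @ h)
```

In a full QEM pass, the lowest-cost edge is collapsed, the quadric of the new vertex is taken as Q[u] + Q[v], and the costs of the remaining edges incident to the new vertex are updated, repeating until the target vertex count is reached.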
In some more specific embodiments, the method further comprises: receiving an indication of a grid simplification mode from a user, and adjusting corresponding parameters in the QEM algorithm based on the indication, wherein the grid simplification mode comprises at least one of the following: uniform simplification, or simplification that preserves the details of a specific part.
The default mesh reduction of the QEM algorithm may be uniform reduction, i.e., vertices and edges are reduced uniformly over the whole mesh. When the user gives a specific instruction on the grid simplification mode, the corresponding parameters in the QEM algorithm are adjusted according to the user's instruction to achieve the user's goal. For example, for a digital-human model, the user may want to preserve more facial detail so that the degraded digital-human model still retains vivid facial expressions. After receiving such an instruction from the user, the corresponding parameters in the QEM algorithm are adjusted so that the algorithm collapses edges of the face less often and collapses edges of other parts more often, thereby retaining more face vertices. For example, a larger cost coefficient may be set for face vertices and a smaller cost coefficient for the vertices of other parts, so that when the QEM algorithm computes the collapse cost of a face edge, the resulting cost value is larger than it would otherwise be, and the algorithm therefore selects face mesh edges for collapse less frequently.
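One possible way to realize the "preserve details of a specific part" mode is to scale the collapse cost by a per-region coefficient, as sketched below. The region labels and coefficient values are hypothetical examples, not values prescribed by the specification:

```python
def weighted_collapse_cost(base_cost, u, v, region_of_vertex, region_coeff):
    """Scale an edge's QEM collapse cost by the larger of its endpoints' region coefficients.

    region_of_vertex: dict mapping vertex index -> region name (e.g. "face", "body").
    region_coeff:     dict mapping region name -> cost coefficient; a larger value makes
                      the algorithm collapse edges in that region less often.
    """
    cu = region_coeff.get(region_of_vertex.get(u), 1.0)
    cv = region_coeff.get(region_of_vertex.get(v), 1.0)
    return base_cost * max(cu, cv)

# Hypothetical configuration: keep more detail on the face of a digital-human model.
region_coeff = {"face": 10.0, "body": 1.0}
```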
In other possible embodiments, other mesh simplification algorithms may also be used to simplify the first surface mesh. For example, a vertex-based simplification algorithm may be used, which deletes vertices and the edges connecting them and re-triangulates the remaining vertices into new patches; alternatively, an edge-based simplification algorithm may be used, which selects several edges and replaces each edge with one of its endpoints or its midpoint, producing an edge-collapse effect similar to that of the QEM algorithm. This specification is not limited in this respect.
Next, at step 306, for any second vertex on the second surface mesh, a first vertex corresponding thereto is determined in the first surface mesh, wherein the first vertex has the shortest spatial distance from the second vertex among all vertices of the first surface mesh.
As analyzed above, when mesh simplification is performed, the vertices of the surface mesh of the low-precision model may be new vertices computed by the algorithm, which do not carry information such as material information and bone weight values. Therefore, in step 306, a mapping relationship between each vertex in the second surface mesh and a vertex in the first surface mesh is established, so that values can be assigned to the vertices of the second surface mesh in a subsequent step.
FIG. 5 illustrates a schematic diagram of the vertex mapping relationship between the new and old models according to one embodiment. FIG. 5 (a) shows a partial mesh of the surface mesh of the high-precision model to be simplified, and FIG. 5 (b) shows the partial mesh of the corresponding portion of the surface mesh of the simplified low-precision model. The three edges (v, u), (u, w), and (w, v) in FIG. 5 (a) collapse to the vertex p in FIG. 5 (b). For the vertex p, by calculating the spatial distance between p and each vertex in FIG. 5 (a), the vertex with the smallest spatial distance to p is found, for example vertex w, and the mapping relationship between vertex p and vertex w is then established: p -> w.
The above spatial distance may be measured in a variety of ways. For example, Euclidean distance, Manhattan distance, or the like may be used, without limitation.
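A minimal sketch of building the vertex mapping of step 306 with a k-d tree is shown below. The use of scipy's cKDTree is an implementation choice for illustration only; the Minkowski parameter p switches between Euclidean (p=2) and Manhattan (p=1) distance as mentioned above:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_vertex_mapping(old_vertices, new_vertices, p=2):
    """For each vertex of the simplified (new) mesh, find the index of the
    spatially nearest vertex of the original (old) mesh.

    old_vertices: (N, 3) positions of the first (high-precision) surface mesh.
    new_vertices: (M, 3) positions of the second (simplified) surface mesh.
    p:            2 for Euclidean distance, 1 for Manhattan distance.
    Returns an (M,) array such that mapping[s] is the index of the nearest old vertex d.
    """
    tree = cKDTree(np.asarray(old_vertices))
    _, mapping = tree.query(np.asarray(new_vertices), k=1, p=p)
    return mapping
```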
Finally, in step 308, at least the first material information of the first vertex is assigned to the second vertex, so as to obtain a second three-dimensional model.
The surface color and pattern of a three-dimensional model are represented by its texture map. By using the method of steps 302 to 308, the surface color of the degraded model remains largely similar to that of the original high-precision model, meeting the user's requirements.
In some possible embodiments, the first three-dimensional model further comprises a model skeleton, and any vertex of the first surface grid further has a corresponding skeleton weight value. In this case, step 308 includes: assigning the first material information and the first skeleton weight value of the first vertex to the second vertex to obtain the second three-dimensional model.
In some embodiments, the first three-dimensional model is an animated model rather than a static one, so the vertices of its surface mesh are bound with skeletal weights. The motion of the skeleton drives the motion of the surface-mesh skin, thereby realizing the motion of the whole model. Therefore, after the first three-dimensional model is degraded, in addition to assigning the material information to the second three-dimensional model, the skeletal weights are also assigned to it, so that the animation effect of the model is preserved.
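Based on the mapping established in step 306, the material information and skeleton weights can be copied over as sketched below. The attribute names (uv, bone_indices, bone_weights) are hypothetical placeholders for whatever per-vertex data a model actually stores, and the dictionary layout is an assumption for illustration:

```python
import numpy as np

def transfer_vertex_attributes(mapping, old_attrs):
    """Assign each new vertex the per-vertex attributes of its mapped old vertex.

    mapping:   (M,) integer array from build_vertex_mapping; mapping[s] is the old vertex index.
    old_attrs: dict of per-vertex numpy arrays of the old mesh, e.g.
               {"uv": (N, 2), "bone_indices": (N, 4), "bone_weights": (N, 4)}.
    Returns a dict with the same keys, with each array gathered down to M rows.
    """
    return {name: np.asarray(values)[mapping] for name, values in old_attrs.items()}
```

Because the copy is driven purely by the index mapping, the same gather works whether only material information is transferred (static models) or material information plus skeleton weights (animated models).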
In some more specific embodiments, the method further comprises: receiving an indication from a user to delete one or more specific skeletal nodes, and deleting the one or more skeletal nodes from the second three-dimensional model.
In some possible embodiments, the first surface mesh further comprises at least one set of BlendShape values; the method further comprises: receiving an indication from a user to delete one or more specific groups of BlendShape values, and deleting the one or more groups of BlendShape values from the second three-dimensional model.
In some embodiments, the user may need to further simplify the model to obtain a more lightweight model. In this case, one or more groups of BlendShape values in the high-precision model are deleted according to the user's instructions; and/or one or more skeletal nodes in the high-precision model are deleted according to the user's instructions, so as to further simplify the model.
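A minimal sketch of this user-directed further simplification follows. The dictionary layout of the model and the group and node names are assumptions for illustration, not structures defined by the specification:

```python
def prune_model(model, blendshape_groups_to_drop=(), bone_nodes_to_drop=()):
    """Remove user-specified BlendShape groups and skeletal nodes from a model.

    model: dict with (hypothetical) keys "blendshapes" (dict of group name -> data)
           and "skeleton" (dict of node name -> node data).
    """
    for group in blendshape_groups_to_drop:
        model["blendshapes"].pop(group, None)   # silently ignore groups that do not exist
    for node in bone_nodes_to_drop:
        model["skeleton"].pop(node, None)
    return model

# Hypothetical usage: drop a mouth-shape group and a finger bone to lighten the model.
# pruned = prune_model(model, ["mouth_open"], ["finger_left_01"])
```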
Through the methods of the above embodiments, after the high-precision model is mesh-simplified to obtain the low-precision model, related information of the high-precision model, such as texture maps and skeleton weights, is assigned to the low-precision model through the vertex mapping relationship between the two models. This achieves automatic degradation of the model, greatly saving the cost of three-dimensional modeling and improving production efficiency. At the same time, the model can be further simplified according to the user's instructions to produce an even lighter model.
According to an embodiment of still another aspect, there is also provided a three-dimensional model degradation apparatus. FIG. 6 illustrates a schematic block diagram of a three-dimensional model degradation apparatus that may be deployed in any device, platform, or cluster of devices having computing and processing capabilities, according to one embodiment. As shown in FIG. 6, the apparatus 600 includes:
an obtaining unit 601, configured to obtain a first three-dimensional model, where the first three-dimensional model includes a first surface mesh, any vertex of the first surface mesh has corresponding material information, and the material information includes texture information;
a mesh simplifying unit 602 configured to perform mesh simplification on the first surface mesh, and reduce the number of vertices and edges therein, to obtain a second surface mesh;
a vertex determining unit 603 configured to determine, for any second vertex on the second surface mesh, a first vertex corresponding to the second vertex in the first surface mesh, wherein the first vertex has the shortest spatial distance from the second vertex among all vertices of the first surface mesh;
and a vertex assignment unit 604, configured to assign at least the first material information of the first vertex to the second vertex, so as to obtain a second three-dimensional model.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in any of the above embodiments.
According to an embodiment of yet another aspect, there is also provided a computing device including a memory and a processor, wherein the memory has executable code stored therein, and the processor, when executing the executable code, implements the method described in any of the above embodiments.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not intended to limit its scope; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A method of three-dimensional model degradation, comprising:
acquiring a first three-dimensional model, wherein the first three-dimensional model comprises a first surface grid, any vertex of the first surface grid has corresponding material information, and the material information comprises texture information;
performing grid simplification on the first surface grid to reduce the number of vertexes and edges therein, so as to obtain a second surface grid;
for any second vertex on the second surface grid, determining a first vertex corresponding to the second vertex in the first surface grid, wherein the first vertex has the shortest spatial distance to the second vertex among all vertexes of the first surface grid;
and assigning at least the first material information of the first vertex to the second vertex to obtain a second three-dimensional model.
2. The method of claim 1, wherein performing grid simplification on the first surface grid comprises:
performing grid simplification on the first surface grid using a quadric error metric (QEM) algorithm.
3. The method of claim 2, further comprising:
receiving an indication of a grid simplification mode from a user, and adjusting corresponding parameters in the QEM algorithm based on the indication, wherein the grid simplification mode comprises at least one of the following: uniform simplification, or simplification that preserves the details of a specific part.
4. The method of claim 1, wherein the first surface mesh further comprises at least one set of BlendShape values; the method further comprises:
receiving an indication from a user to delete one or more specific groups of BlendShape values, and deleting the one or more groups of BlendShape values from the second three-dimensional model.
5. The method of claim 1, wherein the first three-dimensional model further comprises a model skeleton, and any vertex of the first surface grid further has a corresponding skeleton weight value; and wherein assigning at least the first material information of the first vertex to the second vertex to obtain a second three-dimensional model comprises:
and assigning the first material information and the first skeleton weight value of the first vertex to the second vertex to obtain a second three-dimensional model.
6. The method of claim 5, further comprising:
receiving an indication from a user to delete one or more specific skeletal nodes, and deleting the one or more skeletal nodes from the second three-dimensional model.
7. The method of claim 1, wherein the spatial distance comprises at least one of: euclidean distance, manhattan distance.
8. A three-dimensional model degradation apparatus, comprising:
an obtaining unit configured to obtain a first three-dimensional model, where the first three-dimensional model includes a first surface mesh, any vertex of the first surface mesh has corresponding material information, and the material information includes texture information;
a grid simplifying unit configured to perform grid simplification on the first surface grid, and reduce the number of vertices and edges therein to obtain a second surface grid;
a vertex determining unit configured to determine, for any second vertex on the second surface mesh, a first vertex corresponding to the second vertex in the first surface mesh, wherein the first vertex has a shortest spatial distance from the second vertex among all vertices of the first surface mesh;
and the vertex assignment unit is configured to assign at least the first material information of the first vertex to the second vertex to obtain a second three-dimensional model.
9. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-7.
10. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-7.
CN202410021046.5A 2024-01-04 2024-01-04 Three-dimensional model degradation method and device Pending CN117541751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410021046.5A CN117541751A (en) 2024-01-04 2024-01-04 Three-dimensional model degradation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410021046.5A CN117541751A (en) 2024-01-04 2024-01-04 Three-dimensional model degradation method and device

Publications (1)

Publication Number Publication Date
CN117541751A true CN117541751A (en) 2024-02-09

Family

ID=89794148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410021046.5A Pending CN117541751A (en) 2024-01-04 2024-01-04 Three-dimensional model degradation method and device

Country Status (1)

Country Link
CN (1) CN117541751A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961411A (en) * 2018-07-02 2018-12-07 南京大学 A kind of simplified method of the complex three-dimensional building model keeping external appearance characteristic
CN115984476A (en) * 2018-07-02 2023-04-18 浙江景致数据技术有限公司 Three-dimensional model cutting method based on texture
CN109345615A (en) * 2018-10-25 2019-02-15 网易(杭州)网络有限公司 Covering data creation method and device, electronic equipment and storage medium
WO2020093307A1 (en) * 2018-11-08 2020-05-14 深圳市大疆创新科技有限公司 Method and device for simplifying three-dimensional mesh model
CN116051708A (en) * 2023-01-30 2023-05-02 四川视慧智图空间信息技术有限公司 Three-dimensional scene lightweight model rendering method, equipment, device and storage medium
CN117078828A (en) * 2023-08-18 2023-11-17 洛阳众智软件科技股份有限公司 Texture model simplification method and device
CN117315192A (en) * 2023-09-14 2023-12-29 哈尔滨工业大学 Three-dimensional grid model simplification method for Chinese space station

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fu Yu et al., "Three-dimensional model simplification technology for virtual display of industrial robots" (面向工业机器人虚拟展示的三维模型简化技术), Machinery & Electronics (《机械与电子》), 24 June 2020 (2020-06-24) *
Zhang Wenxin et al., "An improved quadric error metric simplification algorithm" (一种改进的二次误差测度简化算法), Journal of Guilin University of Electronic Technology (《桂林电子科技大学学报》), 25 February 2015 (2015-02-25) *
Pan Zhiguang, "Research on simplification methods for three-dimensional mesh and point cloud models preserving texture and color features" (保持纹理色彩特征的三维网格与点云模型简化方法研究), China Master's Theses Full-text Database, Information Science and Technology series (《中国优秀硕士学位论文全文数据库电子期刊 信息科技辑》), vol. 2023, no. 1, 15 January 2023 (2023-01-15), pages 3 *

Similar Documents

Publication Publication Date Title
CN112991502B (en) Model training method, device, equipment and storage medium
US20230104644A1 (en) Virtual asset map and index generation systems and methods
WO2000041139A1 (en) Three-dimensional skeleton data compressing device
CN110992495B (en) Method and device for deforming virtual model
CN112669414B (en) Animation data processing method and device, storage medium and computer equipment
US10762682B2 (en) Skinning weights and underlying object suppression of three-dimensional images
CN112950769A (en) Three-dimensional human body reconstruction method, device, equipment and storage medium
CN112598773A (en) Method and device for realizing skeleton skin animation
CN113457137B (en) Game scene generation method and device, computer equipment and readable storage medium
CN112184862A (en) Control method and device of virtual object and electronic equipment
EP1207498A2 (en) Display object generation method in information processing equipment
TW202203797A (en) Method of modeling part of shoe and method of shoe designing
CN108379841A (en) Processing method, device and the terminal of game special
CN115253294A (en) Game role hairstyle adjusting method and device, electronic equipment and storage medium
de Heras Ciechomski et al. A case study of a virtual audience in a reconstruction of an ancient roman odeon in aphrodisias
CN117541751A (en) Three-dimensional model degradation method and device
CA2349600C (en) Three-dimensional skeleton data error absorbing apparatus
CN111744196B (en) Task target guiding method and device in game task
CN113457136B (en) Game animation generation method and device, storage medium and terminal
CN112843704B (en) Animation model processing method, device, equipment and storage medium
CN113313798B (en) Cloud picture manufacturing method and device, storage medium and computer equipment
CN113724169A (en) Skin penetration repairing method and system and computer equipment
CN116402989B (en) Data processing method, device, equipment and medium
US20230196678A1 (en) Transforming three-dimensional model by using correlation to template model with template skeleton
CN114266852A (en) Lantern wind field image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination