WO2020093307A1 - 三维网格模型的简化方法与装置 (Method and device for simplifying three-dimensional mesh model) - Google Patents

三维网格模型的简化方法与装置 (Method and device for simplifying three-dimensional mesh model)

Info

Publication number
WO2020093307A1
WO2020093307A1 (PCT/CN2018/114550, CN2018114550W)
Authority
WO
WIPO (PCT)
Prior art keywords
vertex
vertices
boundary edge
boundary
deleted
Prior art date
Application number
PCT/CN2018/114550
Other languages
English (en)
French (fr)
Inventor
黄胜
梁家斌
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/114550 priority Critical patent/WO2020093307A1/zh
Priority to CN201880041713.9A priority patent/CN110832548A/zh
Publication of WO2020093307A1 publication Critical patent/WO2020093307A1/zh
Priority to US17/307,124 priority patent/US20210256763A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/001 Model-based coding, e.g. wire frame
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2021 Shape modification

Definitions

  • the embodiments of the present invention relate to the technical field of image processing, and in particular, to a simplified method and device for a three-dimensional grid model.
  • fine three-dimensional models can display all details of real objects from all directions, which greatly improves the practicality and viewing value of three-dimensional models, and such models have been widely used.
  • a fine 3D model of a large scene often contains a large number of 3D vertices and triangles.
  • the huge amount of data causes the 3D model to consume a lot of graphics-card resources during rendering, which slows rendering down and makes human-computer interaction feel sluggish, weakening the wider adoption of fine three-dimensional models.
  • Embodiments of the present invention provide a simplified method and device for a three-dimensional grid model to improve the accuracy of the simplified three-dimensional grid model.
  • in a first aspect, the present application provides a simplified method of a three-dimensional mesh model, including: obtaining N non-boundary edges of the three-dimensional mesh model, and determining the deleted error of each of the N non-boundary edges; determining the deletion weight of each non-boundary edge according to the feature parameters of the two vertices of that non-boundary edge; adjusting the deleted error of each non-boundary edge according to its deletion weight; and simplifying the three-dimensional mesh model according to the adjusted deleted error of each non-boundary edge.
  • the present application provides a simplified device for a three-dimensional grid model, including:
  • a memory used to store a computer program;
  • a processor used to execute the computer program, and specifically configured to: obtain N non-boundary edges of the three-dimensional mesh model, and determine the deleted error of each of the N non-boundary edges; determine the deletion weight of each non-boundary edge according to the feature parameters of its two vertices; adjust the deleted error of each non-boundary edge according to its deletion weight; and simplify the three-dimensional mesh model according to the adjusted deleted errors.
  • the simplified method and device of the three-dimensional mesh model in the embodiments of the present application obtain N non-boundary edges of the three-dimensional mesh model and determine the deleted error of each of the N non-boundary edges; determine the deletion weight of each non-boundary edge according to the feature parameters of its two vertices; adjust the deleted error of each non-boundary edge according to its deletion weight; and simplify the three-dimensional mesh model according to the adjusted deleted errors.
  • that is, the embodiments of the present application take into account the influence of the feature parameters of the vertices of non-boundary edges on the simplification of the three-dimensional mesh model: the deletion weight of a non-boundary edge is determined from the feature parameters of its vertices, and this weight is used to adjust the deleted error of the non-boundary edge, so that the adjusted deleted error better conforms to the actual situation, thereby improving the quality of the simplified 3D mesh model.
  • FIG. 1 is a flowchart of a simplified method of a three-dimensional mesh model provided in Embodiment 1 of this application;
  • FIG. 2 is a flowchart of a simplified method of a three-dimensional mesh model provided in Embodiment 2 of this application;
  • FIG. 3 is a flowchart of a simplified method of a three-dimensional mesh model provided in Embodiment 3 of this application;
  • FIG. 4 is an example diagram of deleting a non-boundary edge according to an embodiment of this application;
  • FIG. 5 is a flowchart of a simplified method of a three-dimensional mesh model provided in Embodiment 4 of this application;
  • FIG. 6 is a diagram showing the positional relationship between the model object and the cameras according to an embodiment of this application;
  • FIG. 7 is the original three-dimensional model before simplification according to an embodiment of this application;
  • FIG. 8 is the original three-dimensional model simplified using the method of an embodiment of this application;
  • FIG. 9 is the original three-dimensional model simplified using an existing method;
  • FIG. 10 is a structural diagram of a simplified device of a three-dimensional mesh model provided by an embodiment of this application.
  • the existing mesh simplification technology simply uses the geometric information of the mesh for simplification: for example, vertices are clustered, the vertices classified into one class are merged into a single point, and the topology information is updated. Some very useful information from mesh generation is discarded, for example the distance of the triangle-mesh vertices from the cameras and the flatness (curvature) of each region, which makes the simplified 3D mesh model differ too much from the real object and prevents it from accurately reflecting the feature information of the real thing.
  • the simplified method of the three-dimensional mesh model provided by the embodiments of the present application determines the deletion weight of each non-boundary edge based on the feature parameters of the two vertices of the non-boundary edge (for example, the distance of the triangle-mesh vertices from the cameras, the curvature of the vertices, and the color values of the vertices), and adjusts the deleted error of the non-boundary edge based on that deletion weight, so that the deleted error better matches reality and the detailed features of the object are retained, which gives the simplified 3D mesh model higher quality.
  • FIG. 1 is a flowchart of a simplified method for a three-dimensional mesh model provided in Embodiment 1 of the present application. As shown in FIG. 1, the method in this embodiment may include:
  • S101: Obtain N non-boundary edges of the three-dimensional mesh model, and determine the deleted error of each of the N non-boundary edges.
  • the execution subject of this embodiment may be a device having a function of simplifying a three-dimensional grid model, for example, a simplified device of a three-dimensional grid model, hereinafter referred to as a simplified device.
  • the simplified device can be implemented in software and / or hardware.
  • the simplified apparatus in this embodiment may be a part of an electronic device, for example, a processor of the electronic device.
  • the simplified device of this embodiment may also be a separate electronic device.
  • the electronic device in this embodiment may be, for example, a smart phone, a desktop computer, a notebook computer, a smart bracelet, an augmented reality (AR) device, a virtual application (VA) device, or the like.
  • the three-dimensional grid model in the embodiment of the present application may be generated based on collected multiple pictures, which may be collected by an aerial photography drone, or may be collected by a user using one or more cameras.
  • the three-dimensional mesh model is analyzed to obtain the boundary of the three-dimensional mesh model.
  • the edges of the three-dimensional mesh model include boundary edges and non-boundary edges, where a boundary edge is an edge owned by only one triangular face, and a non-boundary edge is an edge owned by at least two triangular faces.
  • the deletion of the boundary edges of the 3D mesh model will affect the integrity of the 3D mesh model. Therefore, when simplifying the 3D mesh model, the non-boundary edges of the 3D mesh model are mainly studied.
  • in one implementation, the three-dimensional mesh model is a triangular mesh model that includes a certain number of vertices and triangular faces. All edges in the triangular mesh model are obtained, and for each edge the number of triangular faces sharing it is counted; for example, all triangular faces can be traversed to count how many contain the edge. If only one triangular face contains the edge, the edge is a boundary edge; if at least two triangular faces contain the edge, the edge is a non-boundary edge. All non-boundary edges are then obtained; for example, after the N non-boundary edges of the three-dimensional mesh model are obtained, the deleted error of each of the N non-boundary edges is determined.
  • the three-dimensional grid model may also be a grid model of other shapes, such as a trapezoidal grid model, etc. This embodiment is only an exemplary description and is not limited herein.
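  • the following is a minimal sketch of the edge classification just described, written for a triangular mesh stored as vertex-index triples; the function and variable names are illustrative and are not taken from the patent.

```python
from collections import defaultdict

def classify_edges(faces):
    """Split mesh edges into boundary and non-boundary edges.

    faces: list of (i, j, k) vertex-index triples, one per triangular face.
    An edge owned by exactly one face is a boundary edge; an edge shared
    by at least two faces is a non-boundary edge.
    """
    face_count = defaultdict(int)
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            face_count[(min(a, b), max(a, b))] += 1   # undirected edge key

    boundary = [e for e, n in face_count.items() if n == 1]
    non_boundary = [e for e, n in face_count.items() if n >= 2]
    return boundary, non_boundary

# Example: two triangles sharing the edge (1, 2)
faces = [(0, 1, 2), (1, 3, 2)]
print(classify_edges(faces))   # four boundary edges, one non-boundary edge (1, 2)
```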
  • the deleted error of a non-boundary edge represents the amount of change that deleting that non-boundary edge causes to the entire 3D mesh model.
  • the greater the deleted error of a non-boundary edge, the more important the non-boundary edge is to the 3D mesh model and, accordingly, the less likely the non-boundary edge is to be deleted.
  • further, the deleted error of a non-boundary edge can be obtained by calculating, after the non-boundary edge is deleted, the distance from the newly generated vertex to the original triangular faces; in another implementation, it can also be obtained from the distance from the midpoint of the non-boundary edge, or another suitable point, to the original triangular faces. For example, when the sum of the distances between the newly generated vertex (or the midpoint or another suitable point) and the original triangular faces is smallest, the deleted error of the non-boundary edge is smallest.
  • the embodiment of the present application may determine the deleted error of each non-boundary edge based on the quadric error metric method, which specifically includes the following steps:
  • take a non-boundary edge l = (v1, v2) as an example, where v1 and v2 are the two vertices of the non-boundary edge l, and the triangular face on which the non-boundary edge l lies is (v1, v2, v3).
  • 1. compute the quadric error measurement matrices of the vertices v1 and v2, namely the Q matrices. The Q matrix reflects the sum of the squared distances from a vertex to the surrounding triangular faces. Vertex v1 is used as an example for description; vertex v2 is handled in the same way.
  • first, the unit normal vector of vertex v1 is computed; this normal vector is replaced with the normal vector of the triangular face on which vertex v1 lies. The normal vector of the face is computed as n = (v2 - v1) × (v3 - v1), where × denotes the vector cross product; after n is obtained, it is normalized to unit length.
  • the coordinates of vertex v1 are known; let them be p = (x, y, z, 1)^T, and consider a three-dimensional plane q = (a, b, c, d)^T satisfying ax + by + cz + d = 0. The plane coefficients satisfy (a, b, c)^T = n and d = -(ax + by + cz). Since the normal vector n is three-dimensional, the coefficients a, b, c and the coefficient d can be obtained in one-to-one correspondence according to the above method.
  • the Q1 matrix of vertex v1 is q q^T, i.e. the 4×4 matrix whose rows are (a^2, ab, ac, ad), (ab, b^2, bc, bd), (ac, bc, c^2, cd) and (ad, bd, cd, d^2); the Q2 matrix of vertex v2 is obtained in the same way.
  • 2. compute the deleted error of the non-boundary edge l: after the non-boundary edge l is deleted, a new vertex p is generated whose Q matrix is Qp = Q1 + Q2. The coordinates of the new vertex p (in homogeneous coordinates) are obtained by solving a linear system built from the entries q_ij of Qp. If the coefficient matrix is invertible, p is the unique solution, namely the new vertex with the smallest sum of squared distances to the surrounding triangular faces, and the deleted error of the non-boundary edge l can then be obtained from the distance of this new vertex to the original triangular faces. If the coefficient matrix is not invertible, the midpoint of the non-boundary edge is used instead, and the deleted error of the non-boundary edge l is obtained from this midpoint.
  • the deleted error of the non-boundary edge l is p^T Qp p.
  • according to the above method, the deleted error of each of the N non-boundary edges can be obtained.
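  • the sketch below illustrates this quadric-error computation under the standard Garland-Heckbert formulation, in which each vertex accumulates the plane quadrics q q^T of its incident faces; it falls back to the edge midpoint when the 4×4 system is singular, as described above. All names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def plane_quadric(v1, v2, v3):
    """Fundamental quadric q q^T of the plane through a triangular face."""
    n = np.cross(v2 - v1, v3 - v1)
    norm = np.linalg.norm(n)
    if norm == 0.0:                       # degenerate face contributes nothing
        return np.zeros((4, 4))
    a, b, c = n / norm                    # unit normal gives plane coefficients
    d = -np.dot((a, b, c), v1)            # d = -(a x + b y + c z) at a point on the face
    q = np.array([a, b, c, d])
    return np.outer(q, q)

def vertex_quadrics(vertices, faces):
    """Q matrix of every vertex: sum of the quadrics of its incident faces."""
    Q = np.zeros((len(vertices), 4, 4))
    for i, j, k in faces:
        Kp = plane_quadric(vertices[i], vertices[j], vertices[k])
        Q[i] += Kp; Q[j] += Kp; Q[k] += Kp
    return Q

def edge_deleted_error(Q, vertices, edge):
    """Deleted error p^T Qp p of collapsing a non-boundary edge (i, j)."""
    i, j = edge
    Qp = Q[i] + Q[j]
    A = Qp.copy()
    A[3] = [0.0, 0.0, 0.0, 1.0]           # solve A p = (0, 0, 0, 1)^T for the optimal vertex
    try:
        p = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))
    except np.linalg.LinAlgError:         # singular system: use the edge midpoint instead
        p = np.append((vertices[i] + vertices[j]) / 2.0, 1.0)
    return float(p @ Qp @ p), p
```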
  • S102 Determine the deletion weight of each non-boundary edge according to the characteristic parameters of the two vertices of each non-boundary edge.
  • the feature parameters of the two vertices of a non-boundary edge involved in the embodiments of the present application include, but are not limited to, one or more of the following: the distance from the vertex to the camera, the curvature of the vertex, and the color value of the vertex.
  • for example, in the process of generating a three-dimensional model from aerial images, the triangular mesh is generated from dense point clouds, and these dense point clouds are extracted from the information between the photos. Each photo represents a camera, and the real position of each camera can be computed; therefore, in the process of simplifying the mesh, the embodiment of the present application considers how close each vertex is to the camera positions.
  • objects close to the camera should be clear and rich in detail, while the details of objects far from the camera are slightly blurred. Correspondingly, in the actual 3D mesh model, scenes close to the camera should keep more detail, that is, more triangular faces should be retained, while scenes far away from the camera can retain fewer triangular faces. In this way, the distance from the vertices of a non-boundary edge to the cameras can be used to set the deletion weight of the non-boundary edge, so that the simplified 3D mesh model retains more detailed information.
  • for example, a plane can be described by three points, whereas a complex shape needs more points to describe it; planar and non-planar regions can therefore be distinguished in the scene.
  • the edge weights in planar regions are set very small and the weights in regions of complex shape are increased, so that during simplification more triangular faces are deleted in planar regions while the triangular faces of complex regions are retained to the greatest extent.
  • in this way, the curvature of the vertices of a non-boundary edge can be used to set the deletion weight of the non-boundary edge, so that the simplified 3D mesh model retains more important information.
  • for example, the more consistent the colors around a vertex, the greater the probability that the vertex lies on a plane, and the greater the probability that it can be deleted. In this way, the color consistency around the vertices is used to set the deletion weight of the non-boundary edges.
  • the feature parameters of the two vertices of a non-boundary edge involved in the embodiments of the present application are not limited to the above examples. For example, the feature parameter of a vertex can also be set to the shape quality of the triangular faces on which the vertex lies; the better the shape quality of those triangular faces, the higher the consistency of the vertex normal vectors and the greater the probability that the vertex can be deleted. In this way, the shape quality of the triangular faces on which a vertex lies is used to set the deletion weight of non-boundary edges.
  • optionally, in this step, the deletion weight of each non-boundary edge may be determined based on the same feature parameter of the vertices; for example, the deletion weight of every non-boundary edge is determined based on the distance from the vertices to the cameras.
  • optionally, in this step, the deletion weights may be determined based on different feature parameters for different edges; for example, the deletion weights of some non-boundary edges are determined based on the distance from their vertices to the cameras, while the deletion weights of other non-boundary edges are determined based on the curvature of their vertices.
  • that is, in this embodiment, the vertex feature parameters on which the deletion weight of each non-boundary edge is based may be the same or different, depending on the actual situation, which is not limited in this embodiment.
  • optionally, the deletion weight of each non-boundary edge may be determined based on multiple feature parameters of the vertices; for example, the deletion weight corresponding to each feature parameter is computed, and the deletion weights corresponding to the individual feature parameters are superimposed to give the deletion weight of the non-boundary edge (a sketch combining several per-parameter weights is given after the per-parameter examples below).
  • S103: Adjust the deleted error of each non-boundary edge according to the deletion weight of that non-boundary edge.
  • the deletion weight of a non-boundary edge is used to adjust the deleted error of that non-boundary edge.
  • in one example, the deleted error of the non-boundary edge minus its deletion weight is used as the adjusted deleted error.
  • in another example, the deleted error of the non-boundary edge plus its deletion weight is used as the adjusted deleted error.
  • in another example, the deleted error of the non-boundary edge multiplied by its deletion weight is used as the adjusted deleted error.
  • optionally, the deleted error of the non-boundary edge may also be adjusted using its deletion weight in other ways.
  • S104: Simplify the three-dimensional mesh model according to the adjusted deleted error of each non-boundary edge.
  • the deleted errors of the non-boundary edges are adjusted as above so that the adjusted deleted errors are more realistic; simplifying the 3D mesh model based on deleted errors that conform to reality can therefore improve the quality of the simplification.
  • simplifying the three-dimensional mesh model according to the adjusted deleted errors may consist of sorting the adjusted deleted errors of the non-boundary edges and deleting the non-boundary edges whose deleted errors are small, thereby simplifying the 3D mesh model.
  • the method of the embodiment of the present application obtains N non-boundary edges of the three-dimensional mesh model and determines the deleted error of each of the N non-boundary edges; determines the deletion weight of each non-boundary edge according to the feature parameters of its two vertices; adjusts the deleted error of each non-boundary edge according to its deletion weight; and simplifies the three-dimensional mesh model according to the adjusted deleted errors.
  • that is, the embodiment of the present application takes into account the influence of the feature parameters of the vertices of non-boundary edges on the simplification of the three-dimensional mesh model, determines the deletion weights of non-boundary edges based on those feature parameters, and uses the deletion weights to adjust the deleted errors of the non-boundary edges, so that the adjusted deleted errors better conform to the actual situation, thereby improving the quality of the simplified 3D mesh model.
  • FIG. 2 is a flowchart of a simplified method of a three-dimensional mesh model provided in Embodiment 2 of the present application. On the basis of the foregoing embodiment, this embodiment relates to the specific process of determining the deletion weight of each non-boundary edge according to the feature parameters of its two vertices. As shown in FIG. 2, the above S102 may include:
  • S201: Determine the deletion weight of each of the two vertices of the non-boundary edge according to the feature parameters of the two vertices.
  • S202: Determine the deletion weight of the non-boundary edge according to the deletion weight of each vertex of the non-boundary edge.
  • the above S201 and S202 are a specific implementation of the above S102. S201 and S202 are described below through the following examples, and are not limited to these examples.
  • in a first example, the feature parameter of a vertex includes the distance from the vertex to the cameras. In this case, the above S201 may include: obtaining, for each vertex of the non-boundary edge, the minimum of its distances to all cameras; and determining the deletion weight of each vertex according to the minimum distance corresponding to that vertex, where the smaller the minimum distance corresponding to a vertex, the greater the deletion weight of that vertex.
  • optionally, the minimum of the distances from each vertex of the non-boundary edge to all cameras may be obtained as follows: first, the poses of all cameras in the coordinate system of the three-dimensional mesh model are obtained, for example through a structure-from-motion reconstruction; then, according to the poses of all cameras, the distances from the vertex to the centers of all cameras are determined, and the minimum of these distances is taken as the minimum distance corresponding to the vertex.
  • optionally, the minimum of the distances from each vertex to all cameras may also be obtained by other existing methods.
  • the non-boundary edge l is again taken as an example; other non-boundary edges are handled in the same way.
  • referring to the above example, assume the non-boundary edge l has vertices v1 and v2. The distances from vertex v1 to all cameras are obtained, and the minimum of these distances is denoted d1; similarly, the distances from vertex v2 to all cameras are obtained, and the minimum is denoted d2.
  • next, the deletion weight of vertex v1 is determined from the minimum distance d1 corresponding to v1, and the deletion weight of vertex v2 is determined from the minimum distance d2 corresponding to v2.
  • the farther a vertex is from the cameras, the smaller its weight and the more easily it is deleted during mesh simplification; that is, the smaller the minimum distance corresponding to a vertex, the greater its deletion weight. For example, the smaller d1 is, the greater the deletion weight of vertex v1, and the smaller d2 is, the greater the deletion weight of vertex v2, and the less likely these vertices are to be deleted during simplification.
  • optionally, the reciprocal of the square of the minimum distance corresponding to a vertex is used as the deletion weight of that vertex: 1/d1^2 is the deletion weight of vertex v1 and 1/d2^2 is the deletion weight of vertex v2.
  • that is, when the feature parameter of the vertices includes the distance from the vertex to the cameras, the deletion weight of each of the two vertices of a non-boundary edge may be obtained by the above method.
  • next, the deletion weight of the non-boundary edge is determined according to the deletion weights of its vertices.
  • optionally, the average of the deletion weights of the two vertices of the non-boundary edge is taken as the deletion weight of the non-boundary edge; for example, the deletion weight of the non-boundary edge l is the average of 1/d1^2 and 1/d2^2. The average may be a weighted average or an arithmetic average.
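  • below is a minimal sketch of this camera-distance weight, assuming the camera centers have already been expressed in the model coordinate system (for example, recovered by structure from motion); the function names are illustrative and the arithmetic average of the two vertex weights is used as the edge weight, as described above.

```python
import numpy as np

def vertex_distance_weight(vertex, camera_centers):
    """Deletion weight 1 / d_min^2, where d_min is the smallest distance from
    the vertex to any camera center (both given in the model coordinate system)."""
    d_min = min(np.linalg.norm(vertex - c) for c in camera_centers)
    return 1.0 / (d_min ** 2)

def edge_distance_weight(v1, v2, camera_centers):
    """Deletion weight of a non-boundary edge: average of its two vertex weights."""
    return 0.5 * (vertex_distance_weight(v1, camera_centers) +
                  vertex_distance_weight(v2, camera_centers))

# Example with two hypothetical camera centers
cams = [np.array([0.0, 0.0, 5.0]), np.array([3.0, 0.0, 4.0])]
w = edge_distance_weight(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), cams)
```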
  • in a second example, if the feature parameter of a vertex includes the curvature of the vertex, the above S201 may include: obtaining the curvature of each vertex of the non-boundary edge; and determining the deletion weight of each vertex according to its curvature, where the greater the curvature of a vertex, the greater the deletion weight of the vertex.
  • the non-boundary edge l is again taken as an example; other non-boundary edges are handled in the same way.
  • referring to the above example, assume the non-boundary edge l has vertices v1 and v2; the curvatures of vertex v1 and vertex v2 are determined according to formula (3), in which x, y, z are the coordinates of the vertex and x′, x″ are the corresponding first and second derivatives, respectively. The range of the curvature ρ is [0, 1]; in this way, the curvatures of vertex v1 and vertex v2, denoted ρ1 and ρ2 respectively, can be obtained.
  • next, the deletion weights of vertex v1 and vertex v2 are determined from their curvatures. The smaller the curvature of a vertex, the flatter the surface at that vertex and the more likely it can be deleted; that is, the greater the curvature of a vertex, the greater its deletion weight. For example, the larger ρ1 is, the larger the deletion weight of vertex v1, and the larger ρ2 is, the larger the deletion weight of vertex v2, and the less likely these vertices are to be deleted during mesh simplification.
  • optionally, the square of the curvature of a vertex is used as its deletion weight: the deletion weight of vertex v1 is ρ1^2 and the deletion weight of vertex v2 is ρ2^2.
  • that is, when the feature parameter of the vertices includes the vertex curvature, the deletion weight of each of the two vertices of a non-boundary edge may be obtained by the above method.
  • next, the deletion weight of the non-boundary edge is determined according to the deletion weights of its vertices.
  • optionally, the average of the deletion weights of the two vertices of the non-boundary edge is taken as the deletion weight of the non-boundary edge; for example, the deletion weight of the non-boundary edge l is the average of ρ1^2 and ρ2^2. The average may be a weighted average or an arithmetic average.
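  • formula (3) is not reproduced in this text, so the sketch below substitutes a common discrete stand-in, the normalized angle deficit at a vertex, purely for illustration; the patent's own curvature definition should be used in its place. The squared curvatures are averaged over the two vertices as described above; all names are illustrative.

```python
import numpy as np

def vertex_angle_deficit(v_idx, vertices, faces):
    """Discrete curvature proxy in [0, 1]: normalized angle deficit at a vertex.
    (Illustrative stand-in for the patent's formula (3), which is not given here.)"""
    angle_sum = 0.0
    for i, j, k in faces:
        if v_idx not in (i, j, k):
            continue
        others = [t for t in (i, j, k) if t != v_idx]
        e1 = vertices[others[0]] - vertices[v_idx]
        e2 = vertices[others[1]] - vertices[v_idx]
        cosang = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
        angle_sum += np.arccos(np.clip(cosang, -1.0, 1.0))
    deficit = abs(2.0 * np.pi - angle_sum) / (2.0 * np.pi)
    return min(deficit, 1.0)

def edge_curvature_weight(i, j, vertices, faces):
    """Deletion weight of a non-boundary edge: average of the squared vertex curvatures."""
    r1 = vertex_angle_deficit(i, vertices, faces)
    r2 = vertex_angle_deficit(j, vertices, faces)
    return 0.5 * (r1 ** 2 + r2 ** 2)
```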
  • in a third example, the feature parameters of the vertices include the color values of the vertices. In this case, the above S201 may include: obtaining the color values of the two vertices of the non-boundary edge, and the color values of the vertices surrounding the two vertices; and, for each of the two vertices, determining the deletion weight of the vertex according to the variance between the color value of the vertex and the color values of the surrounding vertices, where the smaller the variance between the color value of the vertex and the color values of the surrounding vertices, the greater the deletion weight of the vertex.
  • the more consistent the color of a vertex with the colors of the surrounding vertices, the greater the probability that these vertices lie on a plane, and the greater the probability that they can be deleted.
  • the non-boundary edge l is again taken as an example; other non-boundary edges are handled in the same way.
  • referring to the above example, assume the non-boundary edge l has vertices v1 and v2. The color value of vertex v1, denoted y1, and the color values of the vertices around v1 are obtained; similarly, the color value of vertex v2, denoted y2, and the color values of the vertices around v2 are obtained.
  • next, the variance between the color value of vertex v1 and the color values of the vertices around v1, and the variance between the color value of vertex v2 and the color values of the vertices around v2, are determined; since v1 and v2 are surrounding vertices of each other, the variance between the color values of v1 and v2 is also taken into account. Optionally, the variance of the vertex color values can be computed separately on the three RGB channels.
  • when all of the above variances are very small (for example, less than 1), the colors of these points are considered consistent, indicating that vertices v1 and v2 are more easily deleted during mesh simplification; that is, the smaller the variance between a vertex's color value and the color values of the surrounding vertices, the smaller the deletion weight of the vertex and the easier it is to delete.
  • optionally, the variance between the color value of a vertex and the color values of the surrounding vertices is used as the deletion weight of that vertex.
  • that is, when the feature parameter of the vertices includes the color value of the vertex, the deletion weight of each of the two vertices of a non-boundary edge may be obtained by the above method.
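  • the sketch below follows the option just described, using the one-ring color variance of a vertex as its deletion weight; averaging the per-channel RGB variances into one number and averaging the two vertex weights into an edge weight are assumptions, since the text only says the channels are handled separately. Names are illustrative.

```python
import numpy as np

def vertex_color_variance(v_idx, colors, neighbors):
    """Per-vertex color variance against its one-ring neighbours, computed
    separately on the R, G, B channels and then averaged into one scalar."""
    ring = [colors[n] for n in neighbors[v_idx]] + [colors[v_idx]]
    ring = np.asarray(ring, dtype=float)           # shape (k, 3)
    return float(np.mean(np.var(ring, axis=0)))    # variance per channel, then mean

def edge_color_weight(i, j, colors, neighbors):
    """Deletion weight of a non-boundary edge from the colour variances of its
    two vertices (here simply their average)."""
    return 0.5 * (vertex_color_variance(i, colors, neighbors) +
                  vertex_color_variance(j, colors, neighbors))
```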
  • as noted above, the feature parameters of the two vertices of a non-boundary edge involved in the embodiments of the present application are not limited to the above examples; for example, the feature parameter of a vertex can also be set to the shape quality of the triangular faces on which the vertex lies, and the deletion weight of each of the two vertices of the non-boundary edge is obtained from that shape quality.
  • next, the deletion weight of the non-boundary edge is determined according to the deletion weights of its vertices; optionally, the average of the deletion weights of the two vertices is taken as the deletion weight of the non-boundary edge.
  • in the embodiments of the present application, when the feature parameter of a vertex is at least one of the distance from the vertex to the cameras, the curvature of the vertex, and the color value of the vertex, the deletion weight of each of the two vertices is obtained, and the deletion weight of the non-boundary edge is then determined from the deletion weights of its vertices, so that the deletion weight of the non-boundary edge is determined accurately.
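  • as promised after the discussion of superimposing several feature parameters, here is a minimal sketch that combines the per-parameter edge weights from the earlier sketches; reading "superimposed" as a plain sum is an assumption, and the helper names come from the sketches above, not from the patent.

```python
def combined_edge_weight(i, j, vertices, faces, colors, neighbors, camera_centers):
    """Superimpose the deletion weights from several vertex feature parameters
    (one possible interpretation: a plain sum of the per-parameter edge weights)."""
    return (edge_distance_weight(vertices[i], vertices[j], camera_centers)
            + edge_curvature_weight(i, j, vertices, faces)
            + edge_color_weight(i, j, colors, neighbors))
```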
  • for the manner of simplifying the three-dimensional mesh model according to the adjusted deleted errors of the non-boundary edges in S104, the embodiments of the present application provide the two approaches shown in FIG. 3 and FIG. 5.
  • FIG. 3 is a flowchart of a simplified method of a three-dimensional mesh model provided in Embodiment 3 of the present application. On the basis of the foregoing embodiment, as shown in FIG. 3, the above S104 may include:
  • S301: From the adjusted deleted errors of the N non-boundary edges, obtain the deleted errors of M non-boundary edges that are less than a first preset threshold, where M is a positive integer not greater than N.
  • S302: For each of the M non-boundary edges, delete the two vertices of the non-boundary edge to generate a new vertex.
  • S303: Connect the new vertex with the surrounding vertices.
  • the adjusted deleted errors of the N non-boundary edges are sorted; the sorting may be from large to small or from small to large. Then, from the sorted adjusted deleted errors, the deleted errors of the M non-boundary edges that are less than the first preset threshold are obtained.
  • the M non-boundary edges obtained above, whose deleted errors are less than the first preset threshold, are the non-boundary edges that need to be deleted during mesh simplification.
  • assuming that the non-boundary edge l described above belongs to the M non-boundary edges, as shown in FIG. 4, the process of deleting the non-boundary edge l is: the two vertices v1 and v2 of the non-boundary edge l are deleted and merged into a new vertex p; the new vertex p is then connected to the surrounding vertices, completing the deletion of the non-boundary edge l.
  • the deletion process of the other non-boundary edges among the M non-boundary edges is the same as the deletion process of the non-boundary edge l described above.
  • in the embodiment of the present application, the mesh is simplified by deleting the M non-boundary edges whose adjusted deleted errors are less than the first preset threshold, so that the simplified mesh better conforms to the actual situation and has higher quality; moreover, the whole deletion process is simple and can be completed in one pass.
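  • below is a minimal sketch of this FIG. 3 style, threshold-based variant, assuming the adjusted deleted errors and the collapsed-vertex positions have already been computed for every non-boundary edge (for instance with the earlier sketches); a simple alias table keeps the vertex indices consistent across successive collapses, and the topology checks discussed later (face flips, sharp triangles) are omitted. Names are illustrative.

```python
import numpy as np

def collapse_edge(vertices, faces, edge, new_pos):
    """Collapse edge (i, j): both endpoints are replaced by one new vertex."""
    i, j = edge
    vertices = np.vstack([vertices, np.asarray(new_pos)[:3]])
    new_idx = len(vertices) - 1
    new_faces = []
    for face in faces:
        face = tuple(new_idx if v in (i, j) else v for v in face)
        if len(set(face)) == 3:                    # drop faces that became degenerate
            new_faces.append(face)
    return vertices, new_faces

def simplify_below_threshold(vertices, faces, non_boundary_edges,
                             adjusted_error, new_vertex, threshold):
    """Delete every non-boundary edge whose adjusted deleted error is below
    `threshold`. `adjusted_error[e]` and `new_vertex[e]` are the precomputed
    adjusted error and collapsed position of edge e = (i, j)."""
    alias = {}                                     # old vertex index -> merged index

    def find(v):                                   # follow merges to the current index
        while v in alias:
            v = alias[v]
        return v

    doomed = sorted((e for e in non_boundary_edges if adjusted_error[e] < threshold),
                    key=lambda e: adjusted_error[e])
    for e in doomed:
        i, j = find(e[0]), find(e[1])
        if i == j:                                 # endpoints already merged elsewhere
            continue
        vertices, faces = collapse_edge(vertices, faces, (i, j), new_vertex[e])
        alias[i] = alias[j] = len(vertices) - 1
    return vertices, faces
```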
  • FIG. 5 is a flowchart of a simplified method of a three-dimensional mesh model provided in Embodiment 4 of the present application. On the basis of the foregoing embodiment, as shown in FIG. 5, the above S104 may include:
  • S501: Sort the N non-boundary edges according to the adjusted deleted errors.
  • S502: Delete the two vertices of the non-boundary edge with the smallest deleted error among the N non-boundary edges, generating a new vertex.
  • S503: Connect the new vertex with the surrounding vertices.
  • S504: Obtain the number of triangular faces of the three-dimensional mesh model.
  • S505: Determine whether the number of triangular faces reaches a second preset threshold.
  • S506: If not, determine the deleted errors of the new non-boundary edges formed by the new vertex and the surrounding vertices, sort the deleted errors of all current non-boundary edges, delete the non-boundary edge with the smallest deleted error, and return to step S504.
  • S507: If so, end.
  • that is, the embodiment of the present application deletes the non-boundary edges with the smallest deleted error one by one: the N non-boundary edges are sorted according to their adjusted deleted errors, and the non-boundary edge with the smallest deleted error is deleted. The current number of triangular faces of the three-dimensional mesh model is then obtained, for example by subtracting from the initial number of triangular faces the two faces removed with each deleted non-boundary edge, or by traversing the mesh. If this number has reached the second preset threshold, the simplification ends; otherwise, the deleted errors of the new non-boundary edges formed by the new vertex and the surrounding vertices are determined, all current deleted errors are re-sorted, the non-boundary edge with the smallest deleted error is deleted, and the process returns to S504 until the number of triangular faces reaches the second preset threshold.
  • the method of the embodiment of the present application further improves the precision of the simplification of the three-dimensional mesh model by deleting the non-boundary edges with the smallest deleted error one by one.
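  • below is a minimal sketch of this FIG. 5 style, greedy variant, reusing the hypothetical helpers from the earlier sketches (classify_edges, vertex_quadrics, edge_deleted_error, collapse_edge). It re-sorts all edges on every iteration for brevity, whereas a priority queue and the face-flip check described next would be used in practice; `edge_weight` is any caller-supplied deletion-weight function such as those sketched above.

```python
def simplify_to_budget(vertices, faces, edge_weight, target_faces):
    """Repeatedly collapse the non-boundary edge with the smallest adjusted
    deleted error (weight x error, the multiplicative adjustment above) until
    the number of triangular faces reaches `target_faces`."""
    while len(faces) > target_faces:
        Q = vertex_quadrics(vertices, faces)         # from the quadric sketch above
        _, non_boundary = classify_edges(faces)      # from the classification sketch
        if not non_boundary:
            break
        best_edge, best_adj, best_p = None, float("inf"), None
        for e in non_boundary:
            err, p = edge_deleted_error(Q, vertices, e)
            adj = err * edge_weight(e[0], e[1], vertices, faces)
            if adj < best_adj:
                best_edge, best_adj, best_p = e, adj, p
        # a full implementation would also skip collapses that flip faces or
        # create sharp (sliver) triangles, as described in the text
        new_vertices, new_faces = collapse_edge(vertices, faces, best_edge, best_p)
        if len(new_faces) == len(faces):             # no progress; avoid looping forever
            break
        vertices, faces = new_vertices, new_faces
    return vertices, faces
```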
  • in one example, deleting the two vertices of the non-boundary edge to generate a new vertex may include: determining whether deleting the non-boundary edge would cause a triangular face to flip or a sharp triangular face to be generated; if not, deleting the non-boundary edge; and generating a new vertex from the two vertices of the non-boundary edge.
  • that is, to ensure that the mesh remains intact and accurate after a non-boundary edge is deleted, each time a non-boundary edge is to be deleted it is determined whether deleting it would cause a sudden change in the mesh shape, for example a triangular face flip or the generation of sharp triangular faces. If such a change would occur, the non-boundary edge cannot be deleted; if not, the non-boundary edge can be deleted, which further improves the reliability of the deletion and ensures the integrity of the mesh model.
  • optionally, generating a new vertex from the two vertices of the non-boundary edge may include: obtaining the quadric error measurement matrices of the two vertices of the non-boundary edge; and determining the coordinates of the new vertex according to the sum of the quadric error measurement matrices of the two vertices.
  • continuing with the non-boundary edge l as an example, the two vertices of l are v1 and v2; the quadric error measurement matrix of vertex v1 is the Q1 matrix above, the quadric error measurement matrix of vertex v2 is the Q2 matrix above, and the quadric error measurement matrix of the new vertex p generated after the non-boundary edge l is deleted is Qp = Q1 + Q2. The coordinates of the new vertex p (in homogeneous coordinates) are obtained by solving the linear system built from the entries q_ij of Qp; if the coefficient matrix is invertible, p is its unique solution, and if the coefficient matrix is not invertible, the coordinates of the new vertex are determined from the coordinates of the two vertices, for example as the midpoint of the non-boundary edge.
  • the coordinates of the new vertices generated by deleting other non-boundary edges are obtained with reference to the above description.
  • the effect of the simplification method provided by the embodiments of the present application is illustrated with an example below.
  • FIG. 6 shows the positional relationship between the model object (a stone Pixiu statue) and the cameras; it can be seen that the cameras shoot around the model, that is, the Pixiu is close to the cameras while the surrounding grass and road are far from the cameras.
  • FIG. 7 shows the original three-dimensional model before simplification (only a part of it is shown); the model has dense triangular faces and rich detail.
  • the result simplified with the added weights is shown in FIG. 8, and the result simplified without weights is shown in FIG. 9 (the MeshLab decimate result), where the numbers of triangular faces after simplification in FIG. 8 and FIG. 9 are the same.
  • as shown in FIG. 8, the mesh obtained by the method of this embodiment has a large number of triangular faces and richer detail in the region of interest, while the meshes in the surrounding regions of no interest are sparse; the reason is that, because of the added weights, fewer edges are deleted at the center of the model and more edges are deleted at its periphery.
  • as shown in FIG. 9, without the weights, edges at the center of the model and around it are deleted according to the same criterion, so more edges are deleted at the center of the model.
  • a person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when the program is executed it performs the steps of the above method embodiments. The foregoing storage media include media that can store program code, such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, or optical discs.
  • FIG. 10 is a structural diagram of a simplified device for a three-dimensional mesh model provided by an embodiment of the present application.
  • the simplified device for a three-dimensional mesh model 100 includes a memory 110 and a processor 120.
  • the memory 110 is used to store a computer program
  • the processor 120 is configured to execute the computer program, and is specifically configured to: obtain N non-boundary edges of the three-dimensional mesh model and determine the deleted error of each of the N non-boundary edges; determine the deletion weight of each non-boundary edge according to the feature parameters of its two vertices; adjust the deleted error of each non-boundary edge according to its deletion weight; and simplify the three-dimensional mesh model according to the adjusted deleted errors of the non-boundary edges.
  • the simplified device of the three-dimensional mesh model in the embodiments of the present application may be used to execute the technical solutions of the above-described method embodiments, and the implementation principles and technical effects are similar, and are not repeated here.
  • the processor 120 is specifically configured to determine the deletion weight of each of the two vertices according to the feature parameters of the two vertices of the non-boundary edge, and to determine the deletion weight of the non-boundary edge according to the deletion weight of each vertex of the non-boundary edge.
  • the characteristic parameter of the vertex includes any one of the following: the distance from the vertex to the camera, the curvature of the vertex, and the color value of the vertex.
  • when the feature parameter of a vertex includes the distance from the vertex to the camera, the processor 120 is specifically configured to obtain, for each vertex of the non-boundary edge, the minimum of its distances to all cameras, and to determine the deletion weight of each vertex according to the minimum distance corresponding to that vertex, where the smaller the minimum distance corresponding to a vertex, the greater the deletion weight of the vertex.
  • the processor 120 is specifically configured to use the reciprocal of the square value of the minimum distance corresponding to the vertex as the deletion weight of the vertex.
  • the processor 120 is specifically configured to acquire the poses of all cameras in the coordinate system of the three-dimensional mesh model, to determine, according to the poses of all cameras, the distances from the vertex to the centers of all cameras, and to take the minimum of all the distances as the minimum distance corresponding to the vertex.
  • when the feature parameter of a vertex includes the curvature of the vertex, the processor 120 is specifically configured to obtain the curvature of each vertex of the non-boundary edge, and to determine the deletion weight of each vertex according to its curvature, where the greater the curvature of a vertex, the greater the deletion weight of the vertex.
  • the processor 120 is specifically configured to use the square value of the curvature of the vertex as the deletion weight of the vertex.
  • when the feature parameter of a vertex includes the color value of the vertex, the processor 120 is specifically configured to obtain the color values of the two vertices of the non-boundary edge and the color values of the vertices surrounding the two vertices, and, for each of the two vertices, to determine the deletion weight of the vertex according to the variance between the color value of the vertex and the color values of the surrounding vertices, where the smaller the variance between the color value of the vertex and the color values of the surrounding vertices, the greater the deletion weight of the vertex.
  • the processor 120 is specifically configured to obtain, from the adjusted deleted errors of the N non-boundary edges, the deleted errors of M non-boundary edges that are less than the first preset threshold, where M is a positive integer not greater than N; for each of the M non-boundary edges, to delete the two vertices of the non-boundary edge to generate a new vertex; and to connect the new vertex with the surrounding vertices.
  • the processor 120 is specifically configured to sort the N non-boundary edges according to the adjusted deleted errors; to delete the two vertices of the non-boundary edge with the smallest deleted error among the N non-boundary edges, generating a new vertex; and to connect the new vertex with the surrounding vertices.
  • the processor 120 is further specifically configured to determine whether deleting the non-boundary edge would cause a triangular face to flip or generate a sharp triangular face; if not, to delete the non-boundary edge; and to generate a new vertex according to the two vertices of the non-boundary edge.
  • the processor 120 is specifically configured to obtain the quadric error measurement matrices of the two vertices of the non-boundary edge, and to determine the coordinates of the new vertex according to the sum of the quadric error measurement matrices of the two vertices.
  • the processor 120 is further specifically configured to determine the coordinates of the new vertex according to the coordinates of the two vertices if the sum matrix is an irreversible matrix.
  • the processor 120 is further specifically configured to obtain the number of triangular faces of the three-dimensional mesh model; determine whether the number of triangular faces reaches the second preset threshold, and if so, stop; If not, determine the deleted error of the new non-boundary edge formed by the new vertex and the surrounding vertices, and sort the deleted errors of all current non-boundary edges, and delete the non-boundary edge with the smallest deleted error until The number of triangles reaches a second preset threshold.
  • the processor 120 is specifically configured to use the product of the deletion weight of the non-boundary edge and the deleted error of the non-boundary edge as the adjusted deleted error of the non-boundary edge.
  • the simplified device of the three-dimensional mesh model in the embodiments of the present application may be used to execute the technical solutions of the above-described method embodiments, and the implementation principles and technical effects are similar, and are not repeated here.
  • based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program code .
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be from a website site, computer, server or data center Transmit to another website, computer, server or data center via wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, wireless, microwave, etc.).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device including a server, a data center, and the like integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, Solid State Disk (SSD)) or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and device for simplifying a three-dimensional mesh model. The method includes: obtaining N non-boundary edges of the three-dimensional mesh model, and determining a deleted error of each of the N non-boundary edges (S101); determining a deletion weight of each non-boundary edge according to feature parameters of the two vertices of the non-boundary edge (S102); adjusting the deleted error of each non-boundary edge according to the deletion weight of the non-boundary edge (S103); and simplifying the three-dimensional mesh model according to the adjusted deleted error of each non-boundary edge (S104). The quality of the simplification of the three-dimensional mesh model is improved.

Description

三维网格模型的简化方法与装置 技术领域
本发明实施例涉及图像处理技术领域,尤其涉及一种三维网格模型的简化方法与装置。
背景技术
随着计算机运算能力和显卡性能的提升,精细三维模型由于可以全方位精确的展示真实物体的各个细节,极大的提升了三维模型的实用性和观赏性,得到了广泛应用。但是,一个精细的大场景三维模型往往包含海量的三维顶点和三角面,巨大的数据量导致了三维模型在渲染过程中需要消耗大量的显卡资源,使得渲染速度较慢,导致人机交互常伴有迟钝感,削弱了精细三维模型的更大规模的普及应用。
发明内容
本发明实施例提供一种三维网格模型的简化方法与装置,以提高三维网格模型简化的准确性。
第一方面,本申请提供一种三维网格模型的简化方法,包括:
获取三维网格模型的N个非边界边,并确定所述N个非边界边中每个非边界边的被删除误差;
根据每个所述非边界边的两个顶点的特征参数,确定每个所述非边界边的删除权重;
根据每个所述非边界边的删除权重,调整每个所述非边界边的被删除误差;
根据调整后的每个所述非边界边的被删除误差,简化所述三维网格模型。
第二方面,本申请提供一种三维网格模型的简化装置,包括:
存储器,用于存储计算机程序;
处理器,用于执行所述计算机程序,具体用于:
获取三维网格模型的N个非边界边,并确定所述N个非边界边中每个非 边界边的被删除误差;
根据每个所述非边界边的两个顶点的特征参数,确定每个所述非边界边的删除权重;
根据每个所述非边界边的删除权重,调整每个所述非边界边的被删除误差;
根据调整后的每个所述非边界边的被删除误差,简化所述三维网格模型。
本申请实施例的三维网格模型的简化方法与装置,通过获取三维网格模型的N个非边界边,并确定所述N个非边界边中每个非边界边的被删除误差;根据每个所述非边界边的两个顶点的特征参数,确定每个所述非边界边的删除权重;根据每个所述非边界边的删除权重,调整每个所述非边界边的被删除误差;根据调整后的每个所述非边界边的被删除误差,简化所述三维网格模型。即本申请实施例,考虑到非边界边的顶点的特征参数对三维网格模型简化的影响,进而基于非边界边的顶点的特征参数确定非边界边的删除权重,使用该删除权重来调整非边界边的被删除误差,使得调整后的被删除误差更新符合实际情况,进而提高了三维网格模型简化的质量。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例一提供的三维网格模型的简化方法的流程图;
图2为本申请实施例二提供的三维网格模型的简化方法的流程图;
图3为本申请实施例三提供的三维网格模型的简化方法的流程图;
图4为本申请实施例涉及的删除非边界边的一种示例图;
图5为本申请实施例四提供的三维网格模型的简化方法的流程图;
图6为本申请实施例涉及的模型对象与相机位置关系图;
图7为本申请实施例涉及未简化前的原始三维模型;
图8为使用本申请实施例的方法简化后的原始三维模型;
图9为使用已有方法简化后的原始三维模型;
图10为本申请一实施例提供的三维网格模型的简化装置的结构图。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
随着计算机运算能力和显卡性能的提升,精细三维模型数量越来越多,这些模型可以360°精确的展示真实物体的各个细节,极大的提升了三维模型的实用性和观赏性。但是一个精细的大场景三维模型往往包含海量的三维顶点和三角面,巨大的数据量导致了三维模型在渲染过程中需要消耗大量的显卡资源,使得渲染速度较慢,导致人机交互常伴有迟钝感,削弱了精细三维模型的更大规模的普及应用。
现有的网格简化技术单纯利用网格的几何信息进行简化,例如,通过对顶点进行聚类,将归为一类的顶点合并成一个点,并更新拓扑信息,而丢弃了网格生成时候的一些非常实用的信息,例如,丢弃了三角网格顶点离照相机距离的远近、以及区域的平面度(曲率)等对三维网格模型的影响,进而使得简化后的三维网格模型与实物差别较大,无法准确反映实物的特征信息。
本申请实施例提供的三维网格模型的简化方法,基于非边界边的两个顶点的特征参数(例如,三角网格顶点离照相机距离、顶点的曲率和顶点的颜色值等),确定每个所述非边界边的删除权重,并基于该非边界边的删除权重,来调整非边界边的被删除误差,使得非边界边的被删除误差更加符合实际,保留了物体的细节特征,使得简化后的三维网格模型质量更高。
图1为本申请实施例一提供的三维网格模型的简化方法的流程图,如图1所示,本实施例的方法可以包括:
S101、获取三维网格模型的N个非边界边,并确定所述N个非边界边中每个非边界边的被删除误差。
本实施例的执行主体可以是具有简化三维网格模型功能的装置,例如,三维网格模型的简化装置,以下简称简化装置。该简化装置可以通过 软件和/或硬件的方式实现。
可选的,本实施例的简化装置可以是电子设备的一部分,例如为电子设备的处理器。
可选的,本实施例的简化装置还可以是单独的电子设备。
本实施例的电子设备可以是智能手机、台式电脑、笔记本电脑、智能手环、增强现实(Augmented Reality,AR)设备、虚拟应用(Virtual Application,VA)设备等电子设备。
本申请实施例的三维网格模型可以是基于采集的多张图片生成的,该多张图片可以是航拍无人机采集的,也可以是用户使用一个或多个相机采集的。
获得三维网格模型后,对该三维网格模型进行分析,获得三维网格模型的边界。其中,三维网格模型的边界包括边界边和非边界边,其中,边界边为被一个三角面拥有的边,非边界边为被至少两个三角面拥有的边。由于三维网格模型的边界边删除会影响三维网格模型的完整性,因此,在简化三维网格模型时,主要以三维网格模型的非边界边为研究对象。
例如,在一种实施方式中,三维网格模型为三角面网格模型,该网格模型包括一定数量的顶点和三角面。获取该三角形网格模型中所有边,并计算每条边被几个三角面所共有,例如可以遍历所有三角面,计算包含有该条边的三角面的数量,若包含有该条边的三角面的数量为一个,则该条边为边界边,若包含有该条边的三角面的数量为至少两个,则该条边为非边界边;获取所有非边界边,例如,获得三维网格模型的N个非边界边之后,确定该N个非边界边中每个非边界边的被删除误差。可以理解,三维网格模型也可以为其他形状的网格模型,如梯形网格模型等,本实施例仅为示例性说明,在此不作限定。
非边界边的被删除误差表示,删除该非边界边对整个三维网格模型带来的变化量。非边界边的被删除误差越大,说明该非边界边对三维网格模型越重要,相应的,该非边界边被删除的可能性越小。进一步地,该非变界边的被删除误差可以通过计算删除该非变界边后,新生成的顶点到原三角面的距离来获取;在另一种实施方式中,也可以根据该非边界边的中点或其他合适的点到原三角面的距离来获取非变界边的被删除误差。例如,当新生成的顶点或中点或其他合适的点到原三角面的距离之和最小时,该非变界边的被删 除误差最小。
可选的,本申请实施例可以基于二次误差测度的方法,确定每个非边界边的被删除误差,具体包括如下步骤:
以一条非边界边l为例,该非边界边l为(v 1,v 2),其中v 1,v 2为该非变界边l的两个顶点,该非边界边l所在的三角面为(v 1,v 2,v 3)。
1、计算顶点v 1、v 2的二次误差测量矩阵,即Q矩阵。具体地,Q矩阵可以反映顶点v 1、v 2到周围三角面的距离平方和。本实施例以顶点v 1为例进行说明,顶点v 2参照即可。
首先，计算顶点$v_1$的单位法向量，法向量用该顶点$v_1$所在三角面的法向量替代，面的法向量计算方式是：$\vec{n} = (v_2 - v_1) \times (v_3 - v_1)$，其中$\times$表示向量叉乘。得到$\vec{n}$后将其单位化。
顶点$v_1$的坐标是已知的，设顶点$v_1$的坐标为：$p=(x,y,z,1)^T$，设有一个三维平面：$q=(a,b,c,d)^T$，其满足：$ax+by+cz+d=0$，平面的系数满足：$(a,b,c)^T=\vec{n}$，$d=-(ax+by+cz)$。
由于,法向量n是三维的,根据上述方法可以一一对应得到系数a,b,c,以及系数d。
顶点$v_1$的$Q_1$矩阵为：
$$Q_1 = qq^T = \begin{bmatrix} a^2 & ab & ac & ad \\ ab & b^2 & bc & bd \\ ac & bc & c^2 & cd \\ ad & bd & cd & d^2 \end{bmatrix}$$
参照上述方法,可以获得顶点v 2的Q 2矩阵。
2、计算非边界边l的被删除误差:
根据上述步骤,获得顶点v 1和顶点v 2的Q矩阵分别为Q 1和Q 2,删除该非边界边l后生成新的顶点为p,p的Q矩阵为Q p=(Q 1+Q 2)。
新生成的顶点p坐标(齐次坐标表示)的计算方法是求解如下方程,
$$\begin{bmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{12} & q_{22} & q_{23} & q_{24} \\ q_{13} & q_{23} & q_{33} & q_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix} p = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$
其中,q ij为矩阵Q p中对应的元素。如果上述系数矩阵可逆,则p就是该方程的唯一解,计算得到的唯一解p为新生成的到周围三角面的距离平方和最小的顶点,此时可以根据新生成的顶点到原三角面的距离来获取该非变界边l 的被删除误差。如果系数矩阵不可逆,则该方程有无数解,此种情况令
$$p = \frac{1}{2}(v_1 + v_2)$$
亦即,获取该非边界边的中点,此时可以根据非边界边的中点来获取该非边界边l的被删除误差。
进一步地,该非边界边l被删除误差是:p TQ pp。
根据上述方法,可以获得N条非边界边中每条非边界边的被删除误差。
S102、根据每个所述非边界边的两个顶点的特征参数,确定每个所述非边界边的删除权重。
可选的,本申请实施例涉及的非边界边的两个顶点的特征参数包括但不限于以下一种或多种:所述顶点到相机的距离、所述顶点的曲率和所述顶点的颜色值。
例如,在利用航拍影像生成三维模型的过程中,三角网格是通过稠密点云生成的,而这些稠密点云又是通过照片之间的信息提取的,每张照片代表一个相机,相机的真实位置是可以计算的,因此在简化网格的过程中,本申请实施例考虑每个顶点离相机的位置远近这一因素。离照相机近的物体应该是清晰细节丰富的,离相机远的物体细节部分是稍显模糊的,对应到实际三维网格模型中就是,离照相机近的场景应该细节更多,即应保留更多的三角面,而离照相机远的场景可以适当少保留一些三角面。这样,可以使用非边界边的顶点到相机的距离,来设置非边界边的删除权重,以使简化后的三维网格模型保留更多细节信息。
例如,根据三个点就可以描述一个平面,而复杂的形状结构需要更多的点来描述这个信息,可以在场景中找到平面区域和非平面区域,平面区域的三角边权重设置的很小而复杂形状区域权重加大,这样平面区域就会有更多的三角面被删除而复杂区域三角面在简化过程中会最大程度的被保留。这样,可以使用非边界边的顶点的曲率,来设置非边界边的删除权重,以使简化后的三维网格模型保留更多重要信息。
例如,顶点周围颜色越一致,表明这些顶点在平面上的概率越大,可被删除的概率越大。这样,利用顶点周围的颜色一致性设置非边界边的删除权重。
可以理解,本申请实施例涉及的非边界边的两个顶点的特征参数不限于 上述实施例,例如,顶点的特征参数还可以设置为顶点所在三角面的形状质量,也就是说,顶点所在三角面的形状质量越好,顶点法向量的一致性越高,可被删除的概率越大。这样,利用顶点所在三角面的形状质量设置非边界边的删除权重。
可选的,本步骤中,可以基于顶点的同一个特征参数,来确定每个非边界边的删除权重,例如,基于顶点到相机的距离,来确定每个非边界边的删除权重。
可选的,本步骤中,可以基于顶点的不同特征参数,来确定每个非边界边的删除权重,例如,基于一部分顶点到相机的距离,来确定对应部分非边界边的删除权重,基于另一部分顶点的曲率,来确定对应部分非边界边的删除权重。
即本实施例中,计算每条非边界边的删除权重所基于的顶点的特征参数可以相同,也可以不同,具体根据实际情况确定,本实施对此不做限制。
可选的,可以基于顶点的多个特征参数,来确定每条非边界边的删除权重。例如,计算每个特征参数对应的删除权重,将每个特性参数对应的删除权重进行叠加,作为非边界边的删除权重。
S103、根据每个所述非边界边的删除权重,调整每个所述非边界边的被删除误差。
基于上述步骤,获得每个非边界边的被删除误差和删除权重后,使用非边界边的删除权重来调整该非边界边的被删除误差。
在一种示例中,非边界边的被删除误差减去该非边界边的删除权重,作为调整后的被删除误差。
在另一种示例中,非边界边的被删除误差加上该非边界边的删除权重,作为调整后的被删除误差。
在另一种示例中,非边界边的被删除误差乘以该非边界边的删除权重,作为调整后的被删除误差。
可选的,还可以基于其他的方式,使用非边界边的删除权重,调整该非边界边的被删除误差。
S104、根据调整后的每个所述非边界边的被删除误差,简化所述三维网格模型。
根据上述步骤对非边界边的被删除误差进行调整,使得调整后的被删除误差更加符合实际。这样,基于符合实际的非边界边的被删除误差来简化三维网格模型,可以提高三维网格模型简化的质量。
其中,基于调整后的每个所述非边界边的被删除误差,简化三维网格模型,可以是对调整后的每个所述非边界边的被删除误差进行排序,删除被删除误差小的非边界边,实现对三维网格模型的简化。
本申请实施例的方法,通过获取三维网格模型的N个非边界边,并确定所述N个非边界边中每个非边界边的被删除误差;根据每个所述非边界边的两个顶点的特征参数,确定每个所述非边界边的删除权重;根据每个所述非边界边的删除权重,调整每个所述非边界边的被删除误差;根据调整后的每个所述非边界边的被删除误差,简化所述三维网格模型。即本申请实施例,考虑到非边界边的顶点的特征参数对三维网格模型简化的影响,进而基于非边界边的顶点的特征参数确定非边界边的删除权重,使用该删除权重来调整非边界边的被删除误差,使得调整后的被删除误差更新符合实际情况,进而提高了三维网格模型简化的质量。
图2为本申请实施例二提供的三维网格模型的简化方法的流程图,在上述实施例的基础上,本申请实施例涉及的是根据每个所述非边界边的两个顶点的特征参数,确定每个所述非边界边的删除权重的具体过程。如图2所示,上述S102可以包括:
S201、根据所述非边界边的两个顶点的特征参数,确定所述两个顶点中每个顶点的删除权重。
S202、根据所述非边界边的每个顶点的删除权重,确定所述非边界边的删除权重。
上述S201与S202为上述S102的一种具体的实现方式。
下面通过以下示例对上述S201与S202进行描述,其中,上述S201与S202包括但不限于以下示例。
在第一种示例中,顶点的特征参数包括所述顶点到相机的距离,此时,上述S201可以包括:获取所述非边界边的每个顶点到所有相机的距离中的最小距离;根据每个顶点对应的最小距离,确定每个顶点的删除权重,其中, 顶点对应的最小距离越小,则顶点的删除权重越大。
可选的,上述获取所述非边界边的每个顶点到所有相机的距离中的最小距离的方法可以是:首先,获取所有相机在所述三维网格模型的坐标系下的位姿,例如,通过基于运动的重建(structure from motion)方法获得所有相机在三维模型坐标系下的位姿。接着,根据所有相机的位姿,确定所述顶点到所有相机中心的距离,将所有距离中的最小值作为所述顶点对应的最小距离。
可选的,还可以通过其他已有的方式获得每个顶点到所有相机的距离中的最小距离。
在此以一条非边界边l为例进行说明,其他非边界边参照即可。
参照上述例子,假设非边界边l包括顶点v 1和顶点v 2,获取顶点v 1到所有相机的距离,获得所有距离中的最小值,记为d 1。同理,获取顶点v 2到所有相机的距离,获得所有距离中的最小值,记为d 2
接着,根据顶点v 1对应的最小距离d 1来确定顶点v 1的删除权重,根据顶点v 2对应的最小距离d 2来确定顶点v 2的删除权重。其中,顶点离相机距离越远,权重越小,越容易在网格简化中被删除。即顶点对应的最小距离越小,对应的删除权重越大,例如,d 1越小,则顶点v 1的删除权重越大,d 2越小,则顶点v 2的删除权重越大,在网格简化的过程中越不容易被删除。
可选的,将所述顶点对应的最小距离的平方值的倒数,作为所述顶点的删除权重。例如,将1/d 1 2作为顶点v 1的删除权重,将1/d 2 2作为顶点v 2的删除权重。
即本申请实施例,当顶点的特征参数包括所述顶点到相机的距离时,可以通过上述方法,获得非边界边的两个顶点中每个顶点的删除权重。
接着,根据所述非边界边的每个顶点的删除权重,确定所述非边界边的删除权重。
可选的,将非边界边的两个顶点的删除权重的平均值作为非边界边的删除权重。例如,非边界边1的删除权重为1/d 2 1和1/d 2 2的平均值。该平均值可以是加权平均,也可以是数值平均。
在第二种示例中,若顶点的特征参数包括所述顶点的曲率,则上述S201可以包括:获取所述非边界边的每个顶点的曲率;根据每个顶点的曲率,确 定每个顶点的删除权重,其中,顶点的曲率越大,所述顶点的删除权重越大。
继续以非边界边l为例进行说明,其他非边界边参照即可。
参照上述例子,假设非边界边l包括顶点v 1和顶点v 2,根据如下公式(3)确定顶点v 1和顶点v 2顶点的曲率。
（公式(3)：顶点曲率ρ的计算公式，原文以公式图像形式给出）
x,y,z为顶点的坐标,x′、x″分别为函数对应的一阶导数和二阶导数。曲率ρ的范围为[0,1],这样,可以获得顶点v 1和顶点v 2的曲率分别为ρ 1和ρ 2
接着,顶点v 1和顶点v 2的曲率,确定顶点v 1和顶点v 2的删除权重。其中,顶点的曲率越小,说明该顶点的弯曲程度越小,被删除的可能性越大,即,顶点的曲率越大,所述顶点的删除权重越大。例如,ρ 1越大,则顶点v 1的删除权重越大,ρ 2越大,则顶点v 2的删除权重越大,在网格简化的过程中越不容易被删除。
可选的,将所述顶点的曲率的平方值作为所述顶点的删除权重。例如,顶点v 1的删除权重为ρ 1 2,顶点v 2的删除权重为ρ 2 2
即本申请实施例,当顶点的特征参数包括所述顶点曲率时,可以通过上述方法,获得非边界边的两个顶点中每个顶点的删除权重。
接着,根据所述非边界边的每个顶点的删除权重,确定所述非边界边的删除权重。
可选的,将非边界边的两个顶点的删除权重的平均值作为非边界边的删除权重。例如,非边界边1的删除权重为ρ 1 2和ρ 2 2的平均值。该平均值可以是加权平均,也可以是数值平均。
在第三种示例中,顶点的特征参数包括所述顶点的颜色值,此时,上述S201可以包括:获取所述非边界边的两个顶点的颜色值,以及所述两个顶点的周围顶点的颜色值;针对两个顶点中的每个顶点,根据该顶点颜色值与周围顶点的颜色值的方差,确定该顶点的删除权重,其中,所述该顶点颜色值与周围顶点的颜色值的方差越小,则该顶点的删除权重越大。
顶点的颜色与周围顶点的颜色越一致,表明这些顶点在平面上的概率越大,可被删除的概率越大。
继续以非边界边l为例进行说明,其他非边界边参照即可。
参照上述例子,假设非边界边l包括顶点v 1和顶点v 2,获取顶点v 1的颜色值,记为y 1,以及顶点v 1周围各顶点的颜色值。同理,获取顶点v 2的颜色值,记为y 2,以及顶点v 2周围各顶点的颜色值。
接着,确定顶点v 1的颜色值与顶点v 1周围各顶点的颜色值之间的方差,以及确定顶点v 2的颜色值与顶点v 2周围各顶点的颜色值之间的方差,由于顶点v 1与顶点v 2互为周围顶点,因此,也会确定顶点v 1的颜色值与顶点v 2的颜色值之间的方差。可选的,计算顶点的颜色值方差,可在RGB三通道上分别进行。
当上述各方差都很小(比如小于1)的时候则认为这些点颜色一致,则说明顶点v 1与顶点v 2在网格简化的过程中越容易被删除。即顶点颜色值与周围顶点的颜色值的方差越小,则该顶点的删除权重越小,越容易被删除。
可选的,将所述顶点的颜色值与周围各顶点的颜色值的方差,作为所述顶点的删除权重。
即本申请实施例,当顶点的特征参数包括所述顶点的颜色值时,可以通过上述方法,获得非边界边的两个顶点中每个顶点的删除权重。
可以理解,本申请实施例涉及的非边界边的两个顶点的特征参数不限于上述实施例,例如,顶点的特征参数还可以设置为顶点所在三角面的形状质量,也就是说,顶点所在三角面的形状质量越好,顶点法向量的一致性越高,可被删除的概率越大。这样,利用顶点所在三角面的形状质量获得非边界边的两个顶点中每个顶点的删除权重。接着,根据所述非边界边的每个顶点的删除权重,确定所述非边界边的删除权重。
可选的,将非边界边的两个顶点的删除权重的平均值作为非边界边的删除权重。
本申请实施例,针对顶点的特征参数为顶点到相机的距离、所述顶点的曲率和所述顶点的颜色值的至少一种时,获得两个顶点中每个顶点的删除权重,进而根据非边界边的每个顶点的删除权重,确定非边界边的删除权重,实现对非边界边的删除权重的准确确定。
针对上述S104中根据调整后的每个所述非边界边的被删除误差,简化所述三维网格模型的方式,本申请实施例提供图3和图5两种方式。
图3为本申请实施例三提供的三维网格模型的简化方法的流程图,在上述实施例的基础上,如图3所示,上述S104可以包括:
S301、从所述N个非边界边的调整后的被删除误差中,获取小于第一预设阈值的M个非边界边的被删除误差,其中,所述M为不大于所述N的正整数。
本申请实施例对N个非边界边的调整后的被删除误差进行排序,该排序可以是从大到小排序,也可以是从小到大排序。
接着,从排序的N个非边界边的调整后的被删除误差中,获得小于第一预设阈值的M个非边界边的被删除误差。
S302、针对所述M个非边界边中的每个非边界边,删除该非边界边的两个顶点,生成一个新的顶点。
S303、将所述新的顶点与周围顶点进行连接。
上述获得被删除误差小于第一预设阈值的M个非边界边,为在网格简化过程中需要删除的非边界边。
假设上述所述的非边界边l属于上述M个非边界边,如图4所示,删除非边界边l的过程为,删除非边界边l的两个顶点v 1和v 2合并成一个新的顶点p。接着,将新的顶点p与其周围顶点进行连接,完成非边界边l的删除。M个非边界边中其他非边界边的删除过程与上述非边界边l的删除过程相同,参照即可。
本申请实施例,通过将调整后的被删除误差小于第一预设阈值的M个非边界边进行删除,实现网格的简化,使得简化后的网格更加符合实际情况,质量较高。并且整个删除过程简单,一次可以完成。
图5为本申请实施例四提供的三维网格模型的简化方法的流程图,在上述实施例的基础上,如图5所示,上述S104可以包括:
S501、根据调整后的被删除误差,对所述N个非边界边进行排序;
S502、删除所述N个非边界边中被删除误差最小的非边界边的两个顶点,生成一个新的顶点。
S503、将所述新的顶点与周围顶点进行连接。
本申请实施例是逐一删除被删除误差最小的非边界边,具体是根据N个 非边界边的调整后的被删除误差,对N个非边界边进行排序。接着删除被删除误差最小的非边界边。
S504、获取所述三维网格模型的三角面数量。
S505、判断所述三角面数量是否达到第二预设阈值。
S506、若否,确定所述新的顶点与周围顶点形成的新的非边界边的被删除误差,并对当前所有非边界边的被删除误差进行排序,删除被删除误差最小的非边界边,返回执行上述步骤S504。
S507、若是,则结束。
接着,获取当前三维网格模型的三角面数,例如,每减少一条非边界边减少两个三角面,这样,将三维网格模型的初始三角面数减去当前删除非边界边减少的三角面,获得当前剩余的三角面数量。可选的,也可以遍历获取当前剩余的三角面数量。
判断当前三维网格模型的三角面数是否达到第二预设阈值,若是,则简化过程结束。若否,则确定所述新的顶点与周围顶点形成的新的非边界边的被删除误差,并对当前所有非边界边的被删除误差进行排序,删除被删除误差最小的非边界边。接着,返回执行上述S504至S506的步骤,直到三维网格模型的三角面数量达到第二预设阈值为止。
本申请实施例的方法,通过逐一删除被删除误差最小的非边界边,进一步提高了三维网格模型简化的精度。
在一种示例中,所述删除非边界边的两个顶点,生成一个新的顶点,可以包括:
判断删除所述非边界边是否会发生三角面翻转或生成尖锐三角面;若否,则删除该非边界边;根据该非边界边的两个顶点,生成一个新的顶点。
即本实施例,为了保证删除非边界边后网格的完好性和准确性,则每次删除非边界边时,判断该非边界边时会不会造成网格形状的突变。例如,删除该非边界边是否会发生三角面翻转或生成尖锐三角面,若会造成三角面翻转或生成尖锐三角面时,则不能删除该非边界边,若不会产生上述现象,则可以删除该非边界边,进而提高了非边界边删除的可靠性,保证了网格模型的完好性。
可选的,上述根据非边界边的两个顶点,生成一个新的顶点,可以包括:获取该非边界边的两个顶点的二次误差测量矩阵;根据所述两个顶点的二次误差测量矩阵的和矩阵,确定所述新的顶点的坐标。
继续以非边界边l为例进行说明,非边界边l的两个顶点为v 1和v 2,顶点v 1的二次误差测量矩阵即为上述顶点v 1的Q 1矩阵,顶点v2的二次误差测量矩阵即为上述顶点v 2的Q 2矩阵,删除到非边界边l后生成的新的顶点为p,p的二次误差测量矩阵为Q p=(Q 1+Q 2)。
新生成的顶点p坐标(齐次坐标表示)的计算方法是求解如下方程,
$$\begin{bmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{12} & q_{22} & q_{23} & q_{24} \\ q_{13} & q_{23} & q_{33} & q_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix} p = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$
其中,q ij为矩阵Q p中对应的元素。
可选的,当上述系数矩阵可逆,即矩阵Q p为可逆矩阵时,p就是该方程的唯一解。
可选的,当上述系数矩阵不可逆,即矩阵Q p为不可逆矩阵时,则该方程有无数解,此种情况令
$$p = \frac{1}{2}(v_1 + v_2)$$
亦即取该非边界边的中点。
根据上述步骤,可以确定删除非边界边l后生成的新的顶点p的坐标值。其他被删除的非边界边生成的新的顶点的坐标参照上述描述即可。
下面结合实例,说明本申请实施例提供的三维网格模型的简化方法的效果:
图6展示了模型对象(石头貔貅)和相机的位置关系,可以看到相机是环绕模型进行拍摄的,也就是貔貅离相机近,周围草地马路离相机远。图7展示的是未简化前的原始三维模型(只截取了其中一部分),可以看到模型三角面密集,细节丰富。加入了权重的简化结果如图8所示,未加权重的简化结果如图9所示(MeshLab decimate结果),其中图8和图9简化后三角面的数量一致。
如图8所示,本实施例的方法所得网格在关注区域三角面数多,细节更丰富,而周边不关注的地方网格则稀疏。原因就是因为权重的加入,模型中 心被删除的边少而模型外围删除的边多。
如图9所示,没有加入权重的模型中心和周围都是按同一准则进行边的删除,因此模型中心被删除的边比较多。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:只读内存(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
图10为本申请一实施例提供的三维网格模型的简化装置的结构图,该三维网格模型的简化装置100包括:存储器110和处理器120。
存储器110,用于存储计算机程序;
处理器120,用于执行所述计算机程序,具体用于:
获取三维网格模型的N个非边界边,并确定所述N个非边界边中每个非边界边的被删除误差;
根据每个所述非边界边的两个顶点的特征参数,确定每个所述非边界边的删除权重;
根据每个所述非边界边的删除权重,调整每个所述非边界边的被删除误差;
根据调整后的每个所述非边界边的被删除误差,简化所述三维网格模型。
本申请实施例的三维网格模型的简化装置,可以用于执行上述所示方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
在一种可能的实现方式中,所述处理器120,具体用于根据所述非边界边的两个顶点的特征参数,确定所述两个顶点中每个顶点的删除权重;并根据所述非边界边的每个顶点的删除权重,确定所述非边界边的删除权重。
在另一种可能的实现方式中,所述顶点的特征参数包括以下任意一种:所述顶点到相机的距离、所述顶点的曲率和所述顶点的颜色值。
在另一种可能的实现方式中,所述顶点的特征参数包括所述顶点到相机的距离时,所述处理器120,具体用于获取所述非边界边的每个顶点到所有 相机的距离中的最小距离;并根据每个顶点对应的最小距离,确定每个顶点的删除权重,其中,顶点对应的最小距离越小,则顶点的删除权重越大。
在另一种可能的实现方式中,所述处理器120,具体用于将所述顶点对应的最小距离的平方值的倒数,作为所述顶点的删除权重。
在另一种可能的实现方式中,所述处理器120,具体用于获取所有相机在所述三维网格模型的坐标系下的位姿;并根据所有相机的位姿,确定所述顶点到所有相机中心的距离,将所有距离中的最小值作为所述顶点对应的最小距离。
在另一种可能的实现方式中,所述顶点的特征参数包括所述顶点的曲率时,所述处理器120,具体用于获取所述非边界边的每个顶点的曲率;并根据每个顶点的曲率,确定每个顶点的删除权重,其中,顶点的曲率越大,所述顶点的删除权重越大。
在另一种可能的实现方式中,所述处理器120,具体用于将所述顶点的曲率的平方值作为所述顶点的删除权重。
在另一种可能的实现方式中,所述顶点的特征参数包括所述顶点的颜色值时,所述处理器120,具体用于获取所述非边界边的两个顶点的颜色值,以及所述两个顶点的周围顶点的颜色值;并针对两个顶点中的每个顶点,根据该顶点颜色值与周围顶点的颜色值的方差,确定该顶点的删除权重,其中,所述该顶点颜色值与周围顶点的颜色值的方差越小,则该顶点的删除权重越大。在另一种可能的实现方式中,所述处理器120,具体用于从所述N个非边界边的调整后的被删除误差中,获取小于第一预设阈值的M个非边界边的被删除误差,其中,所述M为不大于所述N的正整数;针对所述M个非边界边中的每个非边界边,删除该非边界边的两个顶点,生成一个新的顶点;将所述新的顶点与周围顶点进行连接。
在另一种可能的实现方式中,所述处理器120,具体用于根据调整后的被删除误差,对所述N个非边界边进行排序;删除所述N个非边界边中的被删除误差最小的非边界边的两个顶点,生成一个新的顶点;将所述新的顶点与周围顶点进行连接。
在另一种可能的实现方式中,所述处理器120,还具体用于判断删除所述非边界边是否会发生三角面翻转或生成尖锐三角面;若否,则删除所述非 边界边;根据所述非边界边的两个顶点,生成一个新的顶点。
在另一种可能的实现方式中,所述处理器120,具体用于获取所述非边界边的两个顶点的二次误差测量矩阵;根据所述两个顶点的二次误差测量矩阵的和矩阵,确定所述新的顶点的坐标。
在另一种可能的实现方式中,所述处理器120,还具体用于若所述和矩阵为不可逆矩阵,则根据所述两个顶点的坐标确定所述新的顶点的坐标。
在另一种可能的实现方式中,所述处理器120,还具体用于获取所述三维网格模型的三角面数量;判断所述三角面数量是否达到第二预设阈值,若是,停止;若否,确定所述新的顶点与周围顶点形成的新的非边界边的被删除误差,并对当前所有非边界边的被删除误差进行排序,删除被删除误差最小的非边界边,直至所述三角面数量达到第二预设阈值。
在另一种可能的实现方式中,所述处理器120,具体用于将所述非边界边的删除权重与所述非边界边的被删除误差的乘积,作为调整后的所述非边界边的被删除误差。
本申请实施例的三维网格模型的简化装置,可以用于执行上述所示方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或 者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (33)

  1. A method for simplifying a three-dimensional mesh model, comprising:
    obtaining N non-boundary edges of a three-dimensional mesh model, and determining a deletion error of each of the N non-boundary edges;
    determining a deletion weight of each non-boundary edge according to feature parameters of two vertices of the non-boundary edge;
    adjusting the deletion error of each non-boundary edge according to the deletion weight of the non-boundary edge; and
    simplifying the three-dimensional mesh model according to the adjusted deletion error of each non-boundary edge.
  2. The method according to claim 1, wherein determining the deletion weight of each non-boundary edge according to the feature parameters of the two vertices of the non-boundary edge comprises:
    determining a deletion weight of each of the two vertices according to the feature parameters of the two vertices of the non-boundary edge; and
    determining the deletion weight of the non-boundary edge according to the deletion weight of each vertex of the non-boundary edge.
  3. The method according to claim 2, wherein the feature parameter of a vertex includes any one of the following: a distance from the vertex to a camera, a curvature of the vertex, and a color value of the vertex.
  4. The method according to claim 3, wherein the feature parameter of the vertex includes the distance from the vertex to a camera, and determining the deletion weight of each of the two vertices according to the feature parameters of the two vertices of the non-boundary edge comprises:
    obtaining, for each vertex of the non-boundary edge, a minimum distance among distances from the vertex to all cameras; and
    determining the deletion weight of each vertex according to the minimum distance corresponding to the vertex, wherein the smaller the minimum distance corresponding to a vertex, the larger the deletion weight of the vertex.
  5. The method according to claim 4, wherein determining the deletion weight of each vertex according to the minimum distance corresponding to the vertex comprises:
    using the reciprocal of the square of the minimum distance corresponding to the vertex as the deletion weight of the vertex.
  6. The method according to claim 4 or 5, wherein obtaining, for each vertex of the non-boundary edge, the minimum distance among the distances from the vertex to all cameras comprises:
    obtaining poses of all cameras in a coordinate system of the three-dimensional mesh model; and
    determining distances from the vertex to all camera centers according to the poses of all cameras, and taking the minimum of all the distances as the minimum distance corresponding to the vertex.
  7. The method according to claim 3, wherein the feature parameter of the vertex includes the curvature of the vertex, and determining the deletion weight of each of the two vertices according to the feature parameters of the two vertices of the non-boundary edge comprises:
    obtaining the curvature of each vertex of the non-boundary edge; and
    determining the deletion weight of each vertex according to the curvature of the vertex, wherein the larger the curvature of a vertex, the larger the deletion weight of the vertex.
  8. The method according to claim 7, wherein determining the deletion weight of each vertex according to the curvature of the vertex comprises:
    using the square of the curvature of the vertex as the deletion weight of the vertex.
  9. The method according to claim 3, wherein the feature parameter of the vertex includes the color value of the vertex, and determining the deletion weight of each of the two vertices according to the feature parameters of the two vertices of the non-boundary edge comprises:
    obtaining color values of the two vertices of the non-boundary edge, and color values of vertices surrounding the two vertices; and
    determining, for each of the two vertices, the deletion weight of the vertex according to the variance between the color value of the vertex and the color values of its surrounding vertices, wherein the smaller the variance between the color value of the vertex and the color values of its surrounding vertices, the larger the deletion weight of the vertex.
  10. The method according to claim 1, wherein simplifying the three-dimensional mesh model according to the adjusted deletion error of each non-boundary edge comprises:
    obtaining, from the adjusted deletion errors of the N non-boundary edges, deletion errors of M non-boundary edges that are smaller than a first preset threshold, wherein M is a positive integer not greater than N;
    deleting, for each of the M non-boundary edges, the two vertices of the non-boundary edge and generating a new vertex; and
    connecting the new vertex with surrounding vertices.
  11. The method according to claim 1, wherein simplifying the three-dimensional mesh model according to the adjusted deletion error of each non-boundary edge comprises:
    sorting the N non-boundary edges according to the adjusted deletion errors;
    deleting the two vertices of the non-boundary edge with the smallest deletion error among the N non-boundary edges, and generating a new vertex; and
    connecting the new vertex with surrounding vertices.
  12. The method according to claim 10 or 11, wherein deleting the two vertices of a non-boundary edge and generating a new vertex comprises:
    determining whether deleting the non-boundary edge would cause a triangle flip or produce a sharp triangle;
    if not, deleting the non-boundary edge; and
    generating a new vertex from the two vertices of the non-boundary edge.
  13. The method according to claim 12, wherein generating a new vertex from the two vertices of the non-boundary edge comprises:
    obtaining quadric error metric matrices of the two vertices of the non-boundary edge; and
    determining coordinates of the new vertex according to a sum matrix of the quadric error metric matrices of the two vertices.
  14. The method according to claim 13, wherein determining the coordinates of the new vertex according to the sum matrix of the quadric error metric matrices of the two vertices comprises:
    determining the coordinates of the new vertex from the coordinates of the two vertices if the sum matrix is not invertible.
  15. The method according to claim 11, wherein after connecting the new vertex with the surrounding vertices, the method further comprises:
    obtaining a number of triangles of the three-dimensional mesh model;
    determining whether the number of triangles has reached a second preset threshold, and if so, stopping; and
    if not, determining deletion errors of new non-boundary edges formed by the new vertex and the surrounding vertices, sorting the deletion errors of all current non-boundary edges, and deleting the non-boundary edge with the smallest deletion error, until the number of triangles reaches the second preset threshold.
  16. The method according to claim 1, wherein adjusting the deletion error of each non-boundary edge according to the deletion weight of the non-boundary edge comprises:
    using the product of the deletion weight of the non-boundary edge and the deletion error of the non-boundary edge as the adjusted deletion error of the non-boundary edge.
  17. An apparatus for simplifying a three-dimensional mesh model, comprising:
    a memory configured to store a computer program; and
    a processor configured to execute the computer program, and specifically configured to:
    obtain N non-boundary edges of a three-dimensional mesh model, and determine a deletion error of each of the N non-boundary edges;
    determine a deletion weight of each non-boundary edge according to feature parameters of two vertices of the non-boundary edge;
    adjust the deletion error of each non-boundary edge according to the deletion weight of the non-boundary edge; and
    simplify the three-dimensional mesh model according to the adjusted deletion error of each non-boundary edge.
  18. The apparatus according to claim 17, wherein the processor is specifically configured to determine a deletion weight of each of the two vertices according to the feature parameters of the two vertices of the non-boundary edge, and determine the deletion weight of the non-boundary edge according to the deletion weight of each vertex of the non-boundary edge.
  19. The apparatus according to claim 18, wherein the feature parameter of a vertex includes any one of the following: a distance from the vertex to a camera, a curvature of the vertex, and a color value of the vertex.
  20. The apparatus according to claim 19, wherein the feature parameter of the vertex includes the distance from the vertex to a camera, and
    the processor is specifically configured to obtain, for each vertex of the non-boundary edge, a minimum distance among distances from the vertex to all cameras, and determine the deletion weight of each vertex according to the minimum distance corresponding to the vertex, wherein the smaller the minimum distance corresponding to a vertex, the larger the deletion weight of the vertex.
  21. The apparatus according to claim 20, wherein
    the processor is specifically configured to use the reciprocal of the square of the minimum distance corresponding to the vertex as the deletion weight of the vertex.
  22. The apparatus according to claim 20 or 21, wherein
    the processor is specifically configured to obtain poses of all cameras in a coordinate system of the three-dimensional mesh model, determine distances from the vertex to all camera centers according to the poses of all cameras, and take the minimum of all the distances as the minimum distance corresponding to the vertex.
  23. The apparatus according to claim 19, wherein the feature parameter of the vertex includes the curvature of the vertex, and
    the processor is specifically configured to obtain the curvature of each vertex of the non-boundary edge, and determine the deletion weight of each vertex according to the curvature of the vertex, wherein the larger the curvature of a vertex, the larger the deletion weight of the vertex.
  24. The apparatus according to claim 23, wherein the processor is specifically configured to use the square of the curvature of the vertex as the deletion weight of the vertex.
  25. The apparatus according to claim 19, wherein the feature parameter of the vertex includes the color value of the vertex, and
    the processor is specifically configured to obtain color values of the two vertices of the non-boundary edge and color values of vertices surrounding the two vertices, and determine, for each of the two vertices, the deletion weight of the vertex according to the variance between the color value of the vertex and the color values of its surrounding vertices, wherein the smaller the variance between the color value of the vertex and the color values of its surrounding vertices, the larger the deletion weight of the vertex.
  26. The apparatus according to claim 17, wherein
    the processor is specifically configured to obtain, from the adjusted deletion errors of the N non-boundary edges, deletion errors of M non-boundary edges that are smaller than a first preset threshold, wherein M is a positive integer not greater than N; delete, for each of the M non-boundary edges, the two vertices of the non-boundary edge and generate a new vertex; and connect the new vertex with surrounding vertices.
  27. The apparatus according to claim 17, wherein
    the processor is specifically configured to sort the N non-boundary edges according to the adjusted deletion errors; delete the two vertices of the non-boundary edge with the smallest deletion error among the N non-boundary edges and generate a new vertex; and connect the new vertex with surrounding vertices.
  28. The apparatus according to claim 26 or 27, wherein
    the processor is further specifically configured to determine whether deleting the non-boundary edge would cause a triangle flip or produce a sharp triangle; if not, delete the non-boundary edge; and generate a new vertex from the two vertices of the non-boundary edge.
  29. The apparatus according to claim 28, wherein
    the processor is specifically configured to obtain quadric error metric matrices of the two vertices of the non-boundary edge, and determine coordinates of the new vertex according to a sum matrix of the quadric error metric matrices of the two vertices.
  30. The apparatus according to claim 29, wherein the processor is further specifically configured to determine the coordinates of the new vertex from the coordinates of the two vertices if the sum matrix is not invertible.
  31. The apparatus according to claim 27, wherein
    the processor is further specifically configured to:
    obtain a number of triangles of the three-dimensional mesh model;
    determine whether the number of triangles has reached a second preset threshold, and if so, stop; and
    if not, determine deletion errors of new non-boundary edges formed by the new vertex and the surrounding vertices, sort the deletion errors of all current non-boundary edges, and delete the non-boundary edge with the smallest deletion error, until the number of triangles reaches the second preset threshold.
  32. The apparatus according to claim 17, wherein
    the processor is specifically configured to use the product of the deletion weight of the non-boundary edge and the deletion error of the non-boundary edge as the adjusted deletion error of the non-boundary edge.
  33. A computer storage medium, wherein the storage medium comprises computer instructions that, when executed by a computer, cause the computer to implement the method for simplifying a three-dimensional mesh model according to any one of claims 1 to 16.
PCT/CN2018/114550 2018-11-08 2018-11-08 三维网格模型的简化方法与装置 WO2020093307A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2018/114550 WO2020093307A1 (zh) 2018-11-08 2018-11-08 三维网格模型的简化方法与装置
CN201880041713.9A CN110832548A (zh) 2018-11-08 2018-11-08 三维网格模型的简化方法与装置
US17/307,124 US20210256763A1 (en) 2018-11-08 2021-05-04 Method and device for simplifying three-dimensional mesh model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/114550 WO2020093307A1 (zh) 2018-11-08 2018-11-08 三维网格模型的简化方法与装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/307,124 Continuation US20210256763A1 (en) 2018-11-08 2021-05-04 Method and device for simplifying three-dimensional mesh model

Publications (1)

Publication Number Publication Date
WO2020093307A1 true WO2020093307A1 (zh) 2020-05-14

Family

ID=69547507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/114550 WO2020093307A1 (zh) 2018-11-08 2018-11-08 三维网格模型的简化方法与装置

Country Status (3)

Country Link
US (1) US20210256763A1 (zh)
CN (1) CN110832548A (zh)
WO (1) WO2020093307A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113296542B (zh) * 2021-07-27 2021-10-01 成都睿铂科技有限责任公司 一种航拍拍摄点获取方法及系统
CN114299266B (zh) * 2021-12-27 2023-02-28 贝壳找房(北京)科技有限公司 模型的颜色调整方法、装置以及存储介质
CN117115391B (zh) * 2023-10-24 2024-01-12 中科云谷科技有限公司 模型更新方法、装置、计算机设备及计算机可读存储介质


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000113210A (ja) * 1998-10-02 2000-04-21 Nippon Telegr & Teleph Corp <Ntt> 3次元幾何データ簡略化方法及びそのプログラムを記録した記録媒体
US6853373B2 (en) * 2001-04-25 2005-02-08 Raindrop Geomagic, Inc. Methods, apparatus and computer program products for modeling three-dimensional colored objects
CN106408665A (zh) * 2016-10-25 2017-02-15 合肥东上多媒体科技有限公司 一种新的渐进网格生成方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012171312A1 (zh) * 2011-06-15 2012-12-20 中山大学 一种面向普适终端的三维网格模型连续多分辨率编码方法
CN102306394A (zh) * 2011-08-30 2012-01-04 北京理工大学 基于外观保持的三维模型简化方法
CN105761314A (zh) * 2016-03-16 2016-07-13 北京理工大学 一种基于显著颜色属性特征保持的模型简化方法
CN106408620A (zh) * 2016-09-08 2017-02-15 成都希盟泰克科技发展有限公司 基于压缩感知的三维网格模型数据处理方法

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114329668A (zh) * 2021-12-31 2022-04-12 西安交通大学 一种基于cad模型的rar网格优化方法及系统
CN114329668B (zh) * 2021-12-31 2024-01-16 西安交通大学 一种基于cad模型的rar网格优化方法及系统
CN114662110A (zh) * 2022-05-18 2022-06-24 杭州海康威视数字技术股份有限公司 一种网站检测方法、装置及电子设备
CN114662110B (zh) * 2022-05-18 2022-09-02 杭州海康威视数字技术股份有限公司 一种网站检测方法、装置及电子设备
CN117171867A (zh) * 2023-11-03 2023-12-05 临沂大学 一种建筑模型显示方法及系统
CN117171867B (zh) * 2023-11-03 2024-01-26 临沂大学 一种建筑模型显示方法及系统
CN117541751A (zh) * 2024-01-04 2024-02-09 支付宝(杭州)信息技术有限公司 一种三维模型降级方法及装置

Also Published As

Publication number Publication date
CN110832548A (zh) 2020-02-21
US20210256763A1 (en) 2021-08-19

Similar Documents

Publication Publication Date Title
WO2020093307A1 (zh) 三维网格模型的简化方法与装置
Zhang et al. Guided mesh normal filtering
CN108122277B (zh) 一种建模方法及装置
US8711143B2 (en) System and method for interactive image-based modeling of curved surfaces using single-view and multi-view feature curves
JP6400720B2 (ja) ビュー非依存色等化3dシーンテクスチャ処理
US8766979B2 (en) Three dimensional data compression
US10726599B2 (en) Realistic augmentation of images and videos with graphics
WO2021164550A1 (zh) 图像分类方法及装置
CN107578467B (zh) 一种医疗器械三维建模方法及装置
CN115439607A (zh) 一种三维重建方法、装置、电子设备及存储介质
KR20210087524A (ko) 포인트 클라우드 융합 방법, 장치, 전자 기기 및 컴퓨터 저장 매체
CN108492284B (zh) 用于确定图像的透视形状的方法和装置
WO2022170895A1 (zh) 图像处理方法和装置
CN115937546A (zh) 图像匹配、三维图像重建方法、装置、电子设备以及介质
JP2023547616A (ja) 品質アセスメントのための方法、装置、及びプログラム
JP6736422B2 (ja) 画像処理装置、画像処理の方法およびプログラム
WO2022217830A1 (zh) 虚拟对象构建方法及装置、存储介质
WO2020062547A1 (zh) 网格细分方法、图像处理设备及具有存储功能的装置
TWI711004B (zh) 圖片處理方法和裝置
US10861174B2 (en) Selective 3D registration
WO2022041119A1 (zh) 三维点云处理方法及装置
CN114529648A (zh) 模型展示方法、设备、装置、电子设备及存储介质
CN110310353B (zh) 一种bim模型数据的优化方法及系统
CN117635875B (zh) 一种三维重建方法、装置及终端
CN116012666B (zh) 图像生成、模型的训练、信息重建方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18939576

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18939576

Country of ref document: EP

Kind code of ref document: A1