WO2023093085A1 - Method and apparatus for reconstructing surface of object, and computer storage medium and computer program product - Google Patents


Info

Publication number
WO2023093085A1
WO2023093085A1 (PCT/CN2022/106501)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
tetrahedron
point
dense
points
Prior art date
Application number
PCT/CN2022/106501
Other languages
French (fr)
Chinese (zh)
Inventor
张壮
周立阳
姜翰青
Original Assignee
上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Publication of WO2023093085A1 publication Critical patent/WO2023093085A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Definitions

  • This disclosure claims priority to Chinese patent application No. 202111433346.7, filed on November 29, 2021 and entitled "Method, device and computer storage medium for reconstructing the surface of an object".
  • The entire content of that Chinese patent application is incorporated herein by reference.
  • the present disclosure relates to the field of image processing, and in particular to a method, device, computer storage medium and computer program product for reconstructing an object surface.
  • 3D reconstruction techniques are used to reconstruct the 3D surfaces of the entire scene or objects in the scene from multiple images of a scene.
  • With the development of technology, 3D reconstruction technology is being applied more and more widely.
  • Applications such as virtual reality, augmented reality and digital twins have placed higher requirements on the accuracy and detail-restoration ability of 3D reconstruction.
  • Noise points or abnormal points may appear during reconstruction. These points negatively affect the details of the reconstructed object surface, and thereby its accuracy.
  • Embodiments of the present disclosure provide a method, an apparatus, a computer storage medium and a computer program product for reconstructing an object surface, so as to reduce the influence of noise while retaining more details than the prior art.
  • A technical solution adopted by the embodiments of the present disclosure is a method for reconstructing the surface of an object. The method includes: using multiple images, obtained by shooting the scene from different shooting position points, to generate a corresponding dense point cloud; generating, based on the dense point cloud, a tetrahedral mesh corresponding to the dense point cloud, wherein the vertices of each tetrahedron in the tetrahedral mesh are coordinate points in the dense point cloud; generating, based on energy function minimization, a binary label for each tetrahedron, where the binary label characterizes whether the tetrahedron is inside or outside the object surface; and extracting the common faces between tetrahedra with different binary labels to reconstruct the object surface.
  • the energy function includes the sum of the first penalty term corresponding to the tetrahedron and the sum of the second penalty term corresponding to the common face, and the second penalty term includes the grid density weight.
  • The device includes: a dense point cloud generation module configured to generate a corresponding dense point cloud using multiple images obtained by shooting the scene from different shooting positions; a mesh generation module configured to generate, based on the dense point cloud, a tetrahedral mesh corresponding to the dense point cloud, wherein the vertices of each tetrahedron in the tetrahedral mesh are coordinate points in the dense point cloud; a tetrahedron labeling module configured to generate, based on energy function minimization, a binary label for each tetrahedron, wherein the binary label characterizes whether the tetrahedron is located inside or outside the surface of the object; and a surface extraction module configured to extract the common faces between tetrahedra with different binary labels to reconstruct the object surface.
  • the energy function includes
  • the device includes a processor and memory.
  • a computer program is stored in the memory.
  • the processor is used to execute the computer program to realize the steps of the above method.
  • the computer storage medium stores a computer program.
  • the steps of the above method are realized.
  • Another technical solution adopted by the embodiments of the present disclosure is a computer program product including computer readable code, i.e. computer programs or instructions, which, when run on an electronic device, cause the electronic device to execute the above method for reconstructing the surface of an object.
  • The embodiment of the present disclosure constructs the energy function to include the sum of the first penalty terms corresponding to the tetrahedra and the sum of the second penalty terms corresponding to the common faces, wherein the second penalty term includes the mesh density weight. Because noise points or abnormal points correspond to fewer shooting positions, the mesh density weight imposes a greater penalty on sparse parts of the mesh, and hence on noise points or abnormal points. This reduces their negative impact on details, retains more details, and improves the accuracy of the reconstructed object surface.
  • FIG. 1 is a schematic flowchart of an optional method for reconstructing an object surface provided by an embodiment of the present disclosure
  • Fig. 2 is a schematic flow chart of an embodiment of the method for generating a dense point cloud in the present disclosure
  • Fig. 3 is a schematic flow chart of another embodiment of the method for generating a dense point cloud in the present disclosure
  • Fig. 4 is a schematic diagram of another embodiment of the generated dense point cloud provided by the embodiment of the present disclosure.
  • Figure 5 is a schematic diagram of one embodiment of a tetrahedral mesh corresponding to the dense point cloud in Figure 4;
  • FIG. 6A shows a schematic diagram of a line of sight passing through a common face
  • FIG. 6B shows a schematic diagram of an optional common face weight
  • FIG. 6C shows a schematic diagram of another optional common face weight
  • Figure 7 shows a schematic view of the included angle between adjacent tetrahedrons
  • FIG. 8 shows a schematic diagram of an embodiment of a method for solving an energy function using a graph cut method
  • Figure 9 shows a schematic diagram of a directed graph corresponding to the tetrahedral grid in Figure 4.
  • Fig. 10 is a schematic structural diagram of an embodiment of the device for reconstructing the surface of an object according to the present disclosure
  • Fig. 11 is a schematic structural diagram of another embodiment of the device for reconstructing the surface of an object according to the present disclosure.
  • Fig. 12 is a schematic structural diagram of an embodiment of a computer storage medium of the present disclosure.
  • TSDF: truncated signed distance function
  • An embodiment of the present disclosure provides a method for reconstructing an object surface, and the method may be executed by a processor of an apparatus for reconstructing an object surface.
  • The device for reconstructing the surface of an object may be a server, a notebook computer, a tablet computer, a desktop computer, a smart TV, a set-top box, a mobile device (such as a mobile phone, a portable video player, a personal digital assistant, a dedicated messaging device or a portable game device) or other equipment with data processing capabilities.
  • FIG. 1 is a schematic flowchart of a first embodiment of a method for reconstructing an object surface according to the present disclosure.
  • the method for reconstructing an object surface according to an embodiment of the present disclosure includes the following steps.
  • The above-mentioned scenes may be large-area scenes with various terrain changes, buildings and objects, such as university campuses, scenic spots, mountains, commercial areas and residential areas; the scenes may also be indoor small-area scenes.
  • the scene may also be a single object such as a statue, an industrial product (for example, a single airplane). Embodiments of the present disclosure do not limit this.
  • the aforementioned scene may be a static or substantially static scene.
  • different shooting locations may include shooting locations such as land, air, and water.
  • The acquisition device can shoot the same scene from different shooting positions to obtain multiple images I i , and the device for reconstructing the object surface can obtain the multiple images I i from the acquisition device. The acquisition device may be built into the device for reconstructing the object surface, or may be a device other than the device for reconstructing the surface of the object, which is not limited by the embodiments of the present disclosure.
  • The acquisition device may include at least one of an unmanned aerial vehicle and a ground-based camera; it may include at least one of an acquisition platform or an acquisition instrument, which may be set as required; the embodiments of the present disclosure are not limited in this respect.
  • the multiple images I i captured by the acquisition device may be depth images or RGB (Red-Green-Blue) images.
  • the multiple images I i each include at least a part of the same scene.
  • At least two of the plurality of images I i contain scene overlapping parts.
  • The area of the signal tower may be the scene overlapping part of images I m and I n .
  • The more scene overlapping parts there are among the multiple images I i , the easier the surface reconstruction process in the embodiments of the present disclosure will be.
  • the acquisition device can store the camera pose corresponding to each image while taking multiple images I i , so that the device for reconstructing the object surface can obtain the camera pose corresponding to each image from the acquisition device;
  • the apparatus for reconstructing the surface of an object may also calculate the camera pose corresponding to each image based on the multiple images I i after acquiring the multiple images I i ; the embodiment of the present disclosure does not limit the acquisition method of the camera pose.
  • the camera pose may include a camera shooting position c i and a camera shooting angle.
  • the camera shooting position c i may also be referred to as shooting position point c i or camera position point information c i .
  • the shooting position point c i can be the relative information of the camera relative to the scene or feature in the image when shooting the corresponding image I i , or the space of the camera in the three-dimensional space (for example, the three-dimensional space corresponding to the physical reality space of the scene). coordinate.
  • the above camera pose can also be calculated from multiple images I i .
  • the embodiment of the present disclosure does not limit the manner of acquiring the pose of the camera.
  • the apparatus for reconstructing the object surface may generate corresponding dense point clouds P from the multiple images I i .
  • the dense point cloud P includes the first type of points p, and the first type of points can also be called coordinate points p; each coordinate point p is a point from at least one image among the above-mentioned plurality of images I i .
  • the point can be a sampling point or a point obtained by methods such as interpolation.
  • the coordinate point p includes spatial coordinate information of the point.
  • The dense point cloud P also includes the second type of point, the shooting location point c i ; that is to say, the dense point cloud P also includes a shooting location point c i corresponding to each of the plurality of images I i .
  • The device for reconstructing the object surface can perform feature matching on the multiple images I i to obtain the common view points of the shooting cameras of at least two of the images, and then obtain the coordinate points p of the dense point cloud P based on these common view points.
  • A common view point of at least two images is a point that can be seen by the shooting cameras of those at least two images.
  • A coordinate point p obtained from such a common view point is a coordinate point p derived from those at least two images.
  • the source image of the coordinate point in the dense point cloud P is queried, and the shooting location point corresponding to the source image is determined according to the source image.
  • the shooting location point is added to the dense point cloud to obtain an updated dense point cloud.
  • the updated dense point cloud P will include the coordinate point p and the shooting location point from the image.
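As a rough sketch of the data structure this implies (the class and field names are illustrative, not taken from the patent), the updated dense point cloud can be modeled as coordinate points carrying their source-image indices, plus one shooting location point per source image:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CoordinatePoint:
    """A first-type point p: a 3D sample plus the images I_i it was seen in."""
    xyz: tuple                 # spatial coordinate information of the point
    source_images: frozenset   # indices i of the source images I_i

@dataclass
class DensePointCloud:
    """Dense point cloud P holding coordinate points p and shooting location points c_i."""
    coordinate_points: list = field(default_factory=list)
    shooting_locations: dict = field(default_factory=dict)  # image index i -> c_i

    def update_with_shooting_locations(self, camera_positions):
        """Query each coordinate point's source images and add the corresponding
        shooting location points, yielding the 'updated' dense point cloud."""
        for p in self.coordinate_points:
            for i in p.source_images:
                self.shooting_locations[i] = camera_positions[i]

# usage: one point seen from two shooting positions (camera optical centres)
cloud = DensePointCloud()
cloud.coordinate_points.append(CoordinatePoint((1.0, 2.0, 0.5), frozenset({0, 1})))
cloud.update_with_shooting_locations({0: (0.0, 0.0, 0.0), 1: (3.0, 0.0, 0.0)})
```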
  • the shooting location point may be the optical center point of the shooting camera.
  • FIG. 2 shows an embodiment of the method for generating a dense point cloud in the present disclosure.
  • the method includes: S21-S23.
  • the color map may be an RGB image or other types of color map, which is not limited in the embodiment of the present disclosure.
  • the device for reconstructing the object surface may perform depth estimation for each of the multiple images to obtain a dense depth map corresponding to each image; wherein, the depth estimation Methods can include:
  • The implementation of estimating a dense depth map for each of the multiple images I i in S22 may include: using an incremental SFM (structure-from-motion) method to reconstruct a sparse map from the input multiple images I i .
  • the sparse map is a set of key points including at least a part or all of the SIFT features.
  • SIFT is used to describe scale invariance in the field of image processing, and key points can be detected in an image through SIFT.
  • The SIFT features are obtained by performing the SIFT transform on key points of local appearance on the object. Since SIFT features are invariant to image scale and rotation, and tolerate changes in illumination, noise and small viewing-angle changes well, they are highly distinctive, easy to identify, relatively easy to extract, and not easily misidentified.
  • The device for reconstructing the object surface needs only three or more SIFT features to calculate the position and orientation of key points.
  • The implementation of estimating a dense depth map for each of the multiple images I i in S22 may further include: dividing the images I i into clusters based on the matching relationship between SIFT features of at least some of the multiple images I i , and estimating a dense depth map for each image I i in each cluster.
  • the apparatus for reconstructing the object surface may classify images including as many identical SIFT features as possible into one cluster, thereby obtaining multiple clusters.
  • the number of multiple clusters can be set according to specific needs, which is not limited in this embodiment of the present disclosure.
  • SIFT feature matching is used to establish the correspondence between features, and feature matching can correspond to the images of the same feature in different views in space.
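The feature-matching step above can be sketched as nearest-neighbour descriptor matching with a ratio test; this is a minimal illustration (the function name, the toy 2-D "descriptors" and the ratio threshold are assumptions for the example, not values from the patent — real SIFT descriptors are 128-dimensional):

```python
import math

def match_features(desc_a, desc_b, ratio=0.8):
    """Match descriptors of image A to image B: for each descriptor in A,
    accept its nearest neighbour in B only if it is clearly closer than
    the second-nearest neighbour (ratio test), reducing false matches."""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # accept only if the best match is clearly better than the runner-up
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches

# toy 2-D "descriptors": feature 0 of A matches feature 1 of B, and so on
a = [(0.0, 0.0), (5.0, 5.0)]
b = [(9.0, 9.0), (0.1, 0.0), (5.0, 5.1)]
```

Each returned pair (i, j) is a correspondence between the same spatial feature seen in two different views, which is what the downstream depth estimation consumes.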
  • the generated dense depth map includes sample points from the corresponding image.
  • the dense depth map includes depth information of sampling points.
  • the dense depth map includes three-dimensional coordinate information of sampling points.
  • The density ("dense") of a dense depth map is relative to the sparseness of the sparse map.
  • The sparse map may contain only the key points corresponding to SIFT features.
  • The key points corresponding to SIFT features are relatively few and sparse, while the points in the dense depth map are denser, so more detail information can be preserved.
  • the device for reconstructing the object surface may perform fusion processing on multiple dense depth maps corresponding to multiple images to obtain a dense point cloud P.
  • the device for reconstructing the surface of an object may fuse the dense depth maps into a dense point cloud P according to the geometric shape and depth continuity relationship of images corresponding to different dense depth maps.
  • When point A and point B correspond to the same spatial coordinate position in the scene, or when the spatial distance between point A and point B in the scene is less than a spatial distance threshold, point A and point B can be fused into the same point p h in the dense point cloud P, and point p h then comes from both image I l and image I m .
  • The point p h can be seen simultaneously from the shooting location point c l of image I l and the shooting location point c m of image I m (the subscripts l and m indicate the one-to-one correspondence between an image and its shooting location point); equivalently, point p h can see the shooting location points c l and c m .
  • The point p h therefore corresponds to the shooting location point c l of image I l and the shooting location point c m of image I m . When the spatial distance between point A and point B in the scene is less than the spatial distance threshold, the point p h can be a point obtained by interpolation from point A and point B; the spatial distance threshold can be set as required, and this is not limited in the embodiments of the present disclosure.
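The fusion rule above (merge points closer than a threshold into one interpolated point that keeps the union of their source images) can be sketched as follows; the function name, the quadratic merge strategy and the default threshold are illustrative assumptions:

```python
def fuse_depth_points(points, threshold=0.05):
    """Fuse back-projected points from different dense depth maps: points
    closer than `threshold` collapse into one interpolated point that keeps
    the union of their source-image indices.
    Each point is (xyz, source_image_set)."""
    fused = []
    for xyz, srcs in points:
        for k, (fxyz, fsrcs) in enumerate(fused):
            if sum((a - b) ** 2 for a, b in zip(xyz, fxyz)) ** 0.5 < threshold:
                # interpolate (midpoint) and merge the source-image sets
                mid = tuple((a + b) / 2 for a, b in zip(xyz, fxyz))
                fused[k] = (mid, fsrcs | srcs)
                break
        else:
            fused.append((xyz, srcs))
    return fused

# two nearly coincident points from images 0 and 1 fuse; a distant point stays
pts = [((0.0, 0.0, 1.0), {0}), ((0.0, 0.0, 1.01), {1}), ((2.0, 0.0, 1.0), {1})]
out = fuse_depth_points(pts)
```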
  • some coordinate points p in the dense point cloud P can be seen from more than two shooting location points corresponding to more than two images. In some embodiments of the present disclosure, some coordinate points p in the dense point cloud P can only be seen from one shooting location point corresponding to one image.
  • the dense point cloud P is formed through at least two fusion steps.
  • the device for reconstructing the surface of an object may first fuse multiple dense depth maps of each cluster in multiple clusters into a cluster dense point cloud, and then combine multiple clusters corresponding to multiple clusters The dense point cloud is fused into a dense point cloud P for the entire scene.
  • The device for reconstructing the surface of an object can load only the dense depth maps of one or a few clusters into memory for fusion at a time, instead of loading all dense depth maps simultaneously, thereby reducing memory consumption during the generation of the dense point cloud P and increasing applicability to large-scale high-precision scene reconstruction, which improves the robustness of the reconstructed object surfaces.
  • The sampling points in the multiple images I i can be expressed in the form of coordinate points p of a 3D point cloud projected into 3D space, thus showing the 3D geometric relationship between the sampling points.
  • the dense point cloud P is a depth point cloud or a 3D point cloud corresponding to multiple images I i .
  • the dense point cloud P also includes the shooting location point c i of each image I i ; thus, the dense point cloud P can reflect the spatial position relationship between the shooting location point c i and the sampling points .
  • FIG. 3 shows a schematic flow chart of another embodiment of a method for generating a dense point cloud
  • The device for reconstructing the surface of an object can generate a corresponding dense point cloud P based on multiple images I i obtained by shooting the scene from different shooting positions c i .
  • the method may include: S31-S32.
  • The depth map can be obtained by image acquisition devices such as binocular cameras, or multiple monocular or binocular cameras; the depth map can also be obtained by acquisition devices such as ultrasound and laser, and the embodiments of the present disclosure do not limit this.
  • this embodiment can omit the step of generating a dense depth map from the color map in FIG. 2 , and directly use the depth map as the dense depth map.
  • the apparatus for reconstructing the surface of an object may also perform preprocessing on the depth map, for example, changing the angle of the depth map, adjusting its size, and so on.
  • Step S32: fusing the multiple depth maps to obtain a 3D dense point cloud P corresponding to the scene. This step is similar to step S23 above and will not be repeated here.
  • the apparatus for reconstructing the surface of an object may also combine the depth map and the color map to generate a dense point cloud P, which is not limited in the embodiments of the present disclosure.
  • FIG. 4 shows a schematic diagram of an embodiment of the generated dense point cloud.
  • the dense point cloud P includes coordinate points p 1 , p 2 , p 3 and p 4 corresponding to sampling points in multiple images I i .
  • the dense point cloud P also includes shooting position points c 1 and c 2 .
  • the dense point cloud P takes the form of a 3D point cloud showing the spatial relationship between the coordinate point p and the shooting location point c.
  • Fig. 4 only schematically shows the dense point cloud P, and the embodiment of the present disclosure does not limit the number and/or geometric positional relationship of points in the dense point cloud P.
  • the apparatus for reconstructing the surface of an object may generate a tetrahedral mesh based on the shooting location point c i and the coordinate point p. In some embodiments, the apparatus for reconstructing the object surface may generate a tetrahedral mesh based on each point in the updated dense point cloud obtained in step S11. That is to say, in the embodiment of the present disclosure, the apparatus for reconstructing the object surface may use the coordinate point p and the shooting position point from the image to construct the tetrahedral mesh.
  • the device for reconstructing the object surface can perform back projection based on the pixels in the collected image to obtain a three-dimensional point in space, and use the three-dimensional point as a dense point cloud P The coordinate point p.
  • the shooting position point may be the position of the optical center point of the camera.
  • Each shooting location point c i and coordinate point p is a vertex of at least one tetrahedron.
  • the tetrahedra are Delaunay tetrahedra.
  • the device for reconstructing the object surface may perform Delaunay triangulation on the dense point cloud P to construct a three-dimensional Delaunay triangular mesh.
  • All faces are triangular faces whose three vertices are coordinate points p and/or shooting location points c i in the dense point cloud P, and no coordinate point p or shooting location point c i of the dense point cloud P falls inside the circumscribed sphere of any tetrahedron (the empty-circumsphere property). Delaunay triangulation maximizes the minimum interior angle of the formed triangles, thereby improving the uniformity and regularity of the triangular mesh.
  • the method of implementing the Delaunay triangulation may be the Lawson method of pointwise interpolation.
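The Delaunay condition described above rests on an empty-circumcircle/circumsphere test. A minimal sketch of that predicate, shown in 2-D for clarity (3-D Delaunay tetrahedralisation uses the analogous 4x4 in-sphere determinant; the function name and the CCW-orientation convention are assumptions for the example):

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c), with a, b, c in counter-clockwise order.  The Delaunay
    condition requires this to be False for every non-vertex point of the
    point set; the Lawson pointwise-insertion method flips edges until it
    holds."""
    rows = []
    for px, py in (a, b, c):
        # translate so d is the origin; third column is squared distance to d
        rows.append((px - d[0], py - d[1], (px - d[0]) ** 2 + (py - d[1]) ** 2))
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = rows
    det = (ax * (by * cz - bz * cy)
           - ay * (bx * cz - bz * cx)
           + az * (bx * cy - by * cx))
    return det > 0

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # CCW right triangle
```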
  • A common triangular facet may exist between two adjacent tetrahedra τ, also referred to hereinafter as the common face f.
  • FIG. 5 shows a schematic diagram of a tetrahedral mesh corresponding to the dense point cloud P in FIG. 4 .
  • The tetrahedral mesh includes tetrahedra τ 1 , τ 2 and τ 3 .
  • Tetrahedron τ 1 has coordinate points p 1 , p 2 , p 3 and shooting position point c 1 as vertices.
  • Tetrahedron τ 2 has coordinate points p 2 and p 3 and shooting position points c 1 and c 2 as vertices.
  • Tetrahedron τ 3 has coordinate points p 2 , p 3 , p 4 and shooting position point c 2 as vertices.
  • Tetrahedron τ 1 and tetrahedron τ 2 have a common face f 1 .
  • Tetrahedron τ 2 and tetrahedron τ 3 have a common face f 2 .
  • FIG. 5 exemplarily shows an optional tetrahedral grid structure, and the embodiment of the present disclosure does not limit the tetrahedral quantity and positional relationship of the tetrahedral grid.
  • The device for reconstructing the surface of an object can generate a binary label λ ∈ {inside, outside} for each tetrahedron τ in the above-mentioned tetrahedron set T; the binary labels λ of all tetrahedra τ constitute the binary label set L.
  • The label λ of a tetrahedron τ being "inside" indicates that the tetrahedron τ is located inside the surface of the scene or inside the surface of an object in the scene.
  • The label λ of a tetrahedron τ being "outside" indicates that the tetrahedron τ is located outside the surface of the scene or outside the surface of objects in the scene.
  • When the label λ x of one tetrahedron τ x is "inside"
  • and the label λ y of the other tetrahedron τ y is "outside",
  • the common face f of tetrahedra τ x and τ y will be part of the surface of the scene or the surface of an object in the scene.
  • The binary label λ may take any suitable form, such as a binary value, a text label, a label number, an integer value or a real value; the embodiments of the present disclosure are not limited in this respect.
  • The binary label λ may be a value of either 0 or 1.
  • A value of 0 may indicate that the label λ is "inside"
  • and a value of 1 may indicate that the label λ is "outside", or vice versa.
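Given such a labelling, the final surface-extraction step (keep every common face shared by an "inside" and an "outside" tetrahedron) can be sketched as follows; the vertex ids and the frozenset face representation are illustrative assumptions:

```python
from itertools import combinations

def extract_surface(tetrahedra, labels):
    """Extract the reconstructed surface: every triangular face shared by
    two tetrahedra whose binary labels differ ('inside' vs 'outside').
    tetrahedra: list of 4-tuples of vertex ids; labels: parallel list."""
    surface = []
    for (i, ti), (j, tj) in combinations(enumerate(tetrahedra), 2):
        shared = set(ti) & set(tj)
        # a common face has exactly 3 shared vertices; keep it on the boundary
        if len(shared) == 3 and labels[i] != labels[j]:
            surface.append(frozenset(shared))
    return surface

# three tetrahedra sharing faces in a chain, loosely after FIG. 5
tets = [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)]
labs = ["outside", "inside", "inside"]
```

Only the face between the "outside" first tetrahedron and the "inside" second one survives; the face between the two "inside" tetrahedra does not, since it lies entirely inside the object.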
  • The device for reconstructing the surface of an object can generate the binary labels by constructing an energy function (also called a cost function) and solving for the binary label set L that minimizes the energy function.
  • The energy function E energy can refer to formula (1):
  • E energy = Σ τ∈T E first (τ, λ(τ)) + Σ f∈F E second (f, λ(τ), λ(τ'))  (1)
  • where T is the set of tetrahedra τ of the above tetrahedral mesh,
  • F is the set of triangular facets f,
  • λ(τ) is the binary label λ of tetrahedron τ,
  • and the triangular facet f is the common face of tetrahedron τ and tetrahedron τ'.
  • The energy function E energy includes the sum of the first penalty terms E first corresponding to the tetrahedra τ and the sum of the second penalty terms E second corresponding to the common faces f.
  • The first penalty term E first is set based on whether a vertex of the tetrahedron τ is a shooting location point c i or a coordinate point p other than a shooting location point c i ,
  • and the second penalty term E second is set based on the intersection relationship between the lines of sight and the common faces f; here, a line of sight refers to the connection line from a coordinate point p to a shooting position point c i in the above-mentioned 3D dense point cloud P.
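Evaluating this energy for a candidate labelling is straightforward: sum the per-tetrahedron penalties and the per-common-face penalties. A minimal sketch (the toy penalty functions, names and brute-force minimization are assumptions for illustration; the patent solves the minimization with a graph-cut method, per FIG. 8):

```python
def energy(tetrahedra, faces, labels, E_first, E_second):
    """Energy of a candidate binary labelling: sum of first penalty terms
    over tetrahedra plus sum of second penalty terms over common faces.
    faces maps face id -> (tau, tau') the pair of adjacent tetrahedra."""
    unary = sum(E_first(t, labels[t]) for t in tetrahedra)
    pairwise = sum(E_second(f, labels[t], labels[tp]) for f, (t, tp) in faces.items())
    return unary + pairwise

# toy instance: two tetrahedra sharing one common face
tets = ["tau1", "tau2"]
faces = {"f1": ("tau1", "tau2")}
E_first = lambda t, lab: 1.0 if (t == "tau2" and lab == "outside") else 0.0
E_second = lambda f, l1, l2: 0.5 if l1 != l2 else 0.0  # penalty if the cut crosses f

# brute-force search over the 4 labellings (feasible only for toy sizes)
best = min(
    ({"tau1": a, "tau2": b} for a in ("inside", "outside") for b in ("inside", "outside")),
    key=lambda lab: energy(tets, faces, lab, E_first, E_second),
)
```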
  • Black dots represent vertices of the tetrahedra τ in the tetrahedral mesh,
  • and thick solid lines represent triangular facets f of the tetrahedra τ.
  • f includes f 1 , f 2 and f 3 .
  • A vertex of the tetrahedron corresponds to a coordinate point p in the above-mentioned 3D dense point cloud P.
  • The connection line from the coordinate point p to the shooting position point c is called the line of sight p → c.
  • The coordinate point p is called the starting point of the line of sight p → c,
  • and the shooting position point c is called the end point of the line of sight p → c.
  • The distance from the starting point of the line of sight p → c to its end point can be called the length of the line of sight p → c.
  • A line of sight p → c may intersect multiple common faces f. As shown in FIG. 6A, the line of sight p → c intersects the common faces f 1 , f 2 and f 3 in this tetrahedral mesh.
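Deciding which common faces a line of sight crosses is a segment-triangle intersection test. A minimal sketch using the Möller-Trumbore algorithm (the function name and epsilon are illustrative assumptions; the patent does not specify the intersection routine):

```python
def sight_line_crosses_face(p, c, face, eps=1e-9):
    """Test whether the line of sight p -> c (a segment from coordinate point
    p to shooting location point c) intersects the triangular common face.
    face is a triple of 3-D vertices (Moller-Trumbore algorithm)."""
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    cross = lambda u, v: (u[1]*v[2] - u[2]*v[1],
                          u[2]*v[0] - u[0]*v[2],
                          u[0]*v[1] - u[1]*v[0])
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))

    v0, v1, v2 = face
    d = sub(c, p)                      # segment direction
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:                   # segment parallel to the face plane
        return False
    s = sub(p, v0)
    u = dot(s, h) / a                  # barycentric coordinate u
    q = cross(s, e1)
    v = dot(d, q) / a                  # barycentric coordinate v
    t = dot(e2, q) / a                 # parameter along the segment
    return 0 <= u and 0 <= v and u + v <= 1 and 0 <= t <= 1

# a triangle in the plane x = 0 straddling the origin
face = ((0.0, -1.0, -1.0), (0.0, 2.0, -1.0), (0.0, -1.0, 2.0))
```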
  • the device for reconstructing the surface of an object may construct at least one line of sight for each vertex in the tetrahedron grid, as long as the vertex is not the shooting location point c.
  • multiple sight lines corresponding to multiple shooting position points c may be constructed for the coordinate point p.
  • Each coordinate point p in the 3D dense point cloud P is a sampling point on the surface of the scene or object, so the line of sight of each vertex should not pass through the reconstructed object surface.
  • The device can construct the first penalty term E first in the energy function E energy according to the relationship between the lines of sight and the tetrahedra τ, and can construct the second penalty term E second according to the relationship between the lines of sight and the common faces f.
  • The device for reconstructing the object surface may initialize the first penalty term E first and the second penalty term E second to zero, and then calculate the corresponding first penalty term E first and second penalty term E second for each line of sight in turn.
  • ⁇ p is the tetrahedron ⁇ where the vertex p is located
  • ⁇ c is the tetrahedron ⁇ where the shooting position point c is located.
  • the coordinate point p may correspond to multiple tetrahedrons ⁇ .
  • ⁇ p is a tetrahedron ⁇ through which the reverse extension line of the line of sight passes.
  • the shooting location point c may correspond to multiple tetrahedrons ⁇ .
  • ⁇ c is a tetrahedron ⁇ through which the line of sight passes.
  • ⁇ p is located behind the line of sight of vertex p, and is more likely to belong to the inside of the scene.
  • the device for reconstructing the object surface can add to σ_p a preset penalty coefficient λ_v for the binary label λ = "outside".
  • λ_v is a positive number, which can be set by the user as needed.
  • λ_v can be set to 1.
  • by setting λ_v, the possibility of σ_p being labeled "outside" is reduced.
  • the tetrahedron σ_c in which the shooting position point c is located is more likely to belong to the outside of the scene.
  • the device for reconstructing the surface of an object can therefore add to σ_c the preset penalty coefficient λ_v for the label "inside".
  • the penalty coefficient λ_v may also be replaced by another preset value, which is not limited in the embodiments of the present disclosure.
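The per-sight-line accumulation of the first penalty term described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the lookups `tet_behind_point` (tetrahedron crossed by the reverse extension of the sight line behind vertex p) and `tet_of_camera` (tetrahedron containing the shooting position c) are assumed helpers, and λ_v = 1 is the example value given in the text.

```python
from collections import defaultdict

LAMBDA_V = 1.0  # preset penalty coefficient (positive, user-settable)

def accumulate_first_penalty(sight_lines, tet_behind_point, tet_of_camera):
    """sight_lines: iterable of (vertex p, shooting position c) pairs."""
    e_first = defaultdict(float)  # (tetrahedron id, label) -> penalty
    for p, c in sight_lines:
        # The tetrahedron behind p is likely *inside*: penalize "outside".
        e_first[(tet_behind_point(p), "outside")] += LAMBDA_V
        # The camera's tetrahedron is likely *outside*: penalize "inside".
        e_first[(tet_of_camera(c), "inside")] += LAMBDA_V
    return e_first
```

Every sight line thus votes once against mislabeling the tetrahedron behind its endpoint as "outside" and once against mislabeling the camera's tetrahedron as "inside".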
  • the line of sight from the vertex p to the shooting position point c may pass through one or more common faces f. As noted above, a line of sight should not pass through the reconstructed surface, so a second penalty value E_second(f) is added to each common face f that the line of sight passes through.
  • the second penalty value E_second(f_i) is the product of the grid density weight ω_d(f_i), the distance weight ω_v(f_i), the grid quality weight ω_q(f_i), and the preset penalty coefficient λ_v.
  • the second penalty value E_second(f_i) may omit the distance weight ω_v(f_i) and/or the grid quality weight ω_q(f_i).
  • the preset penalty coefficient λ_v is a positive number and can be set as required; for example, λ_v can be set to 1.
  • the sum of the numbers of shooting position points c_i visible from each of the three vertices of the common face f_i is the number of shooting position points corresponding to f_i;
  • for example, if the three vertices of the common face f can see 2, 1 and 3 shooting position points c_i respectively, the number of shooting position points corresponding to f is their sum, 6.
  • the larger this sum, the smaller the grid density weight ω_d(f_i).
  • a larger sum means that the vertices of the common face f_i appear in more images, so the possibility that f_i belongs to the scene or object surface is greater.
  • the grid density weight ω_d(f_i) thus reflects that common faces in denser parts of the mesh are closer to the real object surface.
  • the grid density weight can be calculated by formula (5).
  • V(f_i) denotes the total side length (perimeter) of the common face f_i divided by the number of shooting position points corresponding to f_i; the common face f_i has three vertices, and the number of shooting position points corresponding to f_i is the sum of the numbers of shooting position points c_i corresponding to each vertex.
  • α_d is a grid density control factor, which controls the influence of the grid density weight on the second penalty value.
  • the size of α_d can be set as required, which is not limited by the embodiments of the present disclosure; for example, α_d can be set to 0.8.
  • σ_d is a scale control quantity, which can be used to make ω_d(f) dimensionless.
  • σ_d may, for example, be a quarter of the smallest value of V(f) over all common faces f_i, which is not limited in the embodiments of the present disclosure.
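Formula (5) itself is not reproduced in this text, so the saturating exponential in the sketch below is only an assumed illustration of its stated behaviour (the weight shrinks as the face is seen from more shooting positions, i.e. as V(f) shrinks). What the text does fix: V(f) is the face perimeter divided by the number of shooting position points summed over the three vertices, α_d = 0.8 is an example control factor, and σ_d (e.g. a quarter of the smallest V(f)) makes the ratio dimensionless.

```python
import math

def v_of_face(perimeter, views_per_vertex):
    """V(f): total side length divided by the number of shooting position
    points corresponding to the face (summed over its three vertices)."""
    return perimeter / sum(views_per_vertex)

def density_weight(v_f, alpha_d=0.8, sigma_d=1.0):
    # Assumed monotone form: grows with V(f), bounded above by alpha_d.
    # Any function increasing in V(f) would reproduce the stated behaviour.
    return alpha_d * (1.0 - math.exp(-v_f / sigma_d))
```

With the example counts 2, 1 and 3 from the text, V(f) is the perimeter divided by 6; a face seen from more positions gets a smaller V(f) and hence a smaller density weight.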
  • the second penalty term E_second further includes a distance weight ω_v(f_i): the farther the intersection of the line of sight with the common face f_i is from the starting point of the line of sight, the greater the distance weight ω_v(f_i).
  • the distance weight ω_v(f_i) can be calculated by formula (6).
  • D(f_i) denotes the distance from the intersection of the common face f_i with the line of sight to the starting point p of the line of sight.
  • σ_v is a grid complexity constant, which can be set according to actual needs and is not limited in the embodiments of the present disclosure.
  • embodiments of the present disclosure introduce a truncation distance coefficient into the distance weight ω_v(f_i).
  • the device for reconstructing the surface of an object determines that the distance weight is zero when the ratio between the distance D(f_i) and the line-of-sight length is greater than a first threshold (namely, the truncation distance coefficient).
  • specifically, the device sets the distance weight to zero when the ratio between the distance D(f_i) and the line-of-sight length is greater than the first threshold, and the ratio V(f_i) between the perimeter of the common face f_i and the number of shooting position points (image acquisition points) corresponding to f_i is greater than a second threshold.
  • the truncation distance coefficient may be 1 - S(P).
  • S(P) represents the uncertainty of the origin of the line of sight, and can be calculated according to related methods.
  • alternatively, the truncation distance coefficient may be a constant set as required, which is not limited in the embodiments of the present disclosure.
  • the second threshold may be the scale control quantity σ_d, or another value set as needed, which is not limited in the embodiments of the present disclosure.
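The truncation rule described above can be sketched as follows (function and argument names are ours, not the patent's): the distance weight is forced to zero only when both conditions hold, so that unnecessary distance penalties are not applied to such faces.

```python
def truncated_distance_weight(base_weight, d_f, sight_len, v_f,
                              first_threshold, second_threshold):
    """Return the distance weight after applying the truncation rule.

    base_weight: the distance weight before truncation (per formula (6)).
    d_f: distance D(f) from the sight-line/face intersection to the origin.
    sight_len: length of the line of sight.
    v_f: perimeter of the face divided by its number of shooting positions.
    """
    # Zero out the weight when the intersection lies far along the sight
    # line AND V(f) exceeds the second threshold.
    if d_f / sight_len > first_threshold and v_f > second_threshold:
        return 0.0
    return base_weight
```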
  • FIG. 6B shows a schematic visualization of the weights in the related art.
  • FIG. 6C shows an optional schematic visualization of the weights provided by an embodiment of the present disclosure.
  • the grid density at f_3 is high, and f_3 is far away from the coordinate point P, so it is likely to be a small structure in space.
  • the device for reconstructing the surface of the object resets the weight here to 0, and likewise sets weights farther from the coordinate point P to 0, thereby reducing the probability of applying unnecessary penalties to high-density grid areas and improving the accuracy of the applied penalties.
  • the grid quality weight ω_q(f_i) accounts for the influence of the local mesh shape. Generally speaking, the better the mesh shape and the higher the mesh quality, the more reliable the results obtained with the mesh, and the smaller the grid quality weight ω_q(f_i).
  • the common face f is the face shared by the tetrahedron σ_1 and the tetrahedron σ_2.
  • ω_q(f) can be calculated according to formula (7).
  • φ_1 is the angle between the circumscribed sphere of the tetrahedron σ_1 and the common face f, and φ_2 is the angle between the circumscribed sphere of the tetrahedron σ_2 and the common face f.
  • the angle between a circumscribed sphere and the common face f can be defined as the line-plane angle between the common face f and the line from the center of the circumscribed sphere to any vertex of the common face.
  • the grid quality weight ω_q characterizes the effect of the relative angle between the two tetrahedra σ_1 and σ_2.
  • a smaller relative angle between the two tetrahedra indicates a better shape of the local mesh.
  • the second penalty term E_second(f_i) can be initialized to 0; then, for each sight line that crosses the common face f_i, the corresponding value ω_d(f_i)·ω_v(f_i)·ω_q(f_i)·λ_v is accumulated.
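The accumulation of E_second described above can be sketched as follows. This is an illustrative sketch: the three weight functions are assumed callables supplied by the caller (implementing formulas (5)-(7), which are not reproduced here), and the face-crossing events are assumed to have been computed by ray/triangle intersection beforehand.

```python
from collections import defaultdict

def accumulate_second_penalty(crossings, w_density, w_distance, w_quality,
                              lambda_v=1.0):
    """crossings: iterable of face ids, one entry per (sight line, face)
    intersection event; weights are callables taking a face id."""
    e_second = defaultdict(float)  # face id -> accumulated penalty
    for f in crossings:
        # Each crossing contributes the product of the three weights and
        # the preset penalty coefficient lambda_v.
        e_second[f] += w_density(f) * w_distance(f) * w_quality(f) * lambda_v
    return e_second
```

A face crossed by many sight lines accumulates a proportionally larger penalty against appearing on the extracted surface.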
  • for each tetrahedron σ there is a first penalty term E_first(σ, inner) for the label "inner" and another first penalty term E_first(σ, outer) for the label "outer".
  • the first penalty term E_first(σ, inner) for the label "inner" denotes the penalty for the tetrahedron σ being labeled as inside the scene.
  • the first penalty term E_first(σ, outer) for the label "outer" denotes the penalty for the tetrahedron σ being labeled as outside the scene.
  • the first penalty terms E_first(σ) may be initialized to 0 before the coefficients of the energy function E_energy are calculated.
  • the corresponding preset penalty coefficient λ_v can then be accumulated into the first penalty term E_first(σ, inner).
  • likewise, the corresponding preset penalty coefficient λ_v can be accumulated into the first penalty term E_first(σ, outer).
  • the apparatus for reconstructing the object surface can minimize the energy function E_energy so as to obtain a binary label set L.
  • the energy function E_energy(T, F, L) is the sum of the first penalty terms E_first(σ, λ(σ)) over all tetrahedra σ, plus the sum of the second penalty terms E_second(f, λ(σ), λ(σ′)) over the common faces f of adjacent tetrahedra σ and σ′ with different labels λ.
  • E_second(f_i, λ(σ), λ(σ′)) is the E_second(f_i) described above.
  • depending on the labels λ assigned to the tetrahedra σ in the tetrahedral mesh, the common faces f contributing second penalty terms E_second(f, λ(σ), λ(σ′)) differ, so the sum of the second penalty terms differs, and the value of the energy function E_energy(T, F, L) differs.
  • the device for reconstructing the object surface can therefore solve for the binary label set L that minimizes the energy function E_energy(T, F, L).
  • the problem of minimizing the energy function E_energy(T, F, L) can be solved using the s-t graph cut method.
  • the method for minimizing the energy function E_energy(T, F, L) using the s-t graph cut method includes the following steps.
  • each tetrahedron σ in the tetrahedral mesh is mapped to a graph point ν in a directed graph G, and each common face f serves as a connecting line e between graph points ν of the directed graph G.
  • the graph points ν_1, ν_2 and ν_3 in the directed graph G correspond to the tetrahedra σ_1, σ_2 and σ_3 in FIG. 5, respectively.
  • the connecting lines e_1 and e_2 in the directed graph G correspond to the common faces f_1 and f_2 in FIG. 5, respectively.
  • the first penalty term E_first(σ, outer) of a tetrahedron is the flow capacity of the line between the virtual starting point and the tetrahedron's corresponding graph point.
  • the first penalty term E_first(σ, inner) of a tetrahedron is the flow capacity of the line between the tetrahedron's corresponding graph point and the virtual end point.
  • the virtual source point s is connected to all graph points ν_i via connecting lines.
  • the virtual sink t is connected to all graph points ν_i via connecting lines.
  • the first penalty term E_first(σ_i, outer) of the tetrahedron σ_i is the flow capacity of the connecting line running from the virtual starting point s to the graph point ν_i.
  • the first penalty term E_first(σ_i, inner) of the tetrahedron σ_i is the flow capacity of the connecting line running from the graph point ν_i to the virtual end point t.
  • the second penalty term E_second(f_i) of the common face f_i between two different tetrahedra σ_3 and σ_4 acts as the flow capacity of the line between σ_3 and σ_4.
  • for example, the second penalty term E_second(f_1) of the common face f_1 between the tetrahedra σ_1 and σ_2 is the flow capacity of the line between ν_1 and ν_2.
  • the flow between the tetrahedra σ_3 and σ_4 is bidirectional: it can run from σ_3 to σ_4 or from σ_4 to σ_3.
  • the flow capacities from σ_3 to σ_4 and from σ_4 to σ_3 are the same, both equal to the second penalty term E_second(f_i) of the common face f_i.
  • the flow capacity of a connecting line is the maximum flow that the line allows, and is also referred to as the weight of the line.
  • the directed graph G, after the virtual starting point s and the virtual end point t are added, can be regarded as a network flow graph.
  • in this network flow, only the virtual starting point s generates flow, and only the virtual end point t receives flow.
  • the flow runs from the virtual starting point s, through the graph points ν, to the virtual end point t.
  • for every graph point ν in the directed graph G other than the virtual starting point s and the virtual end point t, the net flow must be 0; that is, all flow from the virtual starting point s eventually reaches the virtual end point t through the connecting lines of the directed graph G.
  • all graph points ν in the directed graph G are divided into two categories: the first category, the graph points labeled "outside", including the virtual starting point s; and the second category, the graph points labeled "inside", including the virtual end point t.
  • the process of dividing all graph points ν into the two classes containing the virtual starting point s and the virtual end point t is called an s-t graph cut.
  • since the net flow of every intermediate graph point ν is zero, the net flow of the network from the virtual starting point s to the virtual end point t equals the net flow on the connecting lines e between the two classes of graph points ν; that is, the total flow is the sum of the flows between pairs of graph points corresponding to the interior and exterior of the object, respectively.
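The s-t graph cut step described above can be sketched with a small Edmonds-Karp max-flow solver. This is a stand-in: the patent does not prescribe a particular solver, and all names below are ours. Tetrahedra become graph nodes, E_first(σ, outer) gives the source-side capacities, E_first(σ, inner) the sink-side capacities, and E_second(f) the equal, bidirectional capacity across each common face; after max flow, nodes still reachable from the source are labeled "outside" and the rest "inside".

```python
from collections import defaultdict, deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max flow; returns (flow value, source-side node set)."""
    neighbors = defaultdict(set)
    for (a, b) in cap:
        neighbors[a].add(b)
        neighbors[b].add(a)  # residual edges may run backwards
    flow = defaultdict(float)

    def residual(u, v):
        return cap.get((u, v), 0.0) - flow[(u, v)]

    total = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in neighbors[u]:
                if v not in parent and residual(u, v) > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual(u, w) for u, w in path)
        for u, w in path:
            flow[(u, w)] += bottleneck
            flow[(w, u)] -= bottleneck
        total += bottleneck
    # Nodes still reachable from s in the residual graph form the cut's
    # source side.
    reachable, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in neighbors[u]:
            if v not in reachable and residual(u, v) > 1e-12:
                reachable.add(v)
                q.append(v)
    return total, reachable

def label_tetrahedra(tets, e_first, faces, e_second):
    cap = {}
    for tet in tets:
        cap[("src", tet)] = e_first.get((tet, "outside"), 0.0)
        cap[(tet, "sink")] = e_first.get((tet, "inside"), 0.0)
    for (t1, t2), w in zip(faces, e_second):
        cap[(t1, t2)] = w  # equal capacity in both directions
        cap[(t2, t1)] = w
    total, reachable = max_flow_min_cut(cap, "src", "sink")
    labels = {tet: ("outside" if tet in reachable else "inside")
              for tet in tets}
    return total, labels
```

By max-flow/min-cut duality, the value of the minimum cut equals the maximum total flow, so saturating the network and reading off the residual reachability yields the energy-minimizing binary label set.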
  • the common faces f between adjacent tetrahedra σ whose labels λ are respectively "inner" and "outer" in the tetrahedral mesh are extracted, and these common faces f are fused together as the reconstructed object surface.
  • the apparatus for reconstructing object surfaces may perform enhancement processing on the extracted common faces f.
  • the enhancement processing may include color rendering of the reconstructed surface, etc., which is not limited in the embodiments of the present disclosure.
  • the apparatus for reconstructing the object surface may also perform smoothing processing on the extracted common faces f.
  • the device for reconstructing the object surface can thus complete the reconstruction of the surface of an object in the scene, or of the entire scene.
  • the embodiment of the present disclosure also provides a device 100 for reconstructing the surface of an object.
  • the device 100 for reconstructing the surface of an object includes: a dense point cloud generation module 110, a mesh generation module 120, a tetrahedron labeling module 130, and a surface extraction module 140.
  • the dense point cloud generating module 110 is configured to generate a corresponding dense point cloud using multiple images obtained by shooting the scene from different shooting positions.
  • the mesh generation module 120 is configured to generate a tetrahedral mesh corresponding to the dense point cloud based on the dense point cloud.
  • the tetrahedron labeling module 130 is configured to determine the binary label of each tetrahedron based on energy function minimization, wherein the binary label indicates whether the tetrahedron is located inside or outside the surface of the object; the surface extraction module 140 is configured to extract the common faces between tetrahedra with different binary labels and to reconstruct the surface of the object based on these common faces; wherein the energy function includes the sum of the first penalty terms corresponding to the tetrahedra and the sum of the second penalty terms corresponding to the common faces; the first penalty term is determined based on the binary label of the corresponding tetrahedron; the second penalty term includes a grid density weight; and the grid density weight characterizes the number of shooting position points corresponding to the common face.
  • the tetrahedron labeling module 130 is further configured to, before generating the binary label for each tetrahedron based on energy function minimization, set the second penalty term for a common face whenever the line of sight from a coordinate point in the dense point cloud to a shooting position point intersects that common face; wherein the second penalty term is the product of the grid density weight and the preset penalty coefficient.
  • the number of shooting location points corresponding to the common surface includes a sum of the numbers of shooting location points corresponding to each of the three vertices of the common surface.
  • the dense point cloud generation module 110 is further configured to perform feature point matching based on the multiple images to obtain multiple common view points, where the common view points represent points in the scene captured from the multiple shooting position points corresponding to the multiple images; and to obtain the coordinate points of the dense point cloud based on the common view points, thereby obtaining the dense point cloud.
  • the dense point cloud generating module 110 is further configured to, before the tetrahedral mesh corresponding to the dense point cloud is generated, query the source images of the coordinate points of the dense point cloud, determine from each source image the corresponding shooting position point, and add the shooting position points to the dense point cloud to obtain an updated dense point cloud.
  • the grid generation module 120 is further configured to generate the tetrahedral mesh corresponding to the dense point cloud based on the updated dense point cloud.
  • the grid generation module 120 is further configured to generate the tetrahedral mesh based on the shooting position points and the coordinate points, wherein the line of sight from each coordinate point of the dense point cloud to a shooting position point is determined based on the type of the tetrahedron's vertex, the vertex being either the shooting position point of the line of sight or the coordinate point of the line of sight.
  • the second penalty term further includes a distance weight; the farther the intersection of the line of sight with the common face is from the starting point of the line of sight, the greater the distance weight.
  • in a case where the ratio between the distance and the length of the line of sight is greater than a first threshold, and the ratio between the sum of the side lengths of the common face and the sum of the numbers of shooting position points corresponding to each of the common face's three vertices is greater than a second threshold, the distance weight is zero.
  • the tetrahedron labeling module 130 is further configured to map each tetrahedron to a graph point in a directed graph, with the common faces serving as connecting lines between the graph points of the directed graph; to set a virtual starting point and a virtual end point in the directed graph, wherein the first penalty term is converted into the flow capacity between a graph point and the virtual starting point or virtual end point, and the second penalty term is converted into the flow capacity between graph points; and to calculate the binary labels with the goal of maximizing the total flow from the virtual starting point to the virtual end point, wherein, in the calculation of the total flow, only the flows between pairs of graph points corresponding respectively to the interior and exterior of the object are summed.
  • the tetrahedron is a Delaunay tetrahedron.
  • the dense point cloud generating module 110 is further configured to generate depth point clouds corresponding to the multiple images as the dense point cloud.
  • FIG. 11 is a schematic structural view of an embodiment of an apparatus 200 for reconstructing an object surface provided by an embodiment of the present disclosure.
  • the apparatus 200 for reconstructing an object surface includes: a processor 210 and a memory 220 .
  • a computer program is stored in the memory 220, and the processor 210 is used to execute the computer program to realize the steps of the method for reconstructing the object surface as described above.
  • FIG. 12 is a schematic structural diagram of an optional computer storage medium provided by an embodiment of the present disclosure.
  • a computer program 310 is stored in the computer storage medium 300 provided by an embodiment of the present disclosure, and when the computer program 310 is executed by a processor, the method for reconstructing the surface of an object as described above is implemented.
  • the computer storage medium 300 can be a medium that can store computer programs, such as a USB flash drive, a mobile hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk; it may also be a server storing the computer program, and the server may send the stored computer program to other devices for execution, or may run the stored computer program itself. From a physical point of view, the computer storage medium 300 may be a combination of multiple entities, such as multiple servers, servers plus storage, or storage plus a mobile hard disk.
  • the device for reconstructing the surface of an object constructs the energy function to include the sum of the first penalty terms corresponding to the tetrahedra and the sum of the second penalty terms corresponding to the common faces; since the second penalty term includes the grid density weight, the negative impact of noise points or abnormal points on the details of the reconstructed object surface is reduced, and the reconstruction accuracy is improved.


Abstract

Disclosed are a method and apparatus for reconstructing a surface of an object, and a computer storage medium and a computer program product. The method comprises: generating a corresponding dense point cloud by using a plurality of images of a scene, the plurality of images being obtained by photographing the scene from a plurality of photographing position points; generating, on the basis of the dense point cloud, a tetrahedral mesh corresponding to the dense point cloud, the vertices of each tetrahedron in the tetrahedral mesh being coordinate points in the dense point cloud; determining a binary tag for each tetrahedron on the basis of energy function minimization, the binary tag characterizing whether the tetrahedron is located inside or outside the surface of the object; and extracting the common surfaces of tetrahedrons having different binary tags to reconstruct the surface of the object on the basis of the common surfaces. The energy function is determined on the basis of a first penalty term corresponding to each tetrahedron and a second penalty term corresponding to each common surface, and a mesh density weight in the second penalty term is used for representing the number of photographing position points corresponding to the common surface. By means of the present disclosure, the accuracy of reconstructing the surface of the object is improved.

Description

Method, device, computer storage medium and computer program product for reconstructing an object surface
Cross References to Related Applications
The embodiments of the present disclosure are based on, and claim priority to, Chinese patent application No. 202111433346.7, filed on November 29, 2021 and entitled "Method, device and computer storage medium for reconstructing the surface of an object", the entire content of which is hereby incorporated into this disclosure by reference.
Technical Field
The present disclosure relates to the field of image processing, and in particular to a method, device, computer storage medium and computer program product for reconstructing an object surface.
Background
3D reconstruction techniques are used to reconstruct the 3D surface of an entire scene, or of objects in the scene, from multiple images of the scene. With the development of technology, 3D reconstruction is applied ever more widely; applications such as virtual reality, augmented reality and digital twins place higher requirements on the accuracy and detail-restoration ability of 3D reconstruction.
In the process of reconstructing an object surface, a dense point cloud of the scene must be generated, and noise points or abnormal points may appear during its generation. These noise points or abnormal points negatively affect the details of the reconstructed object surface, and thus the accuracy of the reconstruction.
Summary of the Invention
Embodiments of the present disclosure provide a method, an apparatus, a computer storage medium and a computer program product for reconstructing an object surface, so as to reduce the influence of noise while retaining more detail.
To solve the above technical problem, one technical solution adopted by the embodiments of the present disclosure is to provide a method for reconstructing the surface of an object. The method includes: generating a corresponding dense point cloud using multiple images obtained by shooting a scene from different shooting position points; generating, based on the dense point cloud, a tetrahedral mesh corresponding to the dense point cloud, where the vertices of each tetrahedron in the tetrahedral mesh are coordinate points in the dense point cloud; generating, based on energy function minimization, a binary label for each tetrahedron, where the binary label characterizes whether the tetrahedron is located inside or outside the object surface; and extracting the common faces between tetrahedra with different binary labels to reconstruct the object surface. The energy function includes the sum of the first penalty terms corresponding to the tetrahedra and the sum of the second penalty terms corresponding to the common faces, and the second penalty term includes a grid density weight.
To solve the above technical problem, another technical solution adopted by the embodiments of the present disclosure is to provide an apparatus for reconstructing an object surface from multiple images of a scene. The apparatus includes: a dense point cloud generation module, configured to generate a corresponding dense point cloud using multiple images obtained by shooting the scene from different shooting position points; a mesh generation module, configured to generate, based on the dense point cloud, a tetrahedral mesh corresponding to the dense point cloud, where the vertices of each tetrahedron in the tetrahedral mesh are coordinate points in the dense point cloud; a tetrahedron labeling module, configured to generate, based on energy function minimization, a binary label for each tetrahedron, where the binary label characterizes whether the tetrahedron is located inside or outside the object surface; and a surface extraction module, configured to extract the common faces between tetrahedra with different binary labels to reconstruct the object surface. The energy function includes the sum of the first penalty terms corresponding to the tetrahedra and the sum of the second penalty terms corresponding to the common faces, and the second penalty term includes a grid density weight.
To solve the above technical problem, another technical solution adopted by the embodiments of the present disclosure is to provide an apparatus for reconstructing an object surface from multiple images of a scene. The apparatus includes a processor and a memory. A computer program is stored in the memory, and the processor is configured to execute the computer program to implement the steps of the above method.
To solve the above technical problem, another technical solution adopted by the embodiments of the present disclosure is to provide a computer storage medium. The computer storage medium stores a computer program which, when executed by a processor, implements the steps of the above method.
To solve the above technical problem, another technical solution adopted by the embodiments of the present disclosure is to provide a computer program product including computer-readable code. The computer program product includes a computer program or instructions which, when run on an electronic device, cause the electronic device to execute the method described above.
By constructing the energy function to include the sum of the first penalty terms corresponding to the tetrahedra and the sum of the second penalty terms corresponding to the common faces, where the second penalty term includes a grid density weight, the embodiments of the present disclosure impose a greater penalty on sparse meshes, since noise points or abnormal points correspond to fewer shooting position points; that is, the grid density weight imposes a greater penalty on noise points or abnormal points, thereby reducing their negative impact on details, retaining more detail, and improving the accuracy of the reconstructed object surface.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an optional method for reconstructing an object surface provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an embodiment of the method for generating a dense point cloud of the present disclosure;
FIG. 3 is a schematic flowchart of another embodiment of the method for generating a dense point cloud of the present disclosure;
FIG. 4 is a schematic diagram of a further embodiment of a generated dense point cloud provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an embodiment of a tetrahedral mesh corresponding to the dense point cloud in FIG. 4;
FIG. 6A is a schematic diagram of a line of sight passing through a common face;
FIG. 6B is a schematic diagram of optional common face weights;
FIG. 6C is a schematic diagram of optional common face weights;
FIG. 7 is a schematic diagram of the angle between adjacent tetrahedra;
FIG. 8 is a schematic diagram of an embodiment of a method for solving the energy function using the graph cut method;
FIG. 9 is a schematic diagram of a directed graph corresponding to the tetrahedral mesh in FIG. 4;
FIG. 10 is a schematic structural diagram of an embodiment of the apparatus for reconstructing an object surface of the present disclosure;
FIG. 11 is a schematic structural diagram of another embodiment of the apparatus for reconstructing an object surface of the present disclosure;
FIG. 12 is a schematic structural diagram of an embodiment of a computer storage medium of the present disclosure.
具体实施方式Detailed Description
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本公开的一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。The following will clearly and completely describe the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. Based on the embodiments in the present disclosure, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present disclosure.
目前，物体表面的重建技术的应用存在各种限制。例如，泊松方法适合重建水密性的点云，基于截断的带符号距离函数算法（truncated signed distance function，TSDF）在大场景的物体表面重建方面精度不足，基于德洛内（Delaunay）剖分的物体表面重建的方法则对噪声十分敏感。相关技术中普遍存在细节丢失，精度不足的情况。Currently, there are various limitations in applying surface reconstruction techniques. For example, the Poisson method is suitable for reconstructing watertight point clouds; the truncated signed distance function (TSDF) based algorithm lacks accuracy for surface reconstruction of objects in large scenes; and Delaunay-triangulation-based object surface reconstruction methods are very sensitive to noise. Loss of detail and insufficient accuracy are common in the related art.
本公开实施例提供一种重建物体表面的方法，该方法可以由重建物体表面的装置的处理器执行。其中，重建物体表面的装置指的可以是服务器、笔记本电脑、平板电脑、台式计算机、智能电视、机顶盒、移动设备（例如移动电话、便携式视频播放器、个人数字助理、专用消息设备、便携式游戏设备）等具备数据处理能力的设备。参阅图1，图1是本公开重建物体表面的方法第一实施例的流程示意图。本公开实施例重建物体表面的方法包括以下步骤。An embodiment of the present disclosure provides a method for reconstructing an object surface, and the method may be executed by a processor of an apparatus for reconstructing an object surface. The apparatus for reconstructing an object surface may be a server, a notebook computer, a tablet computer, a desktop computer, a smart TV, a set-top box, a mobile device (such as a mobile phone, a portable video player, a personal digital assistant, a dedicated messaging device, or a portable game device), or other equipment with data processing capabilities. Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of a method for reconstructing an object surface according to the present disclosure. The method for reconstructing an object surface according to an embodiment of the present disclosure includes the following steps.
S11、利用场景的多张图像I i(其中,i=1,…N,N为多张图像的数量)生成对应的密集点云P。其中,场景的多张图像是从不同拍摄位置点对场景进行拍摄所获得的;密集点云P也称稠密点云P。 S11. Generate a corresponding dense point cloud P by using multiple images I i of the scene (wherein, i=1, ... N, where N is the number of multiple images). Among them, the multiple images of the scene are obtained by shooting the scene from different shooting positions; the dense point cloud P is also called dense point cloud P.
在本公开实施例中,上述场景可以为大学校园、景区、山岭、商业区、居民区等具有各种地形变化、建筑物和物体的大区域场景;所述场景也可以为室内的小面积场景。在一些实施例中,所述场景也可以为一个雕像、一个工业制品(例如,单架飞机)等单个物体。本公开实施例对此不作限制。在一些实施例中,上述场景可以为静止或者大致静止的场景。In the embodiment of the present disclosure, the above-mentioned scenes may be large-area scenes with various terrain changes, buildings and objects, such as university campuses, scenic spots, mountains, commercial areas, and residential areas; the scenes may also be indoor small-area scenes . In some embodiments, the scene may also be a single object such as a statue, an industrial product (for example, a single airplane). Embodiments of the present disclosure do not limit this. In some embodiments, the aforementioned scene may be a static or substantially static scene.
在本公开实施例中，不同拍摄位置点可以包括陆地、空中、水中等拍摄位置点。如此，采集装置可以从不同拍摄位置点对同一场景进行拍摄，得到多张图像I i；重建物体表面的装置可以从采集装置获取多张图像I i；其中，采集装置可以为重建物体表面的装置内置的装置，也可以为重建物体表面的装置以外的装置，对此，本公开实施例不作限制。 In the embodiments of the present disclosure, the different shooting position points may include shooting position points on land, in the air, in water, and the like. In this way, the acquisition device can shoot the same scene from different shooting position points to obtain multiple images I i ; the apparatus for reconstructing the object surface can obtain the multiple images I i from the acquisition device. The acquisition device may be a device built into the apparatus for reconstructing the object surface, or may be a device other than the apparatus for reconstructing the object surface; this is not limited by the embodiments of the present disclosure.
在本公开的一些实施例中，采集装置可以包括无人机和陆基摄像机中的至少一种；采集装置可以包括采集平台或采集设备中的至少一种；对于采集装置，可以根据需要设置，本公开实施例不作限制。In some embodiments of the present disclosure, the acquisition device may include at least one of an unmanned aerial vehicle and a ground-based camera; the acquisition device may include at least one of an acquisition platform or acquisition equipment; the acquisition device may be configured as required, which is not limited by the embodiments of the present disclosure.
在本公开实施例中,采集装置所拍摄的多张图像I i可以是深度图像,也可以是RGB(Red-Green-Blue)图像。多张图像I i各自包括同一场景的至少一部分。多张图像I i中的至少两张包含场景重叠部分。例如,在某住宅区的多张图像I i中,图像I m和I n都包括同一信号塔,则该信号塔的区域可以为图像I m和I n的场景重叠部分。多张图像I i之间的场景重叠部分越多,本公开实施例中的表面重建过程将越容易。 In the embodiment of the present disclosure, the multiple images I i captured by the acquisition device may be depth images or RGB (Red-Green-Blue) images. The multiple images I i each include at least a part of the same scene. At least two of the plurality of images I i contain scene overlapping parts. For example, in multiple images I i of a certain residential area, both images Im and In include the same signal tower, then the area of the signal tower may be the scene overlapping part of images Im and In . The more scene overlapping parts among the multiple images I i , the easier the surface reconstruction process in the embodiments of the present disclosure will be.
在本公开实施例中,采集装置在拍摄多张图像I i的同时,可以存储每张图像对应的相机姿态,如此,重建物体表面的装置可以从采集装置中获取每张图像对应的相机姿态;重建物体表面的装置也可以在获取多张图像I i后,基于多张图像I i计算得到每张图像对应的相机姿态;对于相机姿态的获取方式,本公开实施例不作限制。 In the embodiment of the present disclosure, the acquisition device can store the camera pose corresponding to each image while taking multiple images I i , so that the device for reconstructing the object surface can obtain the camera pose corresponding to each image from the acquisition device; The apparatus for reconstructing the surface of an object may also calculate the camera pose corresponding to each image based on the multiple images I i after acquiring the multiple images I i ; the embodiment of the present disclosure does not limit the acquisition method of the camera pose.
在一些实施例中,相机姿态可以包括相机拍摄位置c i和相机拍摄角度。 In some embodiments, the camera pose may include a camera shooting position c i and a camera shooting angle.
在本公开实施例中,相机拍摄位置c i也可以称为拍摄位置点c i或相机位置点信息c i。拍摄位置点c i可以为拍摄对应图像I i时,相机相对于图像中的景物或者特征的相对信息,或者相机在三维空间(例如,与场景的物理现实空间相对应的三维空间)中的空间坐标。 In the embodiment of the present disclosure, the camera shooting position c i may also be referred to as shooting position point c i or camera position point information c i . The shooting position point c i can be the relative information of the camera relative to the scene or feature in the image when shooting the corresponding image I i , or the space of the camera in the three-dimensional space (for example, the three-dimensional space corresponding to the physical reality space of the scene). coordinate.
在一些实施例中,上述相机姿态也可以从多张图像I i计算得到。对于相机姿态的获取方式,本公开实施例不作限制。 In some embodiments, the above camera pose can also be calculated from multiple images I i . The embodiment of the present disclosure does not limit the manner of acquiring the pose of the camera.
在本公开实施例中,在获取多张图像I i之后,重建物体表面的装置可以从多张图像I i生成对应的密集点云P。密集点云P包括第一类点p,第一类点也可以称为坐标点p;每个坐标点p为来自上述多张图像I i中的至少一张图像的点。该点可以是采样点或通过插值等方法得到的点。在一些实施例中,坐标点p包括该点的空间坐标信息。 In the embodiment of the present disclosure, after acquiring multiple images I i , the apparatus for reconstructing the object surface may generate corresponding dense point clouds P from the multiple images I i . The dense point cloud P includes the first type of points p, and the first type of points can also be called coordinate points p; each coordinate point p is a point from at least one image among the above-mentioned plurality of images I i . The point can be a sampling point or a point obtained by methods such as interpolation. In some embodiments, the coordinate point p includes spatial coordinate information of the point.
在一些实施例中，密集点云P还包括第二类点c i或拍摄位置点c i，第二类点也可以称为拍摄位置点c i；也就是说，密集点云P还包括与多张图像I i中的每一张对应的拍摄位置点c iIn some embodiments, the dense point cloud P further includes second-type points c i , and the second-type points may also be called shooting position points c i ; that is to say, the dense point cloud P further includes a shooting position point c i corresponding to each of the multiple images I i .
在一些实施例中,重建物体表面的装置可以对多张图像I i进行特征匹配,得到多张图像中的至少两张图像的拍摄摄像头的共视点,再基于这些共视点得到密集点云P中的坐标点p。其中,至少两张图像的共视点为至少两张图像的拍摄摄像头都能看到的点。相应地,由该共视点所得到的坐标点p为来源于这两张图像的坐标点p。 In some embodiments, the device for reconstructing the object surface can perform feature matching on multiple images I i to obtain the common view points of the shooting cameras of at least two images in the multiple images, and then obtain the dense point cloud P based on these common view points. The coordinate point p. Wherein, the common view point of at least two images is a point that can be seen by the shooting cameras of at least two images. Correspondingly, the coordinate point p obtained from the common view point is the coordinate point p derived from the two images.
在本公开实施例中,查询密集点云P中的坐标点的来源图像,根据该来源图像确定出来源图像对应的拍摄位置点。将该拍摄位置点添加至密集点云中,得到更新的密集点云。更新后的密集点云P将包括来源于图像的坐标点p以及拍摄位置点。在一些实施例中,拍摄位置点可以为拍摄摄像头的光心点。In the embodiment of the present disclosure, the source image of the coordinate point in the dense point cloud P is queried, and the shooting location point corresponding to the source image is determined according to the source image. The shooting location point is added to the dense point cloud to obtain an updated dense point cloud. The updated dense point cloud P will include the coordinate point p and the shooting location point from the image. In some embodiments, the shooting location point may be the optical center point of the shooting camera.
在本公开的一些实施例中,S11中利用场景的多张图像I i生成对应的密集点云P的实现,参考图2,图2示出本公开生成密集点云的方法的一个实施例的流程图,该方法包括:S21-S23。 In some embodiments of the present disclosure, in S11, multiple images I i of the scene are used to generate a corresponding dense point cloud P. Referring to FIG. 2, FIG. 2 shows an embodiment of the method for generating a dense point cloud in the present disclosure. Flow chart, the method includes: S21-S23.
S21,从不同拍摄位置点c i对场景进行拍摄,得到多张图像I i。所述多张图像I i是色彩图。 S21. Shoot the scene from different shooting positions c i to obtain multiple images I i . The plurality of images I i are color maps.
在本公开实施例中,色彩图可以是RGB图像或其他类型的色彩图,本公开实施例对此不作限制。In the embodiment of the present disclosure, the color map may be an RGB image or other types of color map, which is not limited in the embodiment of the present disclosure.
S22,为多张图像I i中的每一张估计稠密深度地图。 S22. Estimate a dense depth map for each of the plurality of images I i .
在本公开实施例中，重建物体表面的装置在得到多张图像后，可以为多张图像中的每一张图像进行深度估计，得到每一张图像对应的稠密深度地图；其中，深度估计的方法可以包括：In the embodiments of the present disclosure, after obtaining the multiple images, the apparatus for reconstructing the object surface may perform depth estimation on each of the multiple images to obtain a dense depth map corresponding to each image; the depth estimation method may include the following:
在一些实施例中,S22中为多张图像I i中的每一张估计稠密深度地图的实现,可以包括:使用增量式SFM(structure-from-motion)方法为输入的多张图像I i重建稀疏地图。 In some embodiments, the implementation of estimating a dense depth map for each of the multiple images I i in S22 may include: using an incremental SFM (structure-from-motion) method for the input multiple images I i Rebuild a sparse map.
例如，提取每一张图像的SIFT（Scale-Invariant Feature Transform，尺度不变特征变换）特征，对多张图像I i中的SIFT特征进行匹配，重建稀疏地图。该稀疏地图为包括至少一部分或全部SIFT特征的关键点集合。 For example, the SIFT (Scale-Invariant Feature Transform) features of each image are extracted, the SIFT features in the multiple images I i are matched, and a sparse map is reconstructed. The sparse map is a set of key points including at least a part or all of the SIFT features.
在一些实施例中,SIFT是用于描述图像处理领域中的尺度不变性,通过SIFT可在图像中检测出关键点。SIFT特征是对物体上的一些局部外观的关键点,进行SIFT变换得到的。由于SIFT特征与图像的大小和旋转无关,且对于光线、噪声、微视角改变的容忍度较高,因此,SIFT特征高度显著、易辨识而且相对容易提取,不容易被误认。在一些实施例中,重建物体表面的装置只需要3个以上的SIFT特征就足以计算出关键点位置与方位。In some embodiments, SIFT is used to describe scale invariance in the field of image processing, and key points can be detected in an image through SIFT. The SIFT feature is obtained by performing SIFT transformation on some key points of local appearance on the object. Since the SIFT feature has nothing to do with the size and rotation of the image, and has a high tolerance for light, noise, and micro-angle changes, the SIFT feature is highly significant, easy to identify and relatively easy to extract, and is not easy to be misidentified. In some embodiments, the device for reconstructing the object surface only needs more than three SIFT features to calculate the position and orientation of key points.
在一些实施例中,S22中为多张图像I i中的每一张估计稠密深度地图的实现,还可以包括:基于多张图像I i中的至少一些图像的SIFT特征之间的匹配关系,将多张图像I i分成多个聚类,且为每个聚类中的每一张图像I i估计一个稠密深度地图。 In some embodiments, the implementation of estimating a dense depth map for each of the multiple images I i in S22 may further include: based on the matching relationship between SIFT features of at least some of the multiple images I i , Divide images I i into clusters and estimate a dense depth map for each image I i in each cluster.
在本公开实施例中,重建物体表面的装置可以将包括尽可能多相同SIFT特征的图像归为一个聚类,从而得到多个聚类。多个聚类的数目可以根据具体需要设定,本公开实施例对此不作限制。In the embodiment of the present disclosure, the apparatus for reconstructing the object surface may classify images including as many identical SIFT features as possible into one cluster, thereby obtaining multiple clusters. The number of multiple clusters can be set according to specific needs, which is not limited in this embodiment of the present disclosure.
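The disclosure does not name a concrete clustering algorithm; as a rough, hypothetical sketch of the idea above (grouping images that share many matched SIFT feature IDs), a single greedy pass could look like this. The `min_shared` threshold and the `image_features` layout are illustrative assumptions, not from the source:

```python
def cluster_images(image_features, min_shared=3):
    """Greedily group images that share at least `min_shared` matched
    SIFT feature IDs, merging each image into the first cluster whose
    accumulated feature set overlaps it enough."""
    clusters = []  # each cluster: {"imgs": [...], "feats": set(...)}
    for img, feats in image_features.items():
        for cluster in clusters:
            if len(feats & cluster["feats"]) >= min_shared:
                cluster["imgs"].append(img)
                cluster["feats"] |= feats
                break
        else:
            clusters.append({"imgs": [img], "feats": set(feats)})
    return [c["imgs"] for c in clusters]

# "I1" and "I2" share three feature IDs and fall into one cluster;
# "I3" shares none and forms its own cluster.
print(cluster_images({
    "I1": {1, 2, 3, 4},
    "I2": {2, 3, 4, 5},
    "I3": {10, 11, 12},
}))
# → [['I1', 'I2'], ['I3']]
```

A production pipeline would instead build a view graph from pairwise match counts, but the grouping criterion is the same.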
在本公开实施例中,由于稀疏地图为包括至少一部分或全部SIFT特征的关键点集合,且只需要3个以上的SIFT特征就足以计算出关键点位置与方位。通过在每个聚类中的图像之间进行SIFT特征匹配,可以为每个聚类中的每一张图像估计一个稠密深度地图。其中,特征匹配用于建立特征之间的对应关系,特征匹配可以将空间的同一特征在不同视图中的映像对应起来。所生成的稠密深度地图包括来自对应图像中的采样点。在一些实施例中,稠密深度地图包括采样点的深度信息。在一些实施例中,稠密深度地图包括采样点的三维坐标信息。In the embodiment of the present disclosure, since the sparse map is a set of key points including at least a part or all of the SIFT features, and more than three SIFT features are enough to calculate the position and orientation of the key points. By performing SIFT feature matching between images in each cluster, a dense depth map can be estimated for each image in each cluster. Among them, feature matching is used to establish the correspondence between features, and feature matching can correspond to the images of the same feature in different views in space. The generated dense depth map includes sample points from the corresponding image. In some embodiments, the dense depth map includes depth information of sampling points. In some embodiments, the dense depth map includes three-dimensional coordinate information of sampling points.
需要说明的是,稠密深度地图中的稠密(也称为密集)是与稀疏地图中的稀疏相比较而言。在本公开实施例中,稀疏地图可以包含与SIFT特征对应的关键点,这里,与SIFT特征对应的关键点相对少而稀疏,而稠密深度地图中点则更稠密,因而能保存更多的细节信息。It should be noted that the density (also known as dense) in a dense depth map is compared with the sparseness in a sparse map. In the embodiment of the present disclosure, the sparse map may contain key points corresponding to SIFT features. Here, the key points corresponding to SIFT features are relatively few and sparse, while the points in the dense depth map are denser, so more details can be preserved. information.
S23,将与每一张图像对应的稠密深度地图融合为密集点云P。S23, merging the dense depth map corresponding to each image into a dense point cloud P.
在本公开实施例中,重建物体表面的装置在得到每一张图像对应的稠密深度地图后,可以将多张图像对应的多个稠密深度地图进行融合处理,得到密集点云P。在一些实施例中,重建物体表面的装置可以根据不同稠密深度地图对应的图像的几何形状和深度连续性关系,将稠密深度地图融合为密集点云P。In the embodiment of the present disclosure, after obtaining the dense depth map corresponding to each image, the device for reconstructing the object surface may perform fusion processing on multiple dense depth maps corresponding to multiple images to obtain a dense point cloud P. In some embodiments, the device for reconstructing the surface of an object may fuse the dense depth maps into a dense point cloud P according to the geometric shape and depth continuity relationship of images corresponding to different dense depth maps.
例如，对于与一张图像I l对应的稠密深度地图中的一个点A以及与图像I m对应的稠密深度地图中的点B，在点A和点B都对应于场景中的同一空间坐标位置，或者点A和点B在场景中的空间距离小于空间距离阈值的情况下，点A和点B可以融合为密集点云P中的同一点p h，且点p h同时来源于图像I l和图像I m。换言之，点p h可以从图像I l的拍摄位置点c l和图像I m的拍摄位置点c m同时看到（下标l和m表示图像与拍摄位置点的一一对应关系），或者点p h可以看到图像I l的拍摄位置点c l和图像I m的拍摄位置点c m。点p h与图像I l的拍摄位置点c l和图像I m的拍摄位置点c m对应；在点A和点B在场景中的空间距离小于空间距离阈值的情况下，点p h可以为从点A和点B插值得到的点；其中，空间距离阈值可以根据需要设置；对此，本公开实施例不作限制。 For example, for a point A in the dense depth map corresponding to an image I l and a point B in the dense depth map corresponding to an image I m , if point A and point B both correspond to the same spatial coordinate position in the scene, or the spatial distance between point A and point B in the scene is less than a spatial distance threshold, point A and point B may be fused into the same point p h in the dense point cloud P, and point p h originates from both image I l and image I m . In other words, point p h can be seen simultaneously from the shooting position point c l of image I l and the shooting position point c m of image I m (the subscripts l and m indicate the one-to-one correspondence between images and shooting position points), or point p h can see the shooting position point c l of image I l and the shooting position point c m of image I m . Point p h corresponds to the shooting position point c l of image I l and the shooting position point c m of image I m . In the case where the spatial distance between point A and point B in the scene is less than the spatial distance threshold, point p h may be a point obtained by interpolation from point A and point B; the spatial distance threshold may be set as required, which is not limited by the embodiments of the present disclosure.
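The fusion rule just described can be sketched in a few lines. This is only an illustration under assumed data layouts (the threshold value, the running-average merge, and the `(xyz, source image)` representation are hypothetical; the source does not prescribe them):

```python
import math

def fuse_points(points, threshold=0.05):
    """Fuse 3D points whose spatial distance is below `threshold`.

    points: iterable of (xyz tuple, source image id). Merged points keep
    a running-average position and the union of their source images, so
    a fused point is "seen" from every corresponding shooting position.
    Returns a list of (position, set of source image ids).
    """
    fused = []  # each entry: [position, source ids, merge count]
    for xyz, src in points:
        for entry in fused:
            if math.dist(xyz, entry[0]) < threshold:
                n = entry[2]
                entry[0] = tuple((c * n + v) / (n + 1)
                                 for c, v in zip(entry[0], xyz))
                entry[1].add(src)
                entry[2] += 1
                break
        else:
            fused.append([tuple(xyz), {src}, 1])
    return [(pos, srcs) for pos, srcs, _ in fused]
```

For example, a point A at (0, 0, 0) from image I l and a point B at (0.01, 0, 0) from image I m fuse into one interpolated point visible from both shooting positions, while a distant point stays separate.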
在本公开的一些实施例中,密集点云P中的一些坐标点p可以从与多于两个图像对应的多于两个拍摄位置点看到。在本公开的一些实施例,密集点云P中的一些坐标点p仅可以从与一个图像对应的一个拍摄位置点看到。In some embodiments of the present disclosure, some coordinate points p in the dense point cloud P can be seen from more than two shooting location points corresponding to more than two images. In some embodiments of the present disclosure, some coordinate points p in the dense point cloud P can only be seen from one shooting location point corresponding to one image.
在一些实施例中,密集点云P经过至少两次融合步骤形成。In some embodiments, the dense point cloud P is formed through at least two fusion steps.
在本公开实施例中,重建物体表面的装置可以先将多个聚类中每个聚类的多张稠密深度地图融合成聚类密集点云,之后将多个聚类对应的多个聚类密集点云融合成整个场景的密集点云P。In the embodiment of the present disclosure, the device for reconstructing the surface of an object may first fuse multiple dense depth maps of each cluster in multiple clusters into a cluster dense point cloud, and then combine multiple clusters corresponding to multiple clusters The dense point cloud is fused into a dense point cloud P for the entire scene.
可以理解的是，由于通过将每一个聚类中的多张稠密深度地图融合成对应的一个聚类密集点云，然后再将所有聚类密集点云融合成整个场景的密集点云P；如此，重建物体表面的装置可以在内存中单次只加载一个或多个聚类中的稠密深度地图进行融合，而不用同时加载所有稠密深度地图，从而降低密集点云P生成过程中的内存消耗，增加了对大尺度的高精度场景重建的适用性，从而提高了重建物体表面的鲁棒性。It can be understood that, by fusing the multiple dense depth maps in each cluster into one corresponding cluster dense point cloud and then fusing all cluster dense point clouds into the dense point cloud P of the entire scene, the apparatus for reconstructing the object surface can load only the dense depth maps of one or a few clusters into memory at a time for fusion, instead of loading all dense depth maps simultaneously. This reduces memory consumption during generation of the dense point cloud P, increases applicability to large-scale high-precision scene reconstruction, and thereby improves the robustness of object surface reconstruction.
在整个场景的密集点云P中，多张图像I i中的采样点可以通过投射到3D空间中的3D点云的坐标点p的形式表现出来，从而示出采样点之间的3D几何关系。换言之，密集点云P是与多张图像I i对应的深度点云或者3D点云。 In the dense point cloud P of the entire scene, the sampling points in the multiple images I i can be represented in the form of coordinate points p of a 3D point cloud projected into 3D space, thereby showing the 3D geometric relationships between the sampling points. In other words, the dense point cloud P is a depth point cloud or 3D point cloud corresponding to the multiple images I i .
在本公开的一些实施例中,密集点云P还包括每个图像I i的拍摄位置点c i;如此,密集点云P可以体现出拍摄位置点c i与采样点之间的空间位置关系。 In some embodiments of the present disclosure, the dense point cloud P also includes the shooting location point c i of each image I i ; thus, the dense point cloud P can reflect the spatial position relationship between the shooting location point c i and the sampling points .
参考图3,图3示出生成密集点云的方法的又一实施例的流程示意图;在图3中,重建物体表面的装置可以根据从不同拍摄位置点c i对场景拍摄所获得的多张图像I i生成对应的密集点云P。如图3所示,该方法可以包括:S31-S32。 Referring to FIG. 3 , FIG. 3 shows a schematic flow chart of another embodiment of a method for generating a dense point cloud; in FIG. 3 , the device for reconstructing the surface of an object can be based on multiple images obtained by shooting the scene from different shooting positions c i Images Ii generate corresponding dense point clouds P. As shown in Fig. 3, the method may include: S31-S32.
S31,从不同拍摄位置点c i对场景进行拍摄获得多张深度图。 S31. Shoot the scene from different shooting position points c i to obtain multiple depth maps.
在本公开实施例中，深度图可以通过双目摄像头、多个单目或双目摄像头等图像采集设备获取；深度图还可以利用超声、激光等图像采集设备获取，本公开实施例对此不作限制。In the embodiments of the present disclosure, the depth map may be acquired by an image acquisition device such as a binocular camera or multiple monocular or binocular cameras; the depth map may also be acquired by an image acquisition device using ultrasound, laser, or the like, which is not limited by the embodiments of the present disclosure.
由于直接采用深度图,该实施例可以省略图2中从色彩图生成稠密深度地图的步骤,而直接将深度图用作稠密深度地图。在一些实施例中,重建物体表面的装置也可以对深度图进行预处理,例如,变换深度图的角度、调整其大小等。Since the depth map is directly used, this embodiment can omit the step of generating a dense depth map from the color map in FIG. 2 , and directly use the depth map as the dense depth map. In some embodiments, the apparatus for reconstructing the surface of an object may also perform preprocessing on the depth map, for example, changing the angle of the depth map, adjusting its size, and so on.
S32,融合多张深度图,得到与场景对应的3D密集点云P。该步骤与上述步骤S23类似,在此不再赘述。S32, fusing multiple depth maps to obtain a 3D dense point cloud P corresponding to the scene. This step is similar to the above step S23, and will not be repeated here.
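Turning depth maps into 3D points requires back-projecting each valid depth pixel through the camera model (the source later mentions back-projecting image pixels into 3D points in S12). A minimal sketch under a pinhole-camera assumption follows; the intrinsics fx, fy, cx, cy are hypothetical parameters, not values from the source:

```python
def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map into camera-space 3D points with the
    pinhole model: X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d.

    depth: 2D list indexed as depth[v][u]; a value of 0 marks a pixel
    with no depth measurement. fx, fy, cx, cy are the (assumed)
    pinhole intrinsics. Returns a list of (X, Y, Z) tuples.
    """
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0:
                points.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return points

# A single valid pixel at (u=1, v=0) with depth 2.0:
print(backproject([[0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5))
# → [(1.0, -1.0, 2.0)]
```

In practice the camera-space points would additionally be transformed by each image's camera pose before fusion, so that all depth maps land in one world coordinate frame.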
除了上述方法外,重建物体表面的装置也可以将深度图和色彩图结合在一起,生成密集点云P,本公开实施例对此不作限制。In addition to the above method, the apparatus for reconstructing the surface of an object may also combine the depth map and the color map to generate a dense point cloud P, which is not limited in the embodiments of the present disclosure.
参考图4，图4示出生成的密集点云的再一实施例的示意图。如图4所示，密集点云P包括与多张图像I i中的采样点对应的坐标点p 1、p 2、p 3以及p 4。密集点云P还包括拍摄位置点c 1和c 2。密集点云P表现为3D点云的形式，示出了坐标点p和拍摄位置点c之间的空间关系。图4仅以示意的方式示出密集点云P，对于密集点云P中点的数量和/或几何位置关系等设置，本公开实施例不作限制。 Referring to FIG. 4, FIG. 4 shows a schematic diagram of still another embodiment of the generated dense point cloud. As shown in FIG. 4, the dense point cloud P includes coordinate points p 1 , p 2 , p 3 and p 4 corresponding to sampling points in the multiple images I i . The dense point cloud P also includes shooting position points c 1 and c 2 . The dense point cloud P takes the form of a 3D point cloud, showing the spatial relationship between the coordinate points p and the shooting position points c. FIG. 4 only schematically shows the dense point cloud P; settings such as the number of points and/or the geometric positional relationships in the dense point cloud P are not limited by the embodiments of the present disclosure.
S12,基于密集点云P,生成与密集点云P对应的四面体网格。其中,四面体网格中的每个四面体τ的顶点为密集点云P中的坐标点p和/或拍摄位置点c iS12. Based on the dense point cloud P, generate a tetrahedral mesh corresponding to the dense point cloud P. Wherein, the vertex of each tetrahedron τ in the tetrahedral grid is the coordinate point p and/or the shooting location point c i in the dense point cloud P.
在一些实施例中,重建物体表面的装置可以基于拍摄位置点c i和坐标点p生成四面体网格。在一些实施例中,重建物体表面的装置可以基于步骤S11中所得到的更新的密集点云中的每个点生成四面体网格。也就是说,本公开实施例中,重建物体表面的装置可以采用来源于图像的坐标点p以及拍摄位置点对四面体网格进行构建。在一些实施例中,对于来源于图像的坐标点p,重建物体表面的装置可以基于采集到的图像中的像素点进行反投影,得到空间中的三维点,将该三维点作为密集点云P的坐标点p。在一些实施例中,拍摄位置点可以为摄像头的光心点的位置。每个拍摄位置点c i和坐标点p至少为一个四面体的顶点。在一些实施例中,四面体是Delaunay四面体。在一些实施例中,重建物体表面的装置可以对密集点云P做Delaunay三角剖分,构建三维Delaunay三角网格。在Delaunay三角网格中,所有的面都是三角面片,所有三角面片的三个顶点为密集点云P中的坐标点p和/或拍摄位置点c i,且所有三角面片的外接圆内不包含密集点云P中的坐标点p和拍摄位置点c i。Delaunay三角剖分能够使所形成的三角形中的最小内角最大化,从而提高三角网格的均匀性和规则程度。 In some embodiments, the apparatus for reconstructing the surface of an object may generate a tetrahedral mesh based on the shooting location point c i and the coordinate point p. In some embodiments, the apparatus for reconstructing the object surface may generate a tetrahedral mesh based on each point in the updated dense point cloud obtained in step S11. That is to say, in the embodiment of the present disclosure, the apparatus for reconstructing the object surface may use the coordinate point p and the shooting position point from the image to construct the tetrahedral mesh. In some embodiments, for the coordinate point p derived from the image, the device for reconstructing the object surface can perform back projection based on the pixels in the collected image to obtain a three-dimensional point in space, and use the three-dimensional point as a dense point cloud P The coordinate point p. In some embodiments, the shooting position point may be the position of the optical center point of the camera. Each shooting location point ci and coordinate point p is at least a vertex of a tetrahedron. In some embodiments, the tetrahedra are Delaunay tetrahedra. In some embodiments, the device for reconstructing the object surface may perform Delaunay triangulation on the dense point cloud P to construct a three-dimensional Delaunay triangular mesh. 
In the Delaunay triangular mesh, all faces are triangular facets; the three vertices of each triangular facet are coordinate points p and/or shooting position points c i in the dense point cloud P, and the circumscribed circle of each triangular facet contains no coordinate point p or shooting position point c i of the dense point cloud P. Delaunay triangulation maximizes the minimum interior angle of the triangles formed, thereby improving the uniformity and regularity of the triangular mesh.
需要说明的是，对于每一个密集点云P，其三维Delaunay三角网格是唯一的。在一些实施例中，实现Delaunay三角剖分的方法可以是逐点插入的Lawson方法。It should be noted that for each dense point cloud P, its three-dimensional Delaunay triangular mesh is unique. In some embodiments, the Delaunay triangulation may be implemented using the Lawson point-by-point insertion method.
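The defining empty-circumscribed-sphere property of a Delaunay tetrahedralization (the 3D analogue of the empty circumscribed circle mentioned above) can be checked numerically. The sketch below is only a verification helper built on Cramer's rule, not the Lawson insertion algorithm itself:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def circumsphere(a, b, c, d):
    """Circumscribed sphere of the non-degenerate tetrahedron (a, b, c, d).

    The center x satisfies |x - a| = |x - q| for q in {b, c, d}, which
    expands to the linear system 2(q - a)·x = |q|^2 - |a|^2, solved here
    by Cramer's rule. Returns (center, radius).
    """
    rows, rhs = [], []
    for q in (b, c, d):
        rows.append([2 * (q[i] - a[i]) for i in range(3)])
        rhs.append(sum(q[i] ** 2 - a[i] ** 2 for i in range(3)))
    det = det3(rows)
    center = []
    for col in range(3):
        m = [r[:] for r in rows]
        for i in range(3):
            m[i][col] = rhs[i]
        center.append(det3(m) / det)
    return tuple(center), math.dist(center, a)

# Unit-corner tetrahedron: center (0.5, 0.5, 0.5), radius sqrt(0.75).
center, radius = circumsphere((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
# A Delaunay tetrahedron requires every other point of the cloud to lie
# outside this sphere, i.e. math.dist(center, p) > radius.
```

In practice one would use a robust geometric predicate library (exact arithmetic) rather than floating-point Cramer's rule, but the property being tested is the same.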
在本公开实施例中，重建物体表面的装置可以得到四面体集合T={τ}以及四面体的三角面片集合F={f}。相邻的两个四面体τ之间可以包括公共的三角面片，下文也称公共面f。In the embodiments of the present disclosure, the apparatus for reconstructing the surface of an object can obtain a set of tetrahedrons T={τ} and a set of triangular facets of the tetrahedrons F={f}. Two adjacent tetrahedrons τ may share a common triangular facet, also referred to as the common face f hereinafter.
参考图5,图5示出与图4中的密集点云P对应的四面体网格的示意图。如图5所示,四面体网格包括四面体τ 1、τ 2和τ 3。其中,四面体τ 1以坐标点p 1、p 2、p 3以及拍摄位置点c 1为顶点。四面体τ 2以坐标点p 2、p 3以及拍摄位置点c 1、c 2为顶点。四面体τ 3以坐标点p 2、p 3、p 4以及拍摄位置点c 2为顶点。四面体τ 1和四面体τ 2具有公共面f 1。四面体τ 2和四面体τ 3具有公共面f 2。图5示例性地示出了一种可选的四面体网格的结构,对于四面体网格的四面体数量和位置关系等,本公开实施例不作限制。 Referring to FIG. 5 , FIG. 5 shows a schematic diagram of a tetrahedral mesh corresponding to the dense point cloud P in FIG. 4 . As shown in FIG. 5 , the tetrahedral grid includes tetrahedra τ 1 , τ 2 and τ 3 . Among them, tetrahedron τ 1 has coordinate points p 1 , p 2 , p 3 and shooting position point c 1 as vertices. Tetrahedron τ 2 has coordinate points p 2 and p 3 and shooting position points c 1 and c 2 as vertices. Tetrahedron τ 3 has coordinate points p 2 , p 3 , p 4 and shooting position point c 2 as vertices. Tetrahedron τ 1 and tetrahedron τ 2 have a common face f 1 . Tetrahedron τ 2 and tetrahedron τ 3 have a common face f 2 . FIG. 5 exemplarily shows an optional tetrahedral grid structure, and the embodiment of the present disclosure does not limit the tetrahedral quantity and positional relationship of the tetrahedral grid.
S13，基于能量函数最小化，确定每一个四面体τ的二元标签λ，其中二元标签λ用于表征四面体τ位于物体或场景的表面的内部或外部。S13, based on minimization of the energy function, determine the binary label λ of each tetrahedron τ, where the binary label λ is used to indicate whether the tetrahedron τ is located inside or outside the surface of the object or scene.
在本公开实施例中,重建物体表面的装置可以为上述四面体集合T中的每一个四面体τ生成一个二元标签λ∈{内,外},所有四面体τ的二元标签λ构成该四面体网格的二元标签集L={λ w,w=1,…M},M为四面体τ的数量。四面体τ的标签λ为“内”表征四面体τ位于场景表面的内部或位于场景内的物体表面的内部。四面体τ的标签λ为“外”则表征四面体τ位于场景表面的外部或位于场景内的物体表面的外部。相邻的两个四面体τ x和τ y,如果其中一个四面体τ x的标签λ x为“内”,另一个四面体τ y的标签λ y为“外”,则四面体τ x和τ y的公共面f将作为场景的表面或场景中的物体的表面。 In the embodiment of the present disclosure, the device for reconstructing the surface of an object can generate a binary label λ∈{inner, outer} for each tetrahedron τ in the above-mentioned tetrahedron set T, and the binary label λ of all tetrahedrons τ constitutes the The binary label set L={λ w , w=1, . . . M} of the tetrahedral grid, where M is the number of tetrahedrons τ. The label λ of the tetrahedron τ is "inner" to indicate that the tetrahedron τ is located inside the surface of the scene or inside the surface of an object in the scene. The label λ of the tetrahedron τ is "outside", which means that the tetrahedron τ is located outside the surface of the scene or outside the surface of objects in the scene. For two adjacent tetrahedrons τ x and τ y , if the label λ x of one tetrahedron τ x is "inside" and the label λ y of the other tetrahedron τ y is "outside", then the tetrahedron τ x and The common face f of τ y will be the surface of the scene or the surface of an object in the scene.
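The extraction rule above — a common face belongs to the reconstructed surface exactly when its two tetrahedra carry different labels — can be sketched directly. The identifiers follow Fig. 5 (τ1/τ2/τ3 and faces f1/f2), but the dictionary-based data layout is a hypothetical choice for illustration:

```python
def extract_surface(tetra_labels, shared_faces):
    """Select the common faces lying on the reconstructed surface.

    tetra_labels: dict mapping tetrahedron id -> "in" or "out".
    shared_faces: dict mapping face id -> the pair of tetrahedron ids
    sharing that face. A face belongs to the surface exactly when its
    two tetrahedra carry different labels.
    """
    return [f for f, (t1, t2) in shared_faces.items()
            if tetra_labels[t1] != tetra_labels[t2]]

# With the Fig. 5 topology (f1 between τ1/τ2, f2 between τ2/τ3) and a
# labeling where only τ3 is "in", the surface consists of f2 alone:
print(extract_surface(
    {"t1": "out", "t2": "out", "t3": "in"},
    {"f1": ("t1", "t2"), "f2": ("t2", "t3")},
))
# → ['f2']
```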
在本公开实施例中,二元标签λ可具有任何合适的标记,诸如二进制值、文本标记、标记编号、整数值、实数值等,对此本公开实施例不作限制。在一个示例中,该二元标签λ可为或0或1的集合。在一个示例中,0值可指示标签λ为“内”,1值可指示标签λ为“外”,反之亦然。In the disclosed embodiment, the binary label λ may have any suitable label, such as a binary value, a text label, a label number, an integer value, a real value, etc., and this disclosed embodiment is not limited. In one example, the binary label λ may be a set of either 0 or 1. In one example, a value of 0 may indicate that label λ is "in", a value of 1 may indicate that label λ is "out", and vice versa.
在本公开实施例中,重建物体表面的装置可以通过构建能量函数(也称代价函数),并且求解使该能量函数最小化的一个二元标签集L,来为每一个四面体τ生成二元标签λ。In the embodiment of the present disclosure, the device for reconstructing the surface of an object can generate a binary label set L for each tetrahedron τ by constructing an energy function (also called a cost function) and solving a binary label set L that minimizes the energy function. label lambda.
Energy function
In one embodiment, the energy function E_energy may refer to formula (1):
E_energy(T, F, L) = Σ_{τ∈T} E_first(τ, λ(τ)) + Σ_{f∈F, λ(τ)≠λ(τ′)} E_second(f, λ(τ), λ(τ′))    formula (1)
where λ(τ)≠λ(τ′), T is the set of tetrahedrons τ of the above tetrahedral mesh, F is the set of triangular faces f, λ(τ) is the binary label λ of the tetrahedron τ, and the triangular face f is the common face of the tetrahedrons τ and τ′.
As can be seen from the above formula, the energy function E_energy includes the sum of the first penalty terms E_first corresponding to the tetrahedrons τ and the sum of the second penalty terms E_second corresponding to the common faces f.
The first penalty term E_first is set based on whether a vertex of the tetrahedron τ is a shooting position point c_i or a coordinate point p other than a shooting position point c_i, while the second penalty term E_second is set based on the intersection relationship between lines of sight and the common face f. Here, a line of sight refers to the line from a coordinate point p in the above 3D dense point cloud P to a shooting position point c_i.
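As a concrete illustration of the structure of formula (1), the following sketch evaluates and brute-force minimizes this energy over a toy mesh of three tetrahedra. All penalty values here are made up for illustration; the embodiments solve the minimization with an s-t graph cut rather than enumeration.

```python
from itertools import product

# Toy mesh: 3 tetrahedra in a row; faces (0,1) and (1,2) are the common faces.
# E_first[t][label] is the unary penalty of tetrahedron t taking that label.
E_first = [
    {"inner": 0.0, "outer": 5.0},   # t0: strongly prefers "inner"
    {"inner": 1.0, "outer": 1.0},   # t1: undecided
    {"inner": 5.0, "outer": 0.0},   # t2: strongly prefers "outer"
]
# E_second[f] is the penalty paid when the two tetrahedra sharing face f
# receive different labels, i.e. when f becomes part of the surface.
E_second = {(0, 1): 3.0, (1, 2): 0.5}

def energy(labels):
    # Formula (1): unary terms over tetrahedra plus pairwise terms over
    # common faces whose incident tetrahedra carry different labels.
    e = sum(E_first[t][labels[t]] for t in range(len(labels)))
    e += sum(w for (a, b), w in E_second.items() if labels[a] != labels[b])
    return e

# Exhaustive minimization over all binary label sets (feasible only for
# tiny meshes).
best = min(product(["inner", "outer"], repeat=3), key=energy)
print(best, energy(best))  # the surface is the face between t1 and t2
```

The minimizer places the label change (and therefore the extracted surface) on the cheap face (1, 2) rather than the expensive face (0, 1).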
Calculation of the coefficients in the energy function
Refer to the lines of sight shown in FIG. 6A. In FIG. 6A, the black dots represent the vertices of the tetrahedrons τ in the tetrahedral mesh, and the thick solid lines represent triangular faces f of the tetrahedrons τ, where f includes f_1, f_2 and f_3. A vertex of a tetrahedron corresponds to a coordinate point p in the above 3D dense point cloud P. When the shooting position point c is visible from the coordinate point p, the line from the coordinate point p to the shooting position point c is called the line of sight p→c. The coordinate point p is called the start point of the line of sight p→c, and the shooting position point c is called the end point of the line of sight p→c. The distance from the start point of the line of sight p→c to its end point is called the length of the line of sight p→c. A line of sight p→c may intersect multiple common faces f. As shown in FIG. 6A, the line of sight p→c intersects the common faces f_1, f_2 and f_3 in the tetrahedral mesh.
It should be noted that the apparatus for reconstructing the surface of an object may construct at least one line of sight for each vertex in the tetrahedral mesh, as long as the vertex is not a shooting position point c. In the case where the coordinate point p corresponding to a vertex is visible from multiple shooting position points c, multiple lines of sight corresponding to the multiple shooting position points c may be constructed for the coordinate point p.
In the embodiment of the present disclosure, each coordinate point p in the 3D dense point cloud P is a sampling point on the surface of the scene or of an object, so the line of sight of a vertex should not pass through the surface of the reconstructed object. Accordingly, the apparatus for reconstructing the surface of an object may construct the first penalty term E_first of the energy function E_energy according to the relationship between lines of sight and tetrahedrons τ, and may construct the second penalty term E_second according to the relationship between lines of sight and common faces f.
In some embodiments, the apparatus for reconstructing the surface of an object may initialize the first penalty term E_first and the second penalty term E_second to zero, and then compute, for each line of sight in turn, the corresponding first penalty term E_first and second penalty term E_second.
Calculation of the first penalty term E_first
As shown in FIG. 6A, for a line of sight from the coordinate point p to the shooting position point c, τ_p is the tetrahedron τ in which the vertex p is located, and τ_c is the tetrahedron τ in which the shooting position point c is located.
In some embodiments, the coordinate point p may correspond to multiple tetrahedrons τ; exemplarily, τ_p is the tetrahedron τ through which the backward extension of the line of sight passes.
In some embodiments, the shooting position point c may correspond to multiple tetrahedrons τ; exemplarily, τ_c is a tetrahedron τ through which the line of sight passes.
In the embodiment of the present disclosure, τ_p is located behind the line of sight of the vertex p and is therefore more likely to belong to the inside of the scene. As shown in formula (2), the apparatus for reconstructing the surface of an object may add to τ_p a preset penalty coefficient α_v for the case where the binary label λ is "outer".
E_first(τ_p, outer) += α_v    formula (2)
where α_v is a positive number that can be set by the user as required; for example, α_v may be set to 1. By setting α_v, the possibility of τ_p being labeled "outer" can be reduced.
In the embodiment of the present disclosure, the tetrahedron τ_c in which the shooting position point c is located is more likely to belong to the outside of the scene. As shown in formula (3), the apparatus for reconstructing the surface of an object may add to τ_c a preset penalty coefficient α_v for the case where the label is "inner".
E_first(τ_c, inner) += α_v    formula (3)
By setting α_v, the possibility of τ_c being labeled "inner" can be reduced. The penalty coefficient α_v may also be replaced by other preset values, which is not limited in the embodiments of the present disclosure.
Calculation of the second penalty term E_second
Referring to FIG. 6A, the line of sight from the vertex p to the shooting position point c may pass through one or more common faces f. As described above, the line of sight of the vertex p should not pass through the reconstructed surface, so a second penalty value E_second(f) is added to each common face f that is passed through.
In some embodiments, as shown in formula (4), for a common face f_i, the second penalty value E_second(f_i) is the product of the grid density weight ω_d(f_i), the distance weight ω_v(f_i), the grid quality weight ω_q(f_i) and the preset penalty coefficient α_v.
E_second(f_i) += ω_d(f_i)·ω_v(f_i)·ω_q(f_i)·α_v    formula (4)
In some embodiments, the second penalty value E_second(f_i) may omit the distance weight ω_v(f_i) and/or the grid quality weight ω_q(f_i). In some embodiments, the preset penalty coefficient α_v is a positive number that can be set as required; for example, α_v may be set to 1.
Grid density weight ω_d(f_i)
In the embodiment of the present disclosure, the sum of the numbers of shooting position points c_i visible from each of the three vertices of the common face f_i is the number of shooting position points corresponding to the common face f_i; the larger the number of shooting position points corresponding to a common face, the smaller the grid density weight. For example, if the three vertices of the common face f can see 2, 1 and 3 shooting position points c_i respectively, the number of shooting position points c_i visible from the three vertices of the common face f is 2+1+3=6. The larger this sum, the smaller the grid density weight ω_d(f_i): a larger sum indicates that the vertices of the common face f_i appear in more images, so the common face f_i is more likely to belong to the scene or object surface.
In some embodiments, the larger the sum of the side lengths of the common face f_i, the larger the grid density weight ω_d(f_i). A larger sum of side lengths indicates that the mesh near the common face f_i is sparser, so the common face f_i is less likely to belong to the scene or object surface. The grid density weight ω_d(f_i) reflects the fact that common faces in denser parts of the mesh are closer to the real object surface.
In some embodiments, the grid density weight can be calculated by formula (5).
ω_d(f_i) = [formula (5); provided only as an image in the published text, it expresses ω_d(f_i) as a function of V(f_i), the grid density control factor η_d and the scale control quantity σ_d defined below, increasing with V(f_i)]
where V(f_i) denotes the value obtained by dividing the total side length of the common face f_i by the number of shooting position points corresponding to the common face f_i; the common face f_i includes three vertices, and the number of shooting position points corresponding to the common face f_i is the sum of the numbers of shooting position points c_i corresponding to each vertex. η_d is a grid density control factor used to control the influence of the grid density weight on the second penalty value. The value of η_d can be set as required, which is not limited in the embodiments of the present disclosure; for example, η_d may be set to 0.8. In the grid density weight formula, σ_d is a scale control quantity that can be used to make ω_d(f) dimensionless. For example, σ_d may be one quarter of the minimum value of V(f) over all common faces f_i, which is not limited in the embodiments of the present disclosure.
It can be understood that, since fewer shooting position points c are visible from noise points or outliers, the grid density weight ω_d(f_i) imposes a larger penalty on sparse meshes; in other words, through the grid density weight ω_d(f_i), a larger penalty can be imposed on noise points or outliers, thereby reducing their negative impact on details, retaining more details, and improving the accuracy of the reconstructed object surface.
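Although formula (5) itself appears only as an image in the published text, the quantity V(f) it depends on is fully specified above: the total side length of the face divided by the number of shooting position points corresponding to the face. A minimal sketch, with hypothetical vertex coordinates and per-vertex visibility sets:

```python
import math

# Hypothetical vertices of a common face f and, per vertex, the set of
# shooting position points from which that vertex is visible.
face = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
visible_cams = [{0, 1}, {2}, {0, 3, 4}]  # 2 + 1 + 3 = 6 position points

def perimeter(tri):
    # Total side length of the triangular face.
    d = math.dist
    return d(tri[0], tri[1]) + d(tri[1], tri[2]) + d(tri[2], tri[0])

def V(tri, cams_per_vertex):
    # V(f): total side length divided by the number of shooting position
    # points corresponding to the face (summed over the three vertices).
    n = sum(len(c) for c in cams_per_vertex)
    return perimeter(tri) / n

v = V(face, visible_cams)
print(v)
```

A sparser mesh (longer edges) or fewer visible shooting position points both drive V(f) up, which per the description above enlarges the density weight and hence the penalty.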
Distance weight ω_v(f_i)
In the embodiment of the present disclosure, the second penalty term E_second further includes the distance weight ω_v(f_i). The farther the intersection of the line of sight and the common face f_i is from the start point of the line of sight, the larger the distance weight ω_v(f_i).
As shown in FIG. 6A, each common face f_i crossed by the line of sight has an intersection with it. The farther this intersection is from the start point p of the line of sight, the larger the distance weight ω_v(f_i) of the corresponding common face. The distance weight ω_v(f_i) can be calculated by formula (6).
ω_v(f_i) = 1 − exp(−D(f_i)² / (2σ_v²))    formula (6)
where D(f_i) denotes the distance from the intersection of the common face f_i and the line of sight to the start point p of the line of sight, and σ_v is a mesh complexity constant that can be set according to actual needs, which is not limited in the embodiments of the present disclosure.
In some embodiments, considering that noise points or outliers may cause erroneous lines of sight to pass through fine structures, the embodiments of the present disclosure introduce a truncated distance coefficient into the distance weight ω_v(f_i). The apparatus for reconstructing the surface of an object determines the distance weight to be zero in the case where the ratio between the distance D(f_i) and the length of the line of sight is greater than a first threshold (i.e., the truncated distance coefficient).
In some embodiments, the apparatus for reconstructing the surface of an object may determine the distance weight to be zero in the case where the ratio between the distance D(f_i) and the length of the line of sight is greater than the first threshold, and the ratio V(f) between the perimeter of the common face f_i and the number of shooting position points (image acquisition points) corresponding to the common face f_i is greater than a second threshold. That is, by taking the mesh density into account together with the truncated distance coefficient, the probability of imposing unnecessary penalties on high-density mesh regions can be reduced and the accuracy of the imposed penalties improved.
In some embodiments, the truncated distance coefficient may be 1−S(P), where S(P) is used to characterize the uncertainty of the start point of the line of sight; S(P) can be calculated according to related methods.
In the embodiment of the present disclosure, the truncated distance coefficient may alternatively be a constant set as required, which is not limited in the embodiments of the present disclosure.
In some embodiments, the second threshold may be the scale control quantity σ_d, or another value set as required, which is not limited in the embodiments of the present disclosure.
Exemplarily, based on FIG. 6A, FIG. 6B shows a schematic visualization of the weights in the related art, and FIG. 6C shows an optional schematic visualization of the weights provided by an embodiment of the present disclosure. It can be seen that the mesh density at f_3 is high and f_3 is far from the coordinate point p, so f_3 is very likely a fine structure in space. In the related art, the accumulation of large weights on it causes details to be lost; in the embodiment of the present disclosure, the apparatus for reconstructing the surface of an object resets the weight here, as well as the weights at even greater distances from the coordinate point p, to 0, thereby reducing the probability of imposing unnecessary penalties on high-density mesh regions and improving the accuracy of the imposed penalties.
It can be understood that introducing the truncated distance coefficient reduces the influence of erroneous lines of sight caused by noise, increases the robustness of the surface reconstruction process, and effectively improves the detail reconstruction capability of the model.
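The truncation rule described above can be sketched as follows. The base weight form 1 − exp(−D²/(2σ_v²)) is an assumption chosen only to satisfy "farther intersection, larger weight" (the published formula (6) appears only as an image), and the parameters `trunc` and `sigma_d` stand in for the truncated distance coefficient and the second threshold:

```python
import math

def distance_weight(D, sight_len, V_f, sigma_v=1.0, trunc=0.8, sigma_d=0.25):
    """Distance weight with the truncation rule described above.

    The base form 1 - exp(-D^2 / (2*sigma_v^2)) is an assumption; `trunc`
    plays the role of the truncated distance coefficient and `sigma_d`
    the second (density) threshold.
    """
    # Truncate to zero when the intersection lies beyond the truncated
    # fraction of the line of sight AND the face is in a sparse region
    # (V(f) above the second threshold).
    if D / sight_len > trunc and V_f > sigma_d:
        return 0.0
    return 1.0 - math.exp(-D * D / (2.0 * sigma_v * sigma_v))

print(distance_weight(D=0.5, sight_len=1.0, V_f=0.5))  # kept: ratio below trunc
print(distance_weight(D=0.9, sight_len=1.0, V_f=0.5))  # truncated: sparse and far
print(distance_weight(D=0.9, sight_len=1.0, V_f=0.1))  # kept: dense region spared
```

The third call illustrates the point of combining both conditions: a far intersection in a high-density region is not penalized away.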
Grid quality weight ω_q(f_i)
In the embodiment of the present disclosure, the grid quality weight ω_q(f_i) is used to take the influence of the local mesh shape into account. Generally speaking, the better the shape of the mesh, the higher the mesh quality, the more reliable the results obtained with the mesh, and the smaller the grid quality weight ω_q(f_i).
Referring to FIG. 7, the common face f is the common face between the tetrahedron τ_1 and the tetrahedron τ_2. ω_q(f) can be calculated according to formula (7).
ω_q(f) = 1 − min(cos θ, cos φ)    formula (7)
where θ is the angle between the circumscribed sphere of the tetrahedron τ_1 and the common face f, and φ is the angle between the circumscribed sphere of the tetrahedron τ_2 and the common face f. The angle between a circumscribed sphere and the common face f can be defined as the line-plane angle between the common face f and the line connecting the center of the circumscribed sphere to any vertex of the common face.
The grid quality weight ω_q characterizes the influence of the relative angle between the two tetrahedrons τ_1 and τ_2. A smaller relative angle between the two tetrahedrons τ indicates that the local mesh has a better shape.
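The line-plane angle defined above, and one way of combining the two angles (denoted `theta` and `phi` here) into a quality weight that shrinks as the local mesh shape improves, can be sketched as follows. Since the published formula (7) appears only as an image, the combination 1 − min(cos θ, cos φ) is an assumption consistent with the surrounding description:

```python
import math

def line_plane_angle(center, face_vertex, plane_normal):
    # Line-plane angle between the line (sphere center -> face vertex)
    # and the plane of the common face, in radians.
    d = [fv - c for fv, c in zip(face_vertex, center)]
    norm_d = math.sqrt(sum(x * x for x in d))
    norm_n = math.sqrt(sum(x * x for x in plane_normal))
    cos_to_normal = abs(sum(a * b for a, b in zip(d, plane_normal))) / (norm_d * norm_n)
    return math.asin(min(1.0, cos_to_normal))

def quality_weight(theta, phi):
    # Assumed combination 1 - min(cos(theta), cos(phi)): small angles
    # (well-shaped local mesh) give a small weight, matching the text.
    return 1.0 - min(math.cos(theta), math.cos(phi))

# Hypothetical circumsphere centers on either side of the face z = 0,
# with (1, 0, 0) as one vertex of the common face.
theta = line_plane_angle((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
phi = line_plane_angle((0.0, 0.0, -0.2), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(theta, phi, quality_weight(theta, phi))
```

The weight is 0 when both circumsphere centers lie in the plane of the face and grows as either sphere tilts away from it.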
In some embodiments, for a common face f_i having a second penalty term E_second(f_i), the second penalty term E_second(f_i) may be initialized to 0 before the calculation of the coefficients of the energy function E_energy begins. For each line of sight intersecting the common face f_i, a corresponding value ω_d(f_i)ω_v(f_i)ω_q(f_i)α_v is accumulated into the second penalty term E_second(f_i).
In some embodiments, for each tetrahedron τ in the tetrahedral mesh, there is a first penalty term E_first(τ, inner) for the label "inner" and another first penalty term E_first(τ, outer) for the label "outer". The first penalty term E_first(τ, inner) represents the penalty for the tetrahedron τ being labeled as inside the scene, and the first penalty term E_first(τ, outer) represents the penalty for the tetrahedron τ being labeled as outside the scene.
In some embodiments, the first penalty terms E_first(τ) may be initialized to 0 before the calculation of the coefficients of the energy function E_energy begins. For each line of sight, in the case where the tetrahedron τ is the tetrahedron τ_c in which the shooting position point c of the line of sight is located, a corresponding preset penalty coefficient α_v may be accumulated into the first penalty term E_first(τ, inner). For each line of sight, in the case where the tetrahedron τ is the tetrahedron τ_p in which the start point p of the line of sight is located, a corresponding preset penalty coefficient α_v may be accumulated into the first penalty term E_first(τ, outer).
In the embodiment of the present disclosure, after obtaining formula (1), the apparatus for reconstructing the surface of an object may minimize the energy function E_energy to obtain a binary label set L. As can be seen from formula (1), the energy function E_energy(T, F, L) includes the sum of the first penalty terms E_first(τ, λ(τ)) corresponding to each tetrahedron τ and the sum of the second penalty terms E_second(f, λ(τ), λ(τ′)) corresponding to the common faces f of adjacent tetrahedrons τ with different labels λ. For any common face f_i, the value of E_second(f_i, λ(τ), λ(τ′)) is the E_second(f_i) described above.
Obviously, for different binary label sets, the labels λ of the tetrahedrons τ in the tetrahedral mesh may differ, the common faces f to which the second penalty terms E_second(f, λ(τ), λ(τ′)) correspond may differ, the sum of the second penalty terms E_second(f, λ(τ), λ(τ′)) may differ, and the value of the energy function E_energy(T, F, L) may differ. In this way, the apparatus for reconstructing the surface of an object can solve for a binary label set L that minimizes the energy function E_energy(T, F, L).
In some embodiments, the problem of minimizing the energy function E_energy(T, F, L) can be solved using the s-t graph cut method. Referring to FIG. 8, the method of minimizing the energy function E_energy(T, F, L) using the s-t graph cut method includes the following steps.
S41: map the tetrahedral mesh into a directed graph G.
In the embodiment of the present disclosure, each tetrahedron τ in the tetrahedral mesh can be mapped to a graph vertex ν of the directed graph G, and each common face f serves as an edge ζ between graph vertices ν of the directed graph G.
As shown in FIG. 9, the graph vertices ν_1, ν_2 and ν_3 in the directed graph G correspond to the tetrahedrons τ_1, τ_2 and τ_3 in FIG. 5, respectively, and the edges ζ_1 and ζ_2 in the directed graph G correspond to the common faces f_1 and f_2 in FIG. 5, respectively.
S42: add a virtual source s and a virtual sink t to the directed graph G, and map the first penalty terms E_first and the second penalty terms E_second to flow capacities on the edges ζ of the directed graph G.
In the embodiment of the present disclosure, the first penalty term E_first(τ, outer) of a tetrahedron is the flow capacity of the edge from the virtual source to the graph vertex corresponding to that tetrahedron, and the first penalty term E_first(τ, inner) of a tetrahedron is the flow capacity of the edge from the graph vertex corresponding to that tetrahedron to the virtual sink. Exemplarily, as shown in FIG. 9, the virtual source s is connected to all graph vertices ν_i by edges, and the virtual sink t is connected to all graph vertices ν_i by edges. The first penalty term E_first(τ_i, outer) of the tetrahedron τ_i serves as the flow capacity of the edge from the virtual source s to the graph vertex ν_i, and the first penalty term E_first(τ_i, inner) of the tetrahedron τ_i serves as the flow capacity of the edge from the graph vertex ν_i to the virtual sink t.
In the embodiment of the present disclosure, the second penalty term E_second(f_i) of the common face f_i of two different tetrahedrons τ and τ′ serves as the flow capacity of the edge between the graph vertices corresponding to τ and τ′.
Exemplarily, as shown in FIG. 9, the second penalty term E_second(f_1) of the common face f_1 between the tetrahedrons τ_1 and τ_2 is the flow capacity of the edge between ν_1 and ν_2.
It should be noted that, in the directed graph G, the flow between the tetrahedrons τ and τ′ is bidirectional: it may flow from τ to τ′ or from τ′ to τ. In the embodiment of the present disclosure, the flow capacities from τ to τ′ and from τ′ to τ are the same, both equal to the second penalty term E_second(f_i) of the common face f_i.
In the embodiment of the present disclosure, the flow capacity of an edge refers to the maximum flow allowed on that edge, also called the weight of the edge.
In the embodiment of the present disclosure, the directed graph G with the virtual source s and the virtual sink t added can be regarded as a network flow graph. In this network flow, only the virtual source s produces flow and only the virtual sink t receives flow; flow travels from the virtual source s through the graph vertices ν to the virtual sink t. For any graph vertex ν in the directed graph G other than the virtual source s and the virtual sink t, the net flow must be 0. That is, all flow from the virtual source s eventually reaches the virtual sink t through the edges of the directed graph G.
S43: calculate the binary label set L with the goal of maximizing the total flow from the virtual source s to the virtual sink t, where, in the calculation of the total flow, only the flows between pairs of graph vertices ν corresponding respectively to the inside and the outside of the object are summed.
In the embodiment of the present disclosure, for each binary label set, all graph vertices ν in the directed graph G are divided into two classes: the first class, the graph vertices labeled "outer", including the virtual source s; and the second class, the graph vertices labeled "inner", including the virtual sink t. The process of dividing all graph vertices ν into two classes containing the virtual source s and the virtual sink t respectively is called an s-t graph cut.
Since only the virtual source s produces flow, only the virtual sink t receives flow, and the net flow at every other graph vertex ν must be zero, the net flow of the network from the virtual source s to the virtual sink t equals the net flow on the edges ζ between graph vertices ν of the two classes. That is, the total flow is the sum of the flows between pairs of graph vertices corresponding to the inside and the outside of the object. By solving for the maximum flow in the network, the above s-t graph cut can be realized, and the required binary label set L is thereby obtained.
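Steps S41 to S43 can be sketched with a small Edmonds-Karp max-flow solver over a toy graph. All capacities below are made up for illustration (a production implementation would use an optimized min-cut solver); following the description above, graph vertices left on the source side of the minimum cut are labeled "outer" and the rest "inner":

```python
from collections import deque

def max_flow_min_cut(n, cap, s, t):
    """Edmonds-Karp max flow; returns (flow value, set of nodes on the
    source side of the minimum cut). cap[u][v] holds edge capacities."""
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # Augment along the path by its bottleneck residual capacity.
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    # Nodes still reachable from s in the residual graph form the source
    # side of the minimum cut.
    reach = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reach and cap[u][v] - flow[u][v] > 0:
                reach.add(v)
                q.append(v)
    return total, reach

# Toy graph: s=0, t=4; nodes 1..3 stand for tetrahedra tau_1..tau_3 in a row.
n, s, t = 5, 0, 4
cap = [[0] * n for _ in range(n)]
cap[s][1], cap[s][2], cap[s][3] = 5, 1, 0   # hypothetical source-side capacities
cap[1][t], cap[2][t], cap[3][t] = 0, 1, 5   # hypothetical sink-side capacities
cap[1][2] = cap[2][1] = 3                   # common face f_1, bidirectional
cap[2][3] = cap[3][2] = 1                   # common face f_2, bidirectional
value, source_side = max_flow_min_cut(n, cap, s, t)
labels = {i: ("outer" if i in source_side else "inner") for i in (1, 2, 3)}
print(value, labels)
```

The cut separates the graph where the capacities are cheapest, so the label change (and hence the extracted surface) falls on the low-capacity face f_2.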
S14: extract the common faces f between tetrahedrons τ with different binary labels λ, and reconstruct the object surface based on the common faces f.
In the embodiment of the present disclosure, the common faces f between adjacent tetrahedrons τ in the tetrahedral mesh whose labels λ are "inner" and "outer" respectively are extracted, and these common faces f are fused together as the reconstructed object surface.
In some embodiments of the present disclosure, the apparatus for reconstructing the surface of an object may perform enhancement processing on the extracted common faces f. Here, the enhancement processing may include color rendering of the reconstructed surface, etc., which is not limited in the embodiments of the present disclosure. In some embodiments of the present disclosure, the apparatus for reconstructing the surface of an object may also perform smoothing processing on the extracted common faces f.
Through the above steps, the apparatus for reconstructing the surface of an object can complete the reconstruction of the surface of an object in the scene or of the entire scene.
As shown in FIG. 10, an embodiment of the present disclosure further provides an apparatus 100 for reconstructing the surface of an object. The apparatus 100 includes: a dense point cloud generation module 110, a mesh generation module 120, a tetrahedron labeling module 130, and a surface extraction module 140.
The dense point cloud generation module 110 is configured to generate a corresponding dense point cloud using multiple images obtained by shooting a scene from different shooting position points. The mesh generation module 120 is configured to generate, based on the dense point cloud, a tetrahedral mesh corresponding to the dense point cloud. The tetrahedron labeling module 130 is configured to determine the binary label of each tetrahedron based on energy function minimization, where the binary label is used to indicate that the tetrahedron is located inside or outside the object surface. The surface extraction module 140 is configured to extract the common faces between tetrahedrons with different binary labels and reconstruct the object surface based on the common faces. The energy function includes the sum of the first penalty terms corresponding to each tetrahedron and the sum of the second penalty terms corresponding to each common face; the first penalty term is determined based on the binary label of the corresponding tetrahedron; the second penalty term includes a grid density weight; and the grid density weight is used to characterize the number of shooting position points corresponding to the common face.
在一些实施例中,所述四面体标记模块130,还被配置为基于能量函数最小化,为每一个所述四面体生成二元标签之前,在从所述密集点云中的每个坐标点到所述拍摄位置点的视线,与所述公共面的相交的情况下,为所述公共面设置所述第二惩罚项;其中,所述第二惩罚项为所述网格密度权重与预设惩罚系数的乘积。In some embodiments, the tetrahedron labeling module 130 is further configured to generate a binary label for each tetrahedron based on energy function minimization, before each coordinate point in the dense point cloud When the line of sight to the shooting location point intersects with the common surface, set the second penalty item for the common surface; wherein, the second penalty item is the grid density weight and the preset Set the product of penalty coefficients.
In some embodiments, the number of shooting positions corresponding to a common face includes the sum of the numbers of shooting positions corresponding to each of the three vertices of the common face.
In some embodiments, the larger the sum of the side lengths of a common face, the larger its mesh density weight.
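The text only states the monotonic behavior of the mesh density weight (it grows with the face's side-length sum and, per the claims, shrinks as the shooting-position count grows); the exact formula is not given. One plausible illustrative form, a pure assumption, divides the face perimeter by the total vertex shooting-position count:

```python
import math

def mesh_density_weight(verts, counts):
    """verts: three (x, y, z) vertices of a common face;
    counts: number of shooting positions associated with each vertex.
    Assumed formula: weight grows with the face perimeter and shrinks
    as the total shooting-position count grows, matching the stated
    monotonicities. Not the patented formula."""
    perimeter = sum(math.dist(verts[i], verts[(i + 1) % 3]) for i in range(3))
    return perimeter / max(sum(counts), 1)

tri = ((0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 4.0, 0.0))  # perimeter 12
print(mesh_density_weight(tri, (2, 2, 2)))  # 2.0
```

Under this form, a sparse face (few supporting shooting positions, long edges) receives a large weight and is therefore penalized more heavily, which is the behavior the text attributes to noise points and outliers.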
In some embodiments, the dense point cloud generation module 110 is further configured to perform feature point matching on the multiple images to obtain multiple common view points, where a common view point represents a point in the scene captured from each of the multiple shooting positions corresponding to the multiple images, and to derive the coordinate points of the dense point cloud from the common view points, thereby obtaining the dense point cloud.
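The text does not prescribe how a matched feature pair is turned into a 3D coordinate point. For illustration only, a standard construction takes the midpoint of the closest points between the two viewing rays; the function below is a self-contained sketch of that construction, not the disclosed implementation:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the closest points between two viewing rays, each
    given by an origin o (shooting position) and direction d. Returns
    None for near-parallel rays, where no stable point exists."""
    dot = lambda p, q: sum(x * y for x, y in zip(p, q))
    w0 = [x - y for x, y in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None
    s = (b * e - c * d) / denom          # parameter along ray 1
    t = (a * e - b * d) / denom          # parameter along ray 2
    p1 = [o + s * u for o, u in zip(o1, d1)]
    p2 = [o + t * u for o, u in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two rays that actually intersect recover the scene point exactly.
print(triangulate_midpoint((0, 0, 0), (1, 0, 1), (2, 0, 0), (-1, 0, 1)))  # [1.0, 0.0, 1.0]
```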
In some embodiments, the dense point cloud generation module 110 is further configured to, before the tetrahedral mesh corresponding to the dense point cloud is generated, query the source image of each coordinate point of the dense point cloud, determine from the source image the shooting position corresponding to that image, and add the shooting position to the dense point cloud to obtain an updated dense point cloud; the mesh generation module 120 is further configured to generate the tetrahedral mesh corresponding to the dense point cloud based on the updated dense point cloud.
In some embodiments, the mesh generation module 120 is further configured to generate the tetrahedral mesh based on the shooting positions and the coordinate points, where the line of sight from each coordinate point in the dense point cloud to its shooting position is determined based on the types of the vertices of the tetrahedra; a tetrahedron vertex is either the shooting position of a line of sight or the coordinate point of a line of sight.
In some embodiments, the second penalty term further includes a distance weight; the farther the intersection of the line of sight with the common face is from the starting point of the line of sight, the larger the distance weight.
In some embodiments, the distance weight is zero when the ratio of this distance to the length of the line of sight is greater than a first threshold and the ratio of the sum of the side lengths of the common face to the sum of the numbers of shooting positions corresponding to each of its three vertices is greater than a second threshold.
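An illustrative reading of the distance-weight rule is sketched below. The growth law (linear in the normalized distance), the default thresholds, and the base coefficient are all assumptions; only the monotonic increase and the two-condition zeroing rule come from the text:

```python
def distance_weight(dist, ray_len, perimeter, count_sum,
                    t1=0.9, t2=1.0, base=1.0):
    """dist: distance from the ray origin to the face intersection;
    ray_len: length of the line of sight; perimeter: side-length sum
    of the common face; count_sum: total shooting-position count of
    its three vertices. t1, t2, base are hypothetical defaults."""
    # Zero out the weight when both stated ratio conditions hold,
    # i.e. a far intersection on a sparsely supported face.
    if dist / ray_len > t1 and perimeter / max(count_sum, 1) > t2:
        return 0.0
    return base * (dist / ray_len)   # monotonically increasing in dist
```

For example, with `t1=0.9` and `t2=1.0`, an intersection at 95% of the ray on a face whose perimeter-to-count ratio is 2 gets weight 0, while one at 50% of the ray gets weight 0.5.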
In some embodiments, the tetrahedron labeling module 130 is further configured to map each tetrahedron to a node of a directed graph, with the common faces serving as the edges between nodes; to set a virtual source and a virtual sink in the directed graph, where the first penalty term is converted into the flow between a node and the virtual source or virtual sink, and the second penalty term is converted into the flow between nodes; and to compute the binary labels by maximizing the total flow from the virtual source to the virtual sink, where, in computing the total flow, only the flow between pairs of nodes corresponding respectively to the interior and the exterior of the object is summed.
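This graph construction is the standard s-t min-cut dual of the max-flow objective: after the flow is maximized, nodes still reachable from the source form one label class. A compact Edmonds-Karp sketch of that idea follows; it is illustrative only (dense capacity matrix, tiny graphs), not the patented implementation:

```python
from collections import deque

def min_cut_labels(n, source_cap, sink_cap, edges):
    """Label n tetrahedra 0 (inside) / 1 (outside) via an s-t min cut.
    source_cap[i]: capacity source->i; sink_cap[i]: capacity i->sink
    (the converted first penalty terms); edges: (i, j, cap) for common
    faces (the converted second penalty terms)."""
    S, T = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        cap[S][i] += source_cap[i]
        cap[i][T] += sink_cap[i]
    for i, j, c in edges:            # faces penalize a cut in either direction
        cap[i][j] += c
        cap[j][i] += c
    while True:                      # augment along BFS shortest paths
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in range(n + 2):
                if v not in parent and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        v, bottleneck = T, float("inf")
        while parent[v] is not None:           # find the bottleneck capacity
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = T
        while parent[v] is not None:           # push flow along the path
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
    reach = {S}                      # source side of the min cut = "inside"
    q = deque([S])
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if v not in reach and cap[u][v] > 1e-12:
                reach.add(v)
                q.append(v)
    return [0 if i in reach else 1 for i in range(n)]

# Two tetrahedra with opposite unary preferences and a cheap shared face:
print(min_cut_labels(2, [5.0, 0.0], [0.0, 5.0], [(0, 1, 1.0)]))  # [0, 1]
```

With a cheap face (capacity 1) the cut passes through the common face and the labels differ; raising the face capacity above the unary penalties forces both tetrahedra onto the same side.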
In some embodiments, the tetrahedra are Delaunay tetrahedra.
In some embodiments, the dense point cloud generation module 110 is further configured to generate depth point clouds corresponding to the multiple images as the dense point cloud.
The above method for reconstructing an object surface is generally implemented by an apparatus for reconstructing an object surface from multiple images of a scene, so an embodiment of the present disclosure further provides such an apparatus. Referring to FIG. 11, FIG. 11 is a schematic structural diagram of an embodiment of an apparatus 200 for reconstructing an object surface provided by an embodiment of the present disclosure. The apparatus 200 includes a processor 210 and a memory 220. A computer program is stored in the memory 220, and the processor 210 is configured to execute the computer program to implement the steps of the method for reconstructing an object surface described above.
The logic of the above method for reconstructing an object surface from multiple images of a scene is embodied as a computer program; if sold or used as an independent software product, the computer program may be stored in a computer storage medium, so an embodiment of the present disclosure further provides a computer storage medium. Referring to FIG. 12, FIG. 12 is a schematic structural diagram of an optional computer storage medium provided by an embodiment of the present disclosure. The computer storage medium 300 stores a computer program 310, which, when executed by a processor, implements the above method for reconstructing an object surface.
The computer storage medium 300 may be any medium capable of storing a computer program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. It may also be a server storing the computer program; the server may send the stored program to other devices for execution, or may run the stored program itself. Physically, the computer storage medium 300 may be a combination of multiple entities, such as multiple servers, a server plus a memory, or a memory plus a removable hard disk.
In summary, by constructing the energy function as a sum of first penalty terms corresponding to the tetrahedra and a sum of second penalty terms corresponding to the common faces, where the second penalty term includes a mesh density weight, the apparatus for reconstructing an object surface reduces the negative impact of noise points and outliers on the surface details of the reconstructed object and improves reconstruction accuracy.
The above describes only embodiments of the present disclosure and does not thereby limit the scope of its patent; any equivalent structure or equivalent process transformation made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present disclosure.
Industrial Applicability
In the embodiments of the present disclosure, because noise points and outliers correspond to fewer shooting positions, the mesh density weight imposes a larger penalty on sparse meshes; that is, the mesh density weight imposes a larger penalty on noise points and outliers, thereby reducing their negative impact on details, preserving more detail, and improving the accuracy of the reconstructed object surface.

Claims (18)

  1. A method for reconstructing an object surface, the method comprising:
    generating a corresponding dense point cloud from multiple images of a scene, wherein the multiple images are obtained by photographing the scene from multiple shooting positions;
    generating, based on the dense point cloud, a tetrahedral mesh corresponding to the dense point cloud;
    determining a binary label for each tetrahedron based on energy function minimization, wherein the binary label indicates whether the tetrahedron lies inside or outside the object surface; and
    extracting common faces between tetrahedra with different binary labels, and reconstructing the object surface based on the common faces,
    wherein the energy function comprises a sum of first penalty terms corresponding to each tetrahedron and a sum of second penalty terms corresponding to each common face; the first penalty term is determined based on the binary label of the corresponding tetrahedron; the second penalty term comprises a mesh density weight; and the mesh density weight characterizes the number of shooting positions corresponding to the common face.
  2. The method according to claim 1, wherein, before generating a binary label for each tetrahedron based on energy function minimization, the method further comprises:
    setting the second penalty term for a common face when the line of sight from a coordinate point in the dense point cloud to its shooting position intersects the common face, wherein the second penalty term is the product of the mesh density weight and a preset penalty coefficient.
  3. The method according to claim 2, wherein the number of shooting positions corresponding to the common face comprises a sum of the numbers of shooting positions corresponding to each of the three vertices of the common face.
  4. The method according to any one of claims 1-3, wherein the greater the number of shooting positions corresponding to the common face, the smaller the mesh density weight.
  5. The method according to any one of claims 1-3, wherein the greater the sum of the side lengths of the common face, the greater the mesh density weight.
  6. The method according to any one of claims 1-5, wherein generating a corresponding dense point cloud from the multiple images of the scene comprises:
    performing feature point matching on the multiple images to obtain multiple common view points, wherein a common view point represents a point in the scene captured from each of the multiple shooting positions corresponding to the multiple images; and
    deriving the coordinate points of the dense point cloud from the common view points, thereby obtaining the dense point cloud.
  7. The method according to any one of claims 1-6, wherein, before generating the tetrahedral mesh corresponding to the dense point cloud based on the dense point cloud, the method further comprises:
    querying the source image of each coordinate point of the dense point cloud;
    determining, from the source image, the shooting position corresponding to the source image; and
    adding the shooting position to the dense point cloud to obtain an updated dense point cloud;
    wherein generating the tetrahedral mesh corresponding to the dense point cloud based on the dense point cloud comprises:
    generating, based on the updated dense point cloud, the tetrahedral mesh corresponding to the dense point cloud.
  8. The method according to any one of claims 1-7, wherein generating the tetrahedral mesh corresponding to the dense point cloud based on the dense point cloud comprises:
    generating the tetrahedral mesh based on the shooting positions and the coordinate points,
    wherein the line of sight from each coordinate point in the dense point cloud to its shooting position is determined based on the types of the vertices of the tetrahedra, a tetrahedron vertex being either the shooting position of a line of sight or the coordinate point of a line of sight.
  9. The method according to any one of claims 2-8, wherein the second penalty term further comprises a distance weight, and the method further comprises:
    increasing the distance weight as the intersection of the line of sight with the common face lies farther from the starting point of the line of sight.
  10. The method according to claim 9, wherein the method further comprises:
    setting the distance weight to zero when the ratio of the distance to the length of the line of sight is greater than a first threshold and the ratio of the sum of the side lengths of the common face to the sum of the numbers of shooting positions corresponding to each of the three vertices of the common face is greater than a second threshold.
  11. The method according to claim 1, wherein generating a binary label for each tetrahedron based on energy function minimization comprises:
    mapping each tetrahedron to a node of a directed graph, with the common faces serving as the edges between nodes of the directed graph;
    setting a virtual source and a virtual sink in the directed graph, wherein the first penalty term is converted into the flow between a node and the virtual source or virtual sink, and the second penalty term is converted into the flow between nodes; and
    computing the binary labels by maximizing the total flow from the virtual source to the virtual sink, wherein, in computing the total flow, only the flow between pairs of nodes corresponding respectively to the interior and the exterior of the object is summed.
  12. The method according to claim 1, wherein the tetrahedra are Delaunay tetrahedra.
  13. The method according to claim 1, wherein the multiple images are color images of at least a part of the scene.
  14. The method according to claim 1, wherein generating a corresponding dense point cloud from the multiple images comprises:
    generating depth point clouds corresponding to the multiple images as the dense point cloud.
  15. An apparatus for reconstructing an object surface, the apparatus comprising:
    a dense point cloud generation module configured to generate a corresponding dense point cloud from multiple images of a scene, wherein the multiple images are obtained by photographing the scene from multiple shooting positions;
    a mesh generation module configured to generate, based on the dense point cloud, a tetrahedral mesh corresponding to the dense point cloud, wherein the vertices of each tetrahedron in the tetrahedral mesh are coordinate points of the dense point cloud;
    a tetrahedron labeling module configured to generate, based on energy function minimization, a binary label for each of the tetrahedra, wherein the binary label indicates whether the tetrahedron lies inside or outside the object surface; and
    a surface extraction module configured to extract common faces between tetrahedra with different binary labels and to reconstruct the object surface based on the common faces,
    wherein the energy function comprises a sum of first penalty terms corresponding to each tetrahedron and a sum of second penalty terms corresponding to each common face; the first penalty term is determined based on the binary label of the corresponding tetrahedron; the second penalty term comprises a mesh density weight; and the mesh density weight characterizes the number of shooting positions corresponding to the common face.
  16. An apparatus for reconstructing an object surface, the apparatus comprising a processor and a memory, wherein a computer program is stored in the memory, and the processor is configured to execute the computer program to implement the steps of the method according to any one of claims 1-14.
  17. A computer storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1-14.
  18. A computer program product comprising computer-readable code, the computer program product comprising a computer program or instructions which, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1-14.
PCT/CN2022/106501 2021-11-29 2022-07-19 Method and apparatus for reconstructing surface of object, and computer storage medium and computer program product WO2023093085A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111433346.7A CN114049466A (en) 2021-11-29 2021-11-29 Method, apparatus and computer storage medium for reconstructing a surface of an object
CN202111433346.7 2021-11-29

Publications (1)

Publication Number Publication Date
WO2023093085A1 true WO2023093085A1 (en) 2023-06-01

Family

ID=80211644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/106501 WO2023093085A1 (en) 2021-11-29 2022-07-19 Method and apparatus for reconstructing surface of object, and computer storage medium and computer program product

Country Status (2)

Country Link
CN (1) CN114049466A (en)
WO (1) WO2023093085A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049466A (en) * 2021-11-29 2022-02-15 浙江商汤科技开发有限公司 Method, apparatus and computer storage medium for reconstructing a surface of an object
CN114937124B (en) * 2022-07-25 2022-10-25 武汉大势智慧科技有限公司 Three-dimensional reconstruction method, device and equipment of sheet-shaped target object based on oblique photography

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150228114A1 (en) * 2014-02-13 2015-08-13 Microsoft Corporation Contour completion for augmenting surface reconstructions
CN107247834A (en) * 2017-05-31 2017-10-13 华中科技大学 A kind of three dimensional environmental model reconstructing method, equipment and system based on image recognition
CN109147038A (en) * 2018-08-21 2019-01-04 北京工业大学 Pipeline three-dimensional modeling method based on three-dimensional point cloud processing
CN113593037A (en) * 2021-07-29 2021-11-02 华中科技大学 Building method and application of Delaunay triangulated surface reconstruction model
CN114049466A (en) * 2021-11-29 2022-02-15 浙江商汤科技开发有限公司 Method, apparatus and computer storage medium for reconstructing a surface of an object

Also Published As

Publication number Publication date
CN114049466A (en) 2022-02-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22897181

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE