CN112489207A - Space-constrained dense matching point cloud plane element extraction method - Google Patents
- Publication number: CN112489207A
- Application number: CN202110167128.7A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- hyper
- plane
- matching point
- normal vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/005—Tree description, e.g. octree, quadtree
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Abstract
The invention discloses a space-constrained dense matching point cloud plane element extraction method, which comprises the following steps: obtaining a dense matching point cloud of a target object and dividing it into a plurality of hyper-voxels; determining the plane support region of each hyper-voxel by a region growing method, and constructing maximum plane support regions from the plane support regions; globally optimizing the normal vectors of the points in the point cloud according to the maximum plane support regions to obtain the normal vector information of the dense matching point cloud; and extracting the plane elements of the target object and repairing them according to the normal vector information. By constructing maximum plane support regions, performing global normal vector optimization based on them, and extracting and repairing plane elements according to the optimized normal vectors, the invention can effectively reduce the influence of noise, retain the structural characteristics of the object, and extract complete, continuous plane elements.
Description
Technical Field
The invention relates to the technical field of geographic information systems, in particular to a space-constrained dense matching point cloud plane element extraction method.
Background
Buildings are important ground objects in urban areas, and how to extract them from a scene has long been a focus of research at home and abroad. Oblique aerial images can observe building facades with little occlusion, and can therefore provide information about both the roof and the facades of a building.
The first step of structural building reconstruction is extracting building plane elements from the dense matching point cloud of the oblique aerial images. Existing plane element extraction methods fall into three main categories: clustering, region growing, and model fitting. In practical application, all three are easily affected by noise, so that extracted plane elements are incomplete and sharp features are missing or extracted incorrectly.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a space-constrained dense matching point cloud plane element extraction method, which aims to solve the problems that existing plane element extraction methods are easily influenced by noise, leading to incomplete plane element extraction and to sharp features being missing or extracted incorrectly.
The technical scheme adopted by the invention for solving the technical problem is as follows:
a method for extracting space-constrained dense matching point cloud plane elements comprises the following steps:
obtaining a dense matching point cloud of a target object, and dividing the dense matching point cloud into a plurality of hyper-voxels;
determining a plane support area of each hyper-voxel by adopting a region growing method, and constructing a maximum plane support area based on the plane support area;
performing global optimization on normal vectors of points in the point cloud according to the maximum plane support area to obtain normal vector information of the dense matching point cloud;
and extracting the plane element of the target object and repairing the plane element according to the normal vector information.
In the above method, the step of dividing the dense matching point cloud into a plurality of hyper-voxels comprises:
constructing an octree space index for the dense matching point cloud, and uniformly selecting a plurality of points from the octree space index as seed points;
and acquiring the feature vector of each point in the dense matching point cloud, and merging the leaf nodes of the octree spatial index according to the feature vectors to obtain a plurality of hyper-voxels.
In the above method, the step of dividing the dense matching point cloud into a plurality of hyper-voxels further comprises:
determining the eigenvalues of each hyper-voxel according to the singular value decomposition of its covariance matrix;
and screening out the hyper-voxels that cross a boundary according to the eigenvalues, and reassigning the points in the boundary-crossing hyper-voxels.
In the above method, the step of reassigning the points in the hyper-voxels crossing the boundary comprises:
calculating the arithmetic mean of the three-dimensional coordinates of all points in each hyper-voxel which does not cross the boundary to obtain the three-dimensional centroid of each hyper-voxel which does not cross the boundary;
determining, from the three-dimensional centroids, the neighboring hyper-voxel corresponding to each point of a hyper-voxel that crosses the boundary, and assigning the points of the boundary-crossing hyper-voxels to their corresponding neighboring hyper-voxels.
In the above method, the step of determining the plane support region of each hyper-voxel by a region growing method comprises:
acquiring the 3D shape feature and normal vector of each hyper-voxel;
and performing a neighborhood search with the K nearest neighbor algorithm according to the 3D shape features and the normal vectors, and determining the plane support region of each hyper-voxel by the region growing method.
In the above method, the step of constructing a maximum plane support region based on the plane support regions comprises:
performing mutual inclusion verification on the plane support regions of the hyper-voxels by a set intersection algorithm, and taking the plane support regions that pass the mutual inclusion verification as maximum plane support regions.
In the above method, the step of performing global optimization on the normal vectors of the points in the point cloud according to the maximum plane support region to acquire the normal vector information of the dense matching point cloud comprises:
determining a rotation vector of each hyper-voxel according to the maximum plane support area and a preset regularization function and a preset loss function;
and reorienting the normal vector of each hyper-voxel according to the rotation vector, and performing global optimization on the normal vectors of the points in the dense matching point cloud according to the reoriented normal vectors to obtain the normal vector information of the dense matching point cloud.
In the above method, the regularization function and the loss function are defined in terms of the following quantities: a set of neighboring hyper-voxels; a loss function; a subscript indexing each pair of neighboring hyper-voxels; a rotation vector, and the rotation matrix calculated from that rotation vector; the initial normal vector direction of each hyper-voxel; a weight parameter; the number of hyper-voxels in the maximum plane support region; the angle of rotation around a fixed axis; and a preset parameter.
In the above method, the step of extracting the plane element of the target object according to the normal vector information comprises:
taking the central point of the plane support region as a seed point, and obtaining the Euclidean distance and the normal vector similarity between the seed point and its neighborhood points according to the normal vector information;
and extracting the plane element of the target object according to the Euclidean distance and the normal vector similarity.
In the above method, the step of repairing the plane element according to the normal vector information comprises:
calculating an average normal vector of the plane primitive according to the normal vector information;
and carrying out reprojection on points on the plane element according to the average normal vector so as to repair the plane element.
An intelligent terminal, comprising: a processor, and a storage medium communicatively coupled to the processor, the storage medium being adapted to store a plurality of instructions; the processor is adapted to call the instructions in the storage medium to execute the steps of the space-constrained dense matching point cloud plane element extraction method described above.
A storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to perform the steps of the space-constrained dense matching point cloud plane element extraction method described above.
The invention has the following beneficial effects: by constructing maximum plane support regions, performing global normal vector optimization based on them, and extracting and repairing plane elements according to the optimized normal vectors, the invention can effectively reduce the influence of noise, retain the structural characteristics of the object, and extract complete, continuous plane elements.
Drawings
FIG. 1 is a flow chart of an embodiment of a spatially constrained dense matching point cloud planar primitive extraction method provided in embodiments of the present invention;
FIG. 2 is a 3D shape feature map of a hyper-voxel provided in an embodiment of the present invention;
FIG. 3 is a diagram of the normal vector angle difference between hyper-voxels provided in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a plane support region with a single-side inclusion relationship according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a plane support area with mutual inclusion relationship according to an embodiment of the present invention;
fig. 6 is a functional schematic diagram of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The space-constrained dense matching point cloud plane element extraction method can be applied to terminals. The terminal may be, but is not limited to, various personal computers, notebook computers, mobile phones, tablet computers, vehicle-mounted computers, and portable wearable devices. The terminal of the invention adopts a multi-core processor. The processor of the terminal may be at least one of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Video Processing Unit (VPU), and the like.
Exemplary method
Existing plane element extraction methods fall into three main categories: clustering, region growing, and model fitting. Clustering-based methods generally group points into clusters according to local point or surface features, but because of noise and complex surface structures it is difficult to segment the point data into continuous, complete planar structures. Unlike clustering, region growing divides the point cloud into more meaningful regions; its segmentation quality depends on the selection of seed points and the growing conditions, and although region growing is simple, easy to implement, and widely used, it is sensitive to noise and computationally expensive. Model fitting algorithms are generally applicable when the extracted planar surface can be defined mathematically, and are represented by Random Sample Consensus (RANSAC). RANSAC-based plane extraction estimates the parameters of a geometric primitive iteratively, and each iteration outputs the plane containing the largest number of points, which tends to produce redundant "false" faces. To prevent the detection of such false faces, RANSAC has been extended with local surface normal vectors or with region growing, but noise in the point cloud data still makes the plane extraction results unstable.
In order to solve the above problems, an embodiment of the present invention provides a space-constrained dense-matching point cloud planar primitive extraction method, and please refer to fig. 1, where fig. 1 is a flowchart of an embodiment of the space-constrained dense-matching point cloud planar primitive extraction method provided by the present invention.
In one embodiment of the invention, the method for extracting the spatially-constrained dense matching point cloud plane elements comprises four steps:
s100, obtaining dense matching point cloud of a target object, and dividing the dense matching point cloud into a plurality of hyper-voxels.
Specifically, the target object is a building, and its dense matching point cloud is obtained from oblique aerial images. The purpose of hyper-voxel segmentation is to over-segment the dense matching point cloud into hyper-voxels that are uniform in size and shape, mutually connected, and non-overlapping. After the dense matching point cloud of the target object is obtained, it is divided into a plurality of hyper-voxels, so that complete, continuous plane elements can be stably extracted from it in the subsequent steps.
In a specific embodiment, the step of segmenting the dense matching point cloud into a plurality of hyper-voxels in step S100 includes:
s110, constructing an octree spatial index for the dense matching point cloud, and uniformly selecting a plurality of points from the octree spatial index as seed points;
s120, obtaining a feature vector of each point in the dense matching point cloud, and combining the leaf points of the octree spatial index according to the feature vector to obtain a plurality of hyper-voxels.
Specifically, when segmenting the dense matching point cloud, this embodiment first constructs an octree spatial index for the point cloud, generates a 26-neighborhood connected graph in each 3 × 3 × 3 cube neighborhood, and uniformly selects a plurality of points from the octree spatial index as seed points. Then, taking the feature vector of each point in the dense matching point cloud as the similarity measure, it iteratively analyzes the degree of similarity between sibling nodes layer by layer from the bottom up, starting from the initial leaf nodes; if the similarity is greater than a preset threshold, the sibling nodes are merged, and merging continues until no mergeable sibling nodes remain, yielding a number of highly homogeneous hyper-voxels. The feature vector of a point in the dense matching point cloud combines its spatial position, its color information in the CIE uniform color space, and its initial normal vector in the dense matching point cloud.
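The feature construction and bottom-up merging above can be sketched as follows; the concatenated layout of the feature vector and the plain Euclidean similarity test are illustrative assumptions, since the original formula image is not preserved in this text:

```python
import numpy as np

def point_feature(xyz, lab, normal):
    """Feature vector of a point: spatial position, color in the CIE uniform
    color space, and the initial normal vector (assumed concatenation)."""
    return np.concatenate([xyz, lab, normal])

def siblings_mergeable(f_a, f_b, threshold=1.0):
    """Bottom-up octree merging test: sibling leaf nodes merge while their
    features are similar enough.  A Euclidean feature distance against a
    preset threshold is an assumed similarity measure."""
    return float(np.linalg.norm(f_a - f_b)) < threshold
```

In practice the three components would typically be weighted before comparison; the unweighted distance keeps the sketch minimal.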
In a specific embodiment, the step of segmenting the dense matching point cloud into a plurality of hyper-voxels in step S100 further includes:
s130, determining a characteristic value of each hyper-voxel according to singular value decomposition of a covariance matrix of a plurality of hyper-voxels;
s140, selecting the hyper-voxels crossing the boundary from the hyper-voxels according to the characteristic values, and redistributing the points in the hyper-voxels crossing the boundary.
After the hyper-voxel segmentation is completed, it cannot be guaranteed that every hyper-voxel has a planar structure: individual hyper-voxels may cross a plane boundary. Based on the plane hypothesis, the 3D shape features of the points can be used to measure the flatness of each hyper-voxel. To handle hyper-voxels that cross plane boundaries, this embodiment adds a hyper-voxel refinement step: the eigenvalues of each hyper-voxel are obtained from the singular value decomposition of the covariance matrix of its point set; then, according to these eigenvalues, the hyper-voxels that cross a boundary are screened out and their points are reassigned, resolving the problem of individual hyper-voxels crossing plane boundaries. The screening condition involves the three eigenvalues of a hyper-voxel together with a preset planarity threshold and a preset elongation threshold: the planarity threshold ensures that a retained hyper-voxel can be approximately fitted by a plane, and the elongation threshold prevents hyper-voxels from forming overly long and narrow strips.
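The eigenvalue screening above can be sketched as follows; the exact inequalities and threshold values are assumptions, chosen only to match the stated intent (approximately planar, not an overly long and narrow strip):

```python
import numpy as np

def supervoxel_eigenvalues(points):
    """Eigenvalues (descending) of the 3x3 covariance matrix of a point set,
    obtained via singular value decomposition of the symmetric matrix."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    # for a symmetric PSD matrix the singular values are the eigenvalues
    return np.linalg.svd(cov, compute_uv=False)

def crosses_boundary(points, t_planarity=0.05, t_elongation=0.05):
    """Screen a hyper-voxel: it should be flat (lam3 small relative to the
    total variance) but not a degenerate long, narrow strip (lam2 not tiny
    relative to lam1).  Both tests and thresholds are assumptions."""
    lam1, lam2, lam3 = supervoxel_eigenvalues(points)
    planar = lam3 / (lam1 + lam2 + lam3) < t_planarity
    not_strip = lam2 / lam1 > t_elongation
    return not (planar and not_strip)
```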
In a specific embodiment, the step of reassigning the points in the hyper-voxels crossing the boundary in step S140 includes:
S141, calculating the arithmetic mean of the three-dimensional coordinates of all points in each hyper-voxel not crossing the boundary to obtain the three-dimensional centroid of each hyper-voxel not crossing the boundary;
S142, determining the corresponding neighboring hyper-voxel for each point of a boundary-crossing hyper-voxel according to the three-dimensional centroids, and assigning the points of the boundary-crossing hyper-voxels to their corresponding neighboring hyper-voxels.
A hyper-voxel that crosses a plane boundary is split back into points, which are then reassigned to the corresponding neighboring hyper-voxels. Specifically, this embodiment first calculates the arithmetic mean of the three-dimensional coordinates of all points in each hyper-voxel that does not cross a boundary, obtaining the three-dimensional centroid of each such hyper-voxel. It then calculates the Euclidean distance between each point of a boundary-crossing hyper-voxel and the three-dimensional centroids of the non-crossing hyper-voxels, determines the neighboring hyper-voxel corresponding to each point according to these distances, and assigns each point to its corresponding neighboring hyper-voxel. This refines the hyper-voxels and ensures that every hyper-voxel segmented from the dense matching point cloud has a planar structure.
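A minimal sketch of the reassignment step, assuming plain nearest-centroid assignment by Euclidean distance as described:

```python
import numpy as np

def reassign_boundary_points(boundary_points, planar_voxels):
    """Split a boundary-crossing hyper-voxel back into points and assign each
    point to the planar hyper-voxel whose 3D centroid is nearest (Euclidean).
    planar_voxels: list of (n_i, 3) arrays, one per non-crossing hyper-voxel.
    Returns the index of the target hyper-voxel for every point."""
    centroids = np.array([v.mean(axis=0) for v in planar_voxels])
    # pairwise distances: (n_points, n_voxels)
    d = np.linalg.norm(boundary_points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```

A real implementation would restrict the candidates to spatially adjacent hyper-voxels; searching all centroids keeps the sketch short.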
S200, determining the plane support area of each hyper-voxel by adopting an area growing method, and constructing a maximum plane support area based on the plane support area.
Specifically, each hyper-voxel has its corresponding plane support region, and a maximum plane support region is a plane support region that has passed mutual inclusion verification. In this embodiment, after the point cloud is divided into a plurality of hyper-voxels, a region growing method is used to determine the plane support region of each hyper-voxel; mutual inclusion verification is then performed on these plane support regions to construct the maximum plane support regions, so that in the subsequent steps the normal vectors of the points in the point cloud can be globally optimized according to the maximum plane support regions, enabling accurate extraction of building plane elements and point cloud repair.
In a specific embodiment, the step of determining the planar support region of each of the voxels by using a region growing method in step S200 includes:
s210, obtaining the 3D shape characteristics and normal vectors of the hyper-voxels;
s220, performing neighborhood search by adopting a K nearest neighbor algorithm according to the 3D shape characteristics and the normal vector, and determining the plane support area of each hyper-voxel by using an area growing method.
To determine the plane support region of each hyper-voxel, this embodiment uses the 3D shape features and the normal vector angle differences of the hyper-voxels as the criteria for deciding which hyper-voxels a region contains. Fig. 2 illustrates the 3D shape feature of a hyper-voxel, and Fig. 3 the normal vector angle difference between hyper-voxels; here the plane support region corresponds to a plane fitted to its hyper-voxels by least squares, which approximates the true plane of the building. First, the 3D shape feature and normal vector of each hyper-voxel are obtained; then a neighborhood search is performed with the K nearest neighbor algorithm according to the 3D shape features and the normal vector angle differences, and the plane support region of each hyper-voxel is determined by the region growing method. The normal vector angle difference between hyper-voxels is defined as the angle between the normal vector of a hyper-voxel and the normal vector of the least-squares plane fitted to the support region. That is, for two hyper-voxels ci and cj, this embodiment uses the angle between the fitted plane and hyper-voxel cj rather than the angle between ci and cj directly, which effectively reduces the influence of noise.
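The angle criterion above, comparing a candidate hyper-voxel's normal against the normal of the least-squares plane of the whole region rather than against a single neighbor, can be sketched as follows (the 10-degree threshold is an illustrative assumption):

```python
import numpy as np

def ls_plane_normal(points):
    """Normal of the least-squares plane fitted to a point set: the singular
    vector associated with the smallest singular value of the centered data."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def accept_into_region(region_points, candidate_normal, max_angle_deg=10.0):
    """Region-growing criterion: the candidate hyper-voxel's unit normal is
    compared with the normal of the plane fitted to the whole region, which
    damps per-voxel normal noise.  abs() treats normals as unoriented."""
    n_plane = ls_plane_normal(region_points)
    c = min(1.0, abs(float(np.dot(n_plane, candidate_normal))))
    return np.degrees(np.arccos(c)) <= max_angle_deg
```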
In a specific embodiment, the step of constructing the maximum plane support area based on the plane support area in step S200 includes:
and S230, performing mutual inclusion relationship verification on the plane support areas of the hyper-voxels by adopting a set intersection algorithm, and taking the plane support areas subjected to mutual inclusion relationship verification as maximum plane support areas.
To further reduce cases where a plane support region Si crosses a plane boundary during region growing, after the plane support region of each hyper-voxel is determined, mutual inclusion verification is performed on the plane support regions using a set intersection algorithm over sorted arrays, and the plane support regions of the hyper-voxel pairs that pass the verification are taken as maximum plane support regions; the subsequent global normal vector optimization then considers only hyper-voxel pairs contained in each other's plane support regions. Figs. 4 and 5 show the structure of plane support regions with a single-side inclusion relationship and a mutual inclusion relationship, respectively: the points represent hyper-voxels and the rounded rectangles represent plane support regions. Fig. 4 represents single-side inclusion, i.e. cj belongs to Si but ci does not belong to Sj; Fig. 5 represents mutual inclusion, i.e. cj belongs to Si and ci belongs to Sj.
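The sorted-array set-intersection check for mutual inclusion can be sketched as follows (the data layout, a sorted list of hyper-voxel ids per support region, is an assumption):

```python
from bisect import bisect_left

def contains(sorted_ids, x):
    """Membership test in a sorted id array via binary search."""
    k = bisect_left(sorted_ids, x)
    return k < len(sorted_ids) and sorted_ids[k] == x

def mutually_inclusive(support, i, j):
    """support[k] is the sorted list of hyper-voxel ids in plane support
    region S_k.  Pair (i, j) passes verification when j is in S_i and
    i is in S_j (mutual inclusion, as in Fig. 5)."""
    return contains(support[i], j) and contains(support[j], i)
```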
S300, carrying out global optimization on the normal vectors of the points in the point cloud according to the maximum plane support area, and obtaining the normal vector information of the dense matching point cloud.
Although the maximum plane support regions can recover part of the global structure of the scene, in complex urban conditions mutually inclusive plane support regions may not intersect one another, which hinders recovery of the whole structure. Therefore, this embodiment performs global normal vector optimization over the maximum plane support regions; the purpose of the global optimization is to reorient the hyper-voxel normal vectors and obtain the normal vector information of the dense matching point cloud.
In a specific embodiment, the step S300 specifically includes:
s310, determining a rotation vector of each hyper-voxel according to the maximum plane support area and a preset regularization function and a preset loss function;
s320, reorienting the normal vector of each hyper-voxel according to the rotation vector, and performing global optimization on the normal vectors of the points in the dense matching point cloud according to the reoriented normal vectors to acquire the normal vector information of the dense matching point cloud.
When performing global optimization of the normal vectors, directly forcing the normal vectors to be equal may violate the unit-vector constraint during iterative least-squares optimization; the optimization therefore works indirectly through a rotation vector r = [r1, r2, r3]^T for each hyper-voxel normal. In this embodiment, a regularization function and a loss function for global normal vector optimization are preset, and the rotation vector corresponding to each hyper-voxel is determined from the maximum plane support region together with the preset regularization and loss functions. The normal vector of each hyper-voxel is then reoriented by its rotation vector, and the normal vectors of the points in the point cloud are globally optimized according to the reoriented normals, yielding accurate point cloud normal vector information for the subsequent plane element extraction.
In a specific embodiment, the regularization function and the loss function are defined in terms of the following quantities: a set of neighboring hyper-voxels; a loss function; a subscript indexing each pair of neighboring hyper-voxels; a rotation vector, and the rotation matrix calculated from that rotation vector; the initial normal vector direction of each hyper-voxel; a weight parameter; the number of hyper-voxels in the maximum plane support region; the angle of rotation around a fixed axis; and a preset parameter.
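The equation images themselves do not survive in this text. As a hedged reconstruction from the symbol legend above (an assumption, not the patent's verbatim formulas), the optimization can be read as a pairwise normal-alignment energy with a robust loss:

```latex
E(\{\mathbf r_i\}) \;=\; \sum_{(i,j)\in\mathcal N} \mu_{ij}\,
\rho\!\left( \bigl\lVert \mathbf R(\mathbf r_i)\,\mathbf n_i
 - \mathbf R(\mathbf r_j)\,\mathbf n_j \bigr\rVert \right),
\qquad
\rho(x) \;=\; \frac{x^{2}}{x^{2}+\tau^{2}},
```

where \(\mathcal N\) is the set of neighboring hyper-voxel pairs, \(\mathbf n_i\) the initial normal vector of hyper-voxel \(i\), \(\mathbf R(\mathbf r_i)\) the rotation matrix obtained from the rotation vector \(\mathbf r_i\) (a rotation by the angle \(\theta=\lVert\mathbf r_i\rVert\) around the fixed axis \(\mathbf r_i/\lVert\mathbf r_i\rVert\), via the Rodrigues formula), \(\mu_{ij}\) a weight parameter that could scale with the number of hyper-voxels in the shared maximum plane support region, and \(\tau\) the preset parameter of a robust, Geman–McClure-style loss.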
S400, extracting the plane element of the target object and repairing the plane element according to the normal vector information.
Normal vectors are essential information for plane element extraction. In this embodiment, after the normal vector information of the dense matching point cloud is obtained, continuous and complete building plane elements are extracted by the region growing method according to the normal vector information, and the extracted building plane elements are repaired according to the point cloud normal vector information, achieving accurate extraction of building plane elements and point cloud repair.
In a specific embodiment, the step of extracting the plane primitive of the target object according to the normal vector information in step S400 includes:
s410, taking the central point of the plane support area as a seed point, and acquiring the Euclidean distance and the normal vector similarity between the seed point and the field point thereof according to the normal vector information;
and S420, extracting a plane element of the target object according to the Euclidean distance and the normal vector similarity.
To extract a plane element, this embodiment first selects the central point of a plane support region as the seed point and, starting from the seed point, expands the facet element by the similarity of adjacent objects, where the adjacent objects comprise three types from coarse to fine: neighboring plane support regions, neighboring hyper-voxels, and neighboring single points. During expansion, the Euclidean distance, which determines spatial connectivity, and the normal vector angle difference are used as the discrimination criteria: if the distance between the seed point and a neighborhood point is less than a preset distance threshold, the seed point and that neighborhood point are considered spatially connected. To ensure smoothness of the extracted plane elements, coplanar points should share a similar normal vector: if the angle between the seed point's normal vector and that of its neighborhood point is less than a preset angle threshold, the seed point and the neighborhood point are considered to lie on a smooth plane.
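The two growing criteria can be sketched as follows; the threshold values are illustrative assumptions:

```python
import numpy as np

def spatially_connected(p_seed, p_nb, dist_threshold):
    """Euclidean-distance test for spatial connectivity."""
    return float(np.linalg.norm(p_seed - p_nb)) < dist_threshold

def smooth_coplanar(n_seed, n_nb, angle_threshold_deg):
    """Coplanar points should share a similar (unoriented) unit normal."""
    c = min(1.0, abs(float(np.dot(n_seed, n_nb))))
    return np.degrees(np.arccos(c)) < angle_threshold_deg

def belongs_to_primitive(p_seed, n_seed, p_nb, n_nb, t_d=0.2, t_a=10.0):
    """A neighborhood point joins the plane element only when it passes both
    the distance test and the normal-similarity test."""
    return (spatially_connected(p_seed, p_nb, t_d)
            and smooth_coplanar(n_seed, n_nb, t_a))
```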
In a specific embodiment, the step of repairing the planar primitive according to the normal vector information in step S400 includes:
s430, calculating an average normal vector of the plane primitive according to the normal vector information;
s440, carrying out reprojection on the points on the plane element according to the average normal vector so as to repair the plane element.
In order to ensure the planarity of the extracted planar primitives, in this embodiment, after the planar primitives are extracted, the average normal vector of each extracted planar primitive is calculated by using the optimized normal vector information and the planar primitive extraction result, and the points on each planar primitive are re-projected according to the average normal vector to repair the primitive. The re-projection of a point can be written as p' = p − ((p − q)·n)n, where q is a point on the plane, p is the point to be corrected, and n is the unit normal vector of the plane.
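The repair step amounts to orthogonally projecting each point onto the fitted plane. A minimal Python sketch, assuming the plane is represented by one of its points and the average normal (names are illustrative):

```python
import numpy as np

def reproject_to_plane(p, q, n):
    """Orthogonally project point p onto the plane that passes through
    point q with normal n:  p' = p - ((p - q) . n_hat) n_hat."""
    p, q, n = (np.asarray(v, dtype=float) for v in (p, q, n))
    n = n / np.linalg.norm(n)          # use the unit normal
    return p - np.dot(p - q, n) * n
```

Applying this to every point of a primitive flattens residual noise perpendicular to the plane while leaving the in-plane coordinates untouched.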
Exemplary device
Based on the above embodiments, the present invention further provides an intelligent terminal, a schematic block diagram of which may be as shown in fig. 6. The intelligent terminal comprises a processor, a memory, a network interface, a display screen and a temperature sensor which are connected through a system bus. The processor of the intelligent terminal provides computation and control capability. The memory of the intelligent terminal comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the intelligent terminal is used for connecting and communicating with external terminals through a network. The computer program is executed by the processor to implement the space-constrained dense matching point cloud plane primitive extraction method. The display screen of the intelligent terminal may be a liquid crystal display or an electronic ink display, and the temperature sensor of the intelligent terminal is arranged inside the device in advance to detect the current operating temperature of the internal components.
It will be understood by those skilled in the art that the block diagram of fig. 6 is only a block diagram of a portion of the structure associated with the inventive arrangements and does not limit the terminal to which the inventive arrangements are applied; a particular intelligent terminal may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, an intelligent terminal is provided, which includes a memory and a processor; the memory stores a computer program, and the processor, when executing the computer program, implements at least the following steps:
obtaining a dense matching point cloud of a target object, and dividing the dense matching point cloud into a plurality of hyper-voxels;
determining a plane support area of each hyper-voxel by adopting a region growing method, and constructing a maximum plane support area based on the plane support area;
performing global optimization on normal vectors of points in the point cloud according to the maximum plane support area to obtain normal vector information of the dense matching point cloud;
and extracting the plane element of the target object and repairing the plane element according to the normal vector information.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the invention discloses a space-constrained dense matching point cloud plane primitive extraction method, which comprises the following steps: obtaining a dense matching point cloud of a target object, and dividing the dense matching point cloud into a plurality of hyper-voxels; determining a plane support area of each hyper-voxel by a region growing method, and constructing a maximum plane support area based on the plane support areas; performing global optimization on the normal vectors of the points in the point cloud according to the maximum plane support area to obtain the normal vector information of the dense matching point cloud; and extracting the plane primitives of the target object and repairing them according to the normal vector information. By constructing the maximum plane support area, performing global normal vector optimization based on it, and extracting and repairing plane primitives according to the optimized normal vectors, the invention can effectively reduce the influence of noise, retain the structural characteristics of the object, and extract complete and continuous plane primitives.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.
Claims (10)
1. A space-constrained dense matching point cloud plane element extraction method is characterized by comprising the following steps:
obtaining a dense matching point cloud of a target object, and dividing the dense matching point cloud into a plurality of hyper-voxels;
determining a plane support area of each hyper-voxel by adopting a region growing method, and constructing a maximum plane support area based on the plane support area;
performing global optimization on normal vectors of points in the point cloud according to the maximum plane support area to obtain normal vector information of the dense matching point cloud;
and extracting the plane element of the target object and repairing the plane element according to the normal vector information.
2. The spatially constrained dense matching point cloud planar primitive extraction method of claim 1, wherein said step of segmenting said dense matching point cloud into hyper-voxels comprises:
constructing an octree space index for the dense matching point cloud, and uniformly selecting a plurality of points from the octree space index as seed points;
and acquiring the feature vector of each point in the dense matching point cloud, and combining the leaf points of the octree spatial index according to the feature vector to obtain a plurality of hyper-voxels.
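As a rough illustration of the segmentation step, the following Python sketch buckets points into axis-aligned leaf cells and picks one seed point per occupied cell. This is a simplified, uniform stand-in for the octree spatial index of the claim (a real octree would subdivide adaptively), and `leaf_size` is an assumed parameter:

```python
import numpy as np

def voxel_buckets(points, leaf_size=0.5):
    """Group points into axis-aligned cubic leaf cells and pick one seed
    point per occupied cell; a simplified stand-in for octree leaves."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / leaf_size).astype(int)
    buckets = {}
    for pt, key in zip(pts, map(tuple, keys)):
        buckets.setdefault(key, []).append(pt)
    # One (uniformly spread) seed per occupied leaf cell.
    seeds = [cell[0] for cell in buckets.values()]
    return buckets, seeds
```

Growing hyper-voxels from such uniformly spread seeds, by merging leaf points with similar feature vectors, is what the claim describes at a high level.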
3. The spatially constrained dense matching point cloud planar primitive extraction method of claim 1, wherein said step of segmenting said dense matching point cloud into hyper-voxels further comprises:
determining a characteristic value of each hyper-voxel according to singular value decomposition of a covariance matrix of a plurality of hyper-voxels;
and screening out the hyper-voxels crossing the boundary from the hyper-voxels according to the characteristic value, and reallocating points in the hyper-voxels crossing the boundary.
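The eigenvalue screening can be sketched as follows: the eigenvalues of a hyper-voxel's covariance matrix (equivalently obtainable from the singular values of the centered point matrix) measure planarity, and a hyper-voxel whose smallest eigenvalue is large relative to the others is a candidate for crossing a boundary. The ratio threshold below is an assumed illustrative value:

```python
import numpy as np

def covariance_eigenvalues(points):
    """Eigenvalues (descending) of the covariance matrix of a
    hyper-voxel's points."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    return np.linalg.eigvalsh(cov)[::-1]

def crosses_boundary(points, ratio_th=0.05):
    """Flag a hyper-voxel as boundary-crossing when its smallest
    eigenvalue is large relative to its largest, i.e. the points are
    far from planar. ratio_th is an assumed illustrative threshold."""
    vals = covariance_eigenvalues(points)
    return bool(vals[-1] / vals[0] > ratio_th)
```

A planar hyper-voxel has a near-zero smallest eigenvalue, while one straddling two surfaces spreads its points in all three directions.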
4. The spatially-constrained dense-matching point cloud planar primitive extraction method of claim 3, wherein said step of reassigning points in said hyper-voxels that cross a boundary comprises:
calculating the arithmetic mean of the three-dimensional coordinates of all points in each hyper-voxel which does not cross the boundary to obtain the three-dimensional centroid of each hyper-voxel which does not cross the boundary;
determining, according to the three-dimensional centroids, a corresponding neighboring hyper-voxel for each hyper-voxel that crosses the boundary, and assigning the points of the hyper-voxels that cross the boundary to their corresponding neighboring hyper-voxels.
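A minimal Python sketch of this reassignment, using hypothetical data structures (a dict mapping a hyper-voxel id to its point list) for illustration:

```python
import numpy as np

def reassign_points(boundary_points, neighbor_voxels):
    """Assign each point of a boundary-crossing hyper-voxel to the
    neighboring hyper-voxel whose three-dimensional centroid (the
    arithmetic mean of its points' coordinates) is closest."""
    centroids = {vid: np.mean(np.asarray(pts, dtype=float), axis=0)
                 for vid, pts in neighbor_voxels.items()}
    assignment = {}
    for i, p in enumerate(np.asarray(boundary_points, dtype=float)):
        nearest = min(centroids, key=lambda v: np.linalg.norm(p - centroids[v]))
        assignment[i] = nearest
    return assignment
```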
5. The spatially-constrained dense-matching point cloud planar primitive extraction method of claim 1, wherein said step of determining a planar support region for each of said hyper-voxels using a region growing method comprises:
acquiring the 3D shape feature and normal vector of each hyper-voxel;
and performing neighborhood search by adopting a K nearest neighbor algorithm according to the 3D shape characteristics and the normal vector, and determining a plane support area of each hyper-voxel by using an area growing method.
6. The spatially-constrained dense-matching point cloud planar primitive extraction method of claim 1, wherein said step of constructing a maximum planar support region based on said planar support region comprises:
and performing mutual inclusion relation verification on the plane support areas of the hyper-voxels by adopting a set intersection algorithm, and taking the plane support areas that pass the mutual inclusion relation verification as the maximum plane support areas.
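The mutual inclusion verification can be sketched with plain set operations; representing each plane support region as a set of point indices is an assumption for illustration:

```python
def maximum_support_regions(regions):
    """Keep only the plane support regions (each a set of point indices)
    that are not properly contained in any other region; the survivors
    play the role of maximum plane support regions."""
    keep = []
    for i, region in enumerate(regions):
        contained = any(i != j and region < other   # '<' = proper subset
                        for j, other in enumerate(regions))
        if not contained:
            keep.append(region)
    return keep
```

Checking `region < other` (proper subset) for every pair is the set-intersection test: a region swallowed by a larger one cannot be maximal.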
7. The spatially-constrained dense-matching point cloud planar primitive extraction method according to claim 1, wherein the step of performing global optimization on normal vectors of points in the point cloud according to the maximum planar support area to obtain normal vector information of the dense-matching point cloud comprises:
determining a rotation vector of each hyper-voxel according to the maximum plane support area and a preset regularization function and a preset loss function;
and reorienting the normal vector of each hyper-voxel according to the rotation vector, and performing global optimization on the normal vectors of the points in the dense matching point cloud according to the reoriented normal vectors to obtain the normal vector information of the dense matching point cloud.
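Reorienting a normal by a rotation vector can be sketched with the Rodrigues formula; this is the generic axis-angle rotation, not the specific optimization that produces the rotation vectors:

```python
import numpy as np

def rotation_matrix(r):
    """Rotation matrix from a rotation vector r = axis * angle,
    via the Rodrigues formula."""
    r = np.asarray(r, dtype=float)
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)               # zero rotation
    k = r / theta                      # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def reorient(normal, r):
    """Reorient a hyper-voxel's initial normal vector by the rotation
    vector produced by the global optimization."""
    return rotation_matrix(r) @ np.asarray(normal, dtype=float)
```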
8. The spatially constrained dense matching point cloud planar primitive extraction method of claim 7, wherein the formula of the regularization function is:
the formula of the loss function is:
wherein the quantities in the above formulas denote, respectively: the set of neighboring hyper-voxel pairs; the loss function, whose corner mark indexes a pair of neighboring hyper-voxels; the rotation vector; the rotation matrix calculated from the rotation vector; the initial normal vector direction of the hyper-voxel; the weight parameter; the number of hyper-voxels in the maximum planar support region; the angle of rotation around a fixed axis; and a preset parameter.
9. The spatially constrained dense matching point cloud planar primitive extraction method of claim 1, wherein said extracting the planar primitive of the target object according to the normal vector information comprises:
taking the central point of the plane support area as a seed point, and acquiring the Euclidean distance and the normal vector similarity between the seed point and its neighborhood points according to the normal vector information;
and extracting the plane element of the target object according to the Euclidean distance and the normal vector similarity.
10. The spatially-constrained dense-matching point cloud planar primitive extraction method of claim 1, wherein the repairing the planar primitive according to the normal vector information comprises:
calculating an average normal vector of the plane primitive according to the normal vector information;
and carrying out reprojection on points on the plane element according to the average normal vector so as to repair the plane element.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110167128.7A CN112489207B (en) | 2021-02-07 | 2021-02-07 | Space-constrained dense matching point cloud plane element extraction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112489207A true CN112489207A (en) | 2021-03-12 |
CN112489207B CN112489207B (en) | 2021-07-13 |
Family
ID=74912441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110167128.7A Active CN112489207B (en) | 2021-02-07 | 2021-02-07 | Space-constrained dense matching point cloud plane element extraction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112489207B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113223067A (en) * | 2021-05-08 | 2021-08-06 | 东莞市三姆森光电科技有限公司 | Online real-time registration method for three-dimensional scanning point cloud with plane reference and incomplete |
CN114821013A (en) * | 2022-07-01 | 2022-07-29 | 深圳大学 | Element detection method and device based on point cloud data and computer equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109658431A (en) * | 2018-12-26 | 2019-04-19 | 中国科学院大学 | Rock mass point cloud plane extracting method based on region growing |
CN110009726A (en) * | 2019-03-08 | 2019-07-12 | 浙江中海达空间信息技术有限公司 | A method of according to the structural relation between plane primitive to data reduction plane |
CN110189400A (en) * | 2019-05-20 | 2019-08-30 | 深圳大学 | A kind of three-dimensional rebuilding method, three-dimensional reconstruction system, mobile terminal and storage device |
CN110264416A (en) * | 2019-05-28 | 2019-09-20 | 深圳大学 | Sparse point cloud segmentation method and device |
CN111222516A (en) * | 2020-01-06 | 2020-06-02 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Method for extracting key outline characteristics of point cloud of printed circuit board |
CN111932688A (en) * | 2020-09-10 | 2020-11-13 | 深圳大学 | Indoor plane element extraction method, system and equipment based on three-dimensional point cloud |
Also Published As
Publication number | Publication date |
---|---|
CN112489207B (en) | 2021-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11209837B2 (en) | Method and device for generating a model of a to-be reconstructed area and an unmanned aerial vehicle flight trajectory | |
WO2016106950A1 (en) | Zonal underground structure detection method based on sun illumination and shade compensation | |
CN112489207B (en) | Space-constrained dense matching point cloud plane element extraction method | |
WO2010042466A1 (en) | Apparatus and method for classifying point cloud data based on principal axes | |
CN112164145B (en) | Method for rapidly extracting indoor three-dimensional line segment structure based on point cloud data | |
CN114782499A (en) | Image static area extraction method and device based on optical flow and view geometric constraint | |
Wendel et al. | Unsupervised facade segmentation using repetitive patterns | |
Zhou et al. | Three-dimensional (3D) reconstruction of structures and landscapes: a new point-and-line fusion method | |
CN112396701A (en) | Satellite image processing method and device, electronic equipment and computer storage medium | |
CN112396133A (en) | Multi-scale space-based urban area air-ground integrated fusion point cloud classification method | |
CN107220996A (en) | A kind of unmanned plane linear array consistent based on three-legged structure and face battle array image matching method | |
CN113192174A (en) | Mapping method and device and computer storage medium | |
Park et al. | Segmentation of Lidar data using multilevel cube code | |
CN113409332B (en) | Building plane segmentation method based on three-dimensional point cloud | |
Dimiccoli et al. | Exploiting t-junctions for depth segregation in single images | |
JPWO2015151553A1 (en) | Change detection support device, change detection support method, and program | |
CN117011658A (en) | Image processing method, apparatus, device, storage medium, and computer program product | |
Zhang et al. | Building façade element extraction based on multidimensional virtual semantic feature map ensemble learning and hierarchical clustering | |
CN116721230A (en) | Method, device, equipment and storage medium for constructing three-dimensional live-action model | |
CN112767424B (en) | Automatic subdivision method based on indoor three-dimensional point cloud space | |
CN116310899A (en) | YOLOv 5-based improved target detection method and device and training method | |
CN116128919A (en) | Multi-temporal image abnormal target detection method and system based on polar constraint | |
Lee et al. | Determination of building model key points using multidirectional shaded relief images generated from airborne LiDAR data | |
CN105405147B (en) | A kind of Algorism of Matching Line Segments method based on fusion LBP and gray feature description | |
Huang et al. | A multiview stereo algorithm based on image segmentation guided generation of planar prior for textureless regions of artificial scenes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||