CN110827302A - Point cloud target extraction method and device based on depth map convolutional network

Point cloud target extraction method and device based on depth map convolutional network

Info

Publication number
CN110827302A
Authority
CN
China
Prior art keywords
super
point
depth map
points
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911114195.1A
Other languages
Chinese (zh)
Inventor
刘启亮
杨柳
邓敏
刘文凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201911114195.1A
Publication of CN110827302A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a point cloud target extraction method and device based on a depth map convolutional network, wherein the method comprises the following steps: dividing the urban scene point cloud data into a plurality of super points; obtaining local features of each of the plurality of super points; constructing a space topological graph according to the topological connection relation among the multiple super points; constructing a depth map convolution network based on the spatial topological graph; and obtaining labels of each point in each super point according to the depth map convolution network and the local characteristics of each super point in the plurality of super points. The invention can improve the efficiency and accuracy of target extraction.

Description

Point cloud target extraction method and device based on depth map convolutional network
Technical Field
The invention relates to the technical field of spatial data processing, in particular to a point cloud target extraction method and device based on a depth map convolutional network.
Background
The development of three-dimensional scanners, represented by lidar scanning systems, has led to explosive growth in three-dimensional data. Urban vehicle-mounted laser point clouds can quickly acquire accurate three-dimensional information about buildings, vehicles, trees, road auxiliary facilities and other objects on both sides of an urban road, and have become an important means of acquiring urban spatial data. Extracting multiple types of urban targets from vehicle-mounted laser point clouds therefore has important application value in fields such as digital cities, transportation, urban planning and basic surveying and mapping.
Vehicle-mounted laser point cloud data are characterized by diverse targets, large data volume and uneven point density, which poses great challenges for target extraction. Current point cloud target extraction methods fall mainly into two categories: (1) clustering methods based on local geometric features of the point cloud; (2) machine learning based methods. Clustering methods based on local geometric features manually compute geometric attributes hidden in the point cloud, such as normal vectors and principal directions, and then cluster the points according to the similarity or difference of these attributes to extract the final ground objects. Machine learning based methods aim to learn point cloud features automatically from labeled samples in order to extract targets and to reduce the influence of manually set parameters or prior knowledge. They mainly comprise fully supervised deep learning methods and semi-supervised learning methods. Fully supervised deep learning methods come in two main forms: some scholars convert the irregular, unordered point cloud data into regular three-dimensional data or into two-dimensional images in different directions and then extract targets with a convolutional neural network; other scholars directly modify the convolutional neural network to cope with the inconsistent input order of point clouds, rotation invariance and similar issues. Fully supervised deep learning requires a large amount of labeled data and trains inefficiently. To reduce the labeling of training data, some scholars adopt semi-supervised learning to extract targets, but these methods struggle to extract targets accurately and are also inefficient.
Disclosure of Invention
The invention provides a point cloud target extraction method and device based on a depth map convolutional network, and aims to solve the problems of low target extraction accuracy and efficiency.
In order to achieve the above object, an embodiment of the present invention provides a point cloud target extraction method based on a depth map convolutional network, including:
dividing the urban scene point cloud data into a plurality of super points;
obtaining local features of each of the plurality of super points;
constructing a space topological graph according to the topological connection relation among the multiple super points;
constructing a depth map convolution network based on the spatial topological graph;
and obtaining labels of each point in each super point according to the depth map convolution network and the local characteristics of each super point in the plurality of super points.
Wherein the local feature of each of the plurality of super points comprises: the normal vector of the super point, the principal direction of the super point, the height of the super point, the eigenvalue attribute of the super point and the geometry attribute of the super point.
Wherein the plurality of super points correspond to a plurality of nodes in the spatial topological graph one to one.
Wherein the step of constructing a depth map convolution network based on the spatial topological graph comprises:
constructing the depth map convolution network by the formula H^(l+1) = H^(l) + GCN(H^(l)); wherein H^(l+1) represents the output of the (l+1)-th layer of the depth map convolutional network, H^(l) represents the output of the l-th layer of the depth map convolutional network, and GCN(·) represents the semi-supervised graph convolution.
Wherein the step of obtaining labels for each point within each of the plurality of super points according to the depth map convolution network and the local features of each of the plurality of super points comprises:
and respectively aiming at each of the multiple super points, obtaining the label of each point in the super point by taking the local feature of the super point as the input feature of the depth map convolution network.
The embodiment of the invention also provides a point cloud target extraction device based on the depth map convolutional network, which comprises:
the system comprises a dividing module, a processing module and a processing module, wherein the dividing module is used for dividing urban scene point cloud data into a plurality of super points;
an obtaining module configured to obtain a local feature of each of the plurality of super points;
the first construction module is used for constructing a space topological graph according to the topological connection relation among the multiple super points;
the second construction module is used for constructing a depth map convolution network based on the spatial topological graph;
and the extraction module is used for obtaining the label of each point in each super point according to the depth map convolution network and the local characteristic of each super point in the plurality of super points.
Wherein the second construction module is specifically configured to construct the depth map convolution network by the formula H^(l+1) = H^(l) + GCN(H^(l)); wherein H^(l+1) represents the output of the (l+1)-th layer of the depth map convolutional network, H^(l) represents the output of the l-th layer of the depth map convolutional network, and GCN(·) represents the semi-supervised graph convolution.
The extraction module is specifically configured to, for each of the plurality of super points, obtain a label of each point in the super point by using a local feature of the super point as an input feature of the depth map convolution network.
The embodiment of the invention also provides a point cloud target extraction device based on the depth map convolutional network, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor implements the steps of the point cloud target extraction method based on the depth map convolutional network when executing the computer program.
Embodiments of the present invention further provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the steps of the above-mentioned point cloud target extraction method based on a depth map convolutional network are implemented.
The scheme of the invention has at least the following beneficial effects:
in the embodiment of the invention, the urban scene point cloud data is divided into a plurality of super points, the local characteristics of each super point are obtained, then a space topological graph is constructed according to the topological connection relation among the super points, a depth map convolution network is constructed based on the space topological graph, and finally the label of each point in each super point is obtained according to the depth map convolution network and the local characteristics of each super point. The point cloud super-point is used as a unit for extracting the target, so that on one hand, the training efficiency of the graph convolution network can be improved, the efficiency of extracting the target can be improved, on the other hand, richer local features can be introduced, and the accuracy of extracting the target can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
FIG. 1 is a flow chart of a point cloud target extraction method based on a depth map convolutional network according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a point cloud target extraction device based on a depth map convolutional network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a point cloud target extraction device based on a depth map convolutional network according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
As shown in fig. 1, an embodiment of the present invention provides a point cloud target extraction method based on a depth map convolutional network, where the method includes:
and step 11, dividing the urban scene point cloud data into a plurality of super points.
In the embodiment of the invention, clustering can be performed based on the spatial positions and color features of the points, dividing the urban scene point cloud data into a series of smaller point clusters, namely super points. The specific division process is as follows. First, a suitable voxel size R1 is selected to divide the three-dimensional space into small grids, and a supervoxel resolution R2 is set (R2 is much larger than R1); initial seed points are placed at intervals of R2, each seed being the voxel closest to the center of its R2-sized partition. According to the adjacency graph, the algorithm diffuses outward from each seed point to the neighboring voxels, computes the distance from each voxel to the supervoxel centers, finds the minimum distance, and assigns the voxel to that supervoxel. The center point of each supervoxel is continuously updated and the process is repeated. The distance here is a comprehensive distance determined jointly by the spatial distance, the color distance and the normal distance, and can be calculated by the formula
D = sqrt( λ·D_color^2/m^2 + μ·D_space^2 + γ·D_normal^2 )
wherein D represents the comprehensive distance, D_color represents the color distance (the Euclidean distance in the RGB three-dimensional color space), D_space represents the spatial distance (the Euclidean distance in XYZ three-dimensional space), D_normal represents the normal distance, m is a constant used to normalize the color distance, and λ, μ and γ represent the contributions of the color distance, the spatial distance and the normal distance, respectively, which can be set according to actual requirements. After all voxels have been merged, the super points are obtained; each super point is a point set with irregular shape and size.
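For illustration only, the comprehensive distance described above can be sketched in Python as follows; the function name, the dictionary layout and the default weights (which mirror the λ = 0, μ = 0.8, γ = 0.2 setting used in the example later in this description) are assumptions rather than part of the patented method.

import numpy as np

def combined_distance(voxel, center, lam=0.0, mu=0.8, gamma=0.2, m=255.0):
    """Comprehensive distance between a voxel and a supervoxel center.

    voxel, center: dicts with 'xyz', 'rgb' and 'normal' numpy arrays of shape (3,).
    lam, mu, gamma: contributions of the color, spatial and normal distances.
    m: constant normalizing the color distance (assumed value for 8-bit RGB).
    """
    d_color = np.linalg.norm(voxel["rgb"] - center["rgb"])        # Euclidean distance in RGB space
    d_space = np.linalg.norm(voxel["xyz"] - center["xyz"])        # Euclidean distance in XYZ space
    d_normal = np.linalg.norm(voxel["normal"] - center["normal"]) # difference of unit normal vectors
    return np.sqrt(lam * (d_color / m) ** 2 + mu * d_space ** 2 + gamma * d_normal ** 2)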
Step 12, acquiring local features of each of the plurality of super points.
In an embodiment of the present invention, the local feature of each of the plurality of super points includes: the normal vector of the super point, the principal direction of the super point, the height of the super point, the eigenvalue attribute of the super point and the geometry attribute of the super point.
It should be noted that, in the embodiment of the present invention, the super points obtained in step 11 have different geometric characteristics because they differ in shape, size, physical meaning and so on. In addition, large-scale urban point cloud scenes contain many types of ground objects, complex spatial structures and severe mutual occlusion, so feature calculation suffers from considerable noise interference; point-based features in particular are strongly disturbed by noise and are expensive to compute. Features calculated on point clusters are therefore more robust under noise interference. The features computed by the present invention include normal vectors, principal directions, geometry attributes, eigenvalue attributes and coordinate-based features.
The normal vector and the principal direction reflect, to a certain extent, the layout and trend of the points in the point cloud, and they are relatively stable descriptors. Typically, the normal vector of a building facade is parallel to the ground while its principal direction is perpendicular to the ground; the principal direction of pole-like ground objects such as street lamps and tree trunks is perpendicular to the ground.
The geometry attribute of a super point indicates its shape, such as linear, planar or scattered. A covariance matrix is constructed from the points within the super point,
C = (1/n) · Σ_{i=1..n} (p_i − p̄)(p_i − p̄)^T,
and its three eigenvalues λ1, λ2 and λ3 (λ1 > λ2 > λ3) are obtained, where n represents the number of points in the super point, p_i represents the vector of a point, and p̄ represents the three-dimensional centroid of the super point. A shape index V is then calculated from the three eigenvalues: if V is 0, the super point is linear and is denoted by [1, 0, 0]; if V is 1, the super point is planar and is denoted by [0, 1, 0]; if V is 2, the super point is scattered and is denoted by [0, 0, 1].
The eigenvalues of the covariance matrix formed by the points in the super point are further used to construct the eigenvalue attribute, which is represented by a six-dimensional vector. The geometric meanings of the six components are, respectively, the variance structure tensor, the anisotropy structure tensor, the planar structure tensor, the sphericity structure tensor, the eigenentropy structure tensor and the linear structure tensor.
The coordinates of the super point reflect the position of its points in the scene, and the z value of the coordinates reflects the height of the super point.
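A minimal sketch of per-super-point feature computation along the lines described above; the shape index is approximated here with common dimensionality ratios, and the function name and returned feature layout are illustrative assumptions, not the patent's exact six-dimensional eigenvalue attribute.

import numpy as np

def superpoint_local_features(points):
    """points: (n, 3) array with the XYZ coordinates of one super point (n >= 3)."""
    centroid = points.mean(axis=0)
    diffs = points - centroid
    cov = diffs.T @ diffs / len(points)        # 3x3 covariance matrix of the super point
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    l3, l2, l1 = eigvals                       # relabel so that l1 >= l2 >= l3
    normal = eigvecs[:, 0]                     # eigenvector of the smallest eigenvalue
    principal = eigvecs[:, 2]                  # eigenvector of the largest eigenvalue
    # One-hot shape code [linear, planar, scattered], picked from dimensionality ratios
    # (an assumed stand-in for the shape index V of the description).
    eps = 1e-12
    linearity = (l1 - l2) / (l1 + eps)
    planarity = (l2 - l3) / (l1 + eps)
    sphericity = l3 / (l1 + eps)
    shape = np.eye(3)[int(np.argmax([linearity, planarity, sphericity]))]
    height = centroid[2]                       # z value of the centroid as the height feature
    return np.concatenate([normal, principal, [height], eigvals[::-1], shape])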
Step 13, constructing a space topological graph according to the topological connection relation among the multiple super points.
It should be noted that urban scene point cloud data form an unordered point structure in three-dimensional space, and even after aggregation into clusters the super points remain unordered in three-dimensional space, with rich topological information hidden in this unordered spatial structure. In the embodiment of the invention, the graph is constructed from the spatial topological relations. In the graph, each super point is regarded as a node rather than a set of points, i.e., the super points correspond one-to-one to the nodes in the spatial topological graph. If the points of two super points are topologically adjacent, the corresponding super points are marked as adjacent, that is, the nodes corresponding to the two super points are marked as adjacent, thereby obtaining the spatial topological graph structure.
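One plausible way to realize this adjacency is sketched below, linking two super points whenever any pair of their points lies within a given radius; the function name, the radius parameter and the use of a k-d tree are assumptions for illustration only.

import numpy as np
from scipy.spatial import cKDTree

def build_superpoint_graph(points, sp_index, radius=0.3):
    """points: (N, 3) coordinates; sp_index: (N,) super point id of each point.

    Two super points become adjacent when any of their points lie within `radius`.
    Returns a set of undirected edges between super point ids.
    """
    tree = cKDTree(points)
    edges = set()
    for i, j in tree.query_pairs(r=radius):    # all point pairs closer than `radius`
        a, b = int(sp_index[i]), int(sp_index[j])
        if a != b:                             # points from different super points -> edge
            edges.add((min(a, b), max(a, b)))
    return edges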
Step 14, constructing a depth map convolution network based on the spatial topological graph.
In the embodiment of the present invention, the depth map convolution network can specifically be constructed by the formula H^(l+1) = H^(l) + GCN(H^(l)), wherein H^(l+1) represents the output of the (l+1)-th layer of the depth map convolutional network, H^(l) represents the output of the l-th layer of the depth map convolutional network, and GCN(·) represents the semi-supervised graph convolution.
It is worth mentioning that, since the original graph convolutional network has only two to three layers, residual connections are introduced into the graph convolution to learn the features better. Graph convolution handles irregular spatial data structures well, so the topological information and geographical position information hidden among the super points are used more effectively, the available information in the data is mined more fully, and the utilization rate of the data increases. In addition, the residual connections effectively avoid the vanishing-gradient phenomenon of the network. The deepened network can extract more salient features, which benefits the final ground-object extraction, namely the target extraction.
The principle of the graph convolution is as follows. Through the connectivity of the graph, graph convolution can fully consider the local correlation of objects; for example, for two nodes joined by an edge, the features of the two nodes are propagated to and influence each other during convolution. According to the definition of the convolution of functions, the convolution introduced on the graph is written as g_θ * x = F^(-1)[F(g_θ) · F(x)], wherein F(·) denotes the Fourier transform on the graph; according to the definition of the graph Fourier transform, F(x) = U^T · x, wherein U denotes the eigenvectors obtained by the eigendecomposition of the graph Laplacian matrix (U is an orthogonal matrix, U·U^T = U·U^(-1) = I). In the embodiment of the invention, the regularized Laplacian matrix is used, defined as L = I_N − D^(-1/2)·A·D^(-1/2), wherein A is the adjacency matrix (A ∈ R^(N×N)) and D, with D_ii = Σ_j A_ij, is the degree matrix of the vertices, whose diagonal elements are in turn the degrees of each vertex. The graph convolution considering only the first-order neighborhood is defined as g * x = U·g·U^T·x. In the embodiment of the invention, a second-order neighborhood is considered when performing the convolution, and the convolution formula is deformed accordingly, wherein w ∈ R^N is the convolution parameter in the Fourier domain and I_N represents the self-connections of the nodes in the graph. It should be noted that this is the graph convolution commonly used at present, so its principle is not described in further detail herein.
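A minimal PyTorch sketch of one residual layer H^(l+1) = H^(l) + GCN(H^(l)), assuming a precomputed normalized adjacency with self-loops (for example D^(-1/2)(A + I)D^(-1/2)); the layer width, activation and stacking depth are illustrative choices, not specified by the patent.

import torch
import torch.nn as nn

class ResidualGCNLayer(nn.Module):
    """One layer of H^(l+1) = H^(l) + GCN(H^(l)) on a fixed graph."""

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim, bias=False)

    def forward(self, h, a_norm):
        # h: (num_nodes, dim) node features; a_norm: (num_nodes, num_nodes)
        # normalized adjacency matrix with self-loops.
        gcn_out = torch.relu(a_norm @ self.linear(h))  # one graph-convolution step
        return h + gcn_out                             # residual connection

# Usage sketch: stack layers and classify every node (super point).
# layers = nn.ModuleList([ResidualGCNLayer(64) for _ in range(8)])
# classifier = nn.Linear(64, num_classes)
# for layer in layers:
#     h = layer(h, a_norm)
# logits = classifier(h)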
Step 15, obtaining labels of each point in each super point according to the depth map convolution network and the local features of each super point in the plurality of super points.
In an embodiment of the present invention, specifically, for each of the plurality of super points, the local feature of the super point is used as the input feature of the depth map convolution network, so as to obtain the label of each point in the super point. That is, for each of the super points, the labels of the points in the super point can be obtained by using the local feature of the super point as the input feature of the depth map convolution network. The label may be a name of a ground object, such as a building, an automobile, etc.
In the training process, points are aggregated in the form of super points, and in the graph convolution each super point is regarded as a node. To ensure the accuracy of the result, the prediction for each super point is mapped back to the individual points at test time to obtain a point-based result, so that every point is assigned a label.
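Mapping the super point predictions back to individual points, as described above, can be sketched as follows; the array names are illustrative assumptions.

import numpy as np

def labels_per_point(sp_index, sp_prediction):
    """sp_index: (N,) super point id of each point.
    sp_prediction: (num_superpoints,) predicted class of each super point.
    Returns a (N,) array with one class label per original point."""
    return np.asarray(sp_prediction)[np.asarray(sp_index)]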
It is worth mentioning that, in the embodiment of the present invention, the urban scene point cloud data is divided into a plurality of super points, the local feature of each super point is obtained, then a spatial topological graph is constructed according to the topological connection relationship between the plurality of super points, a depth map convolution network is constructed based on the spatial topological graph, and finally the label of each point in each super point is obtained according to the depth map convolution network and the local feature of each super point. The point cloud super-point is used as a unit for extracting the target, so that on one hand, the training efficiency of the graph convolution network can be improved, the efficiency of extracting the target can be improved, on the other hand, richer local features can be introduced, and the accuracy of extracting the target can be improved.
Next, the above point cloud target extraction method is further described with an example. In this example, the Oakland point cloud data set is used to illustrate the embodiment of the present invention. The data were acquired by vehicle-mounted laser scanning around Carnegie Mellon University, using the Navlab11 platform equipped with a side-looking SICK LMS laser scanner operated in push-broom mode. The data provide x, y, z three-dimensional coordinates and corresponding label information. In this data set, ground objects are divided into four categories: pole-like ground objects, building facades, linear ground objects and tree crowns, and the point cloud scene is visualized according to the labels. In this example, since the data carry no color information, the contribution of the color distance is set to 0, the contribution of the spatial distance to 0.8 and the contribution of the normal distance to 0.2 during super point segmentation. The initial input features of the network are chosen as the principal direction, normal vector, geometry attribute and eigenvalue attribute. Using the above point cloud target extraction method, the extraction accuracy of each type of ground object on the Oakland data set is shown in Table 1. In Table 1, Scatter_misc represents scattered points such as tree crowns, Default_wire represents linear objects such as electric wires, Utility_pole represents pole-like ground objects such as tree trunks and utility poles, Load_bearing represents the ground surface, and facade represents building facades.
TABLE 1
As shown in fig. 2, an embodiment of the present invention further provides a point cloud target extraction apparatus based on a depth map convolutional network, including: a dividing module 21, an obtaining module 22, a first building module 23, a second building module 24 and an extracting module 25.
The dividing module 21 is configured to divide the urban scene point cloud data into a plurality of super points.
An obtaining module 22, configured to obtain a local feature of each of the plurality of super points.
A first constructing module 23, configured to construct a spatial topological graph according to the topological connection relationship among the plurality of super points.
And a second constructing module 24, configured to construct a depth map convolution network based on the spatial topology map.
Wherein the second constructing module 24 is specifically configured to construct the depth map convolution network by the formula H^(l+1) = H^(l) + GCN(H^(l)); wherein H^(l+1) represents the output of the (l+1)-th layer of the depth map convolutional network, H^(l) represents the output of the l-th layer of the depth map convolutional network, and GCN(·) represents the semi-supervised graph convolution.
And the extraction module 25 is configured to obtain a label of each point in each of the plurality of the super points according to the depth map convolution network and the local feature of each of the plurality of the super points.
The extracting module 25 is specifically configured to, for each of the plurality of super points, obtain the label of each point in the super point by using the local feature of the super point as the input feature of the depth map convolution network.
In the embodiment of the present invention, the depth map convolutional network-based point cloud target extraction device 20 is a device corresponding to the above-mentioned depth map convolutional network-based point cloud target extraction method, and can improve the efficiency and accuracy of target extraction.
It should be noted that the depth map convolutional network-based point cloud target extraction device 20 includes all modules or units for implementing the above-described depth map convolutional network-based point cloud target extraction method, and in order to avoid too many repetitions, details of each module or unit of the depth map convolutional network-based point cloud target extraction device 20 are not described here.
As shown in fig. 3, an embodiment of the present invention further provides a point cloud object extraction apparatus based on a depth map convolutional network, including a memory 31, a processor 32, and a computer program 33 stored in the memory 31 and executable on the processor 32, where the processor 32 implements the steps of the point cloud object extraction method based on a depth map convolutional network when executing the computer program 33.
That is, in the embodiment of the present invention, when the processor 32 of the depth map convolutional network-based point cloud target extraction device 30 executes the computer program 33, the steps of the depth map convolutional network-based point cloud target extraction method described above are implemented, so that the efficiency and accuracy of target extraction can be improved.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the point cloud target extraction method based on the depth map convolutional network.
That is, in an embodiment of the present invention, when being executed by a processor, a computer program of a computer-readable storage medium implements the steps of the above-mentioned point cloud target extraction method based on a depth map convolutional network, which can improve the efficiency and accuracy of target extraction.
Illustratively, the computer program of the computer-readable storage medium comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, and the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A point cloud target extraction method based on a depth map convolutional network is characterized by comprising the following steps:
dividing the urban scene point cloud data into a plurality of super points;
obtaining local features of each of the plurality of super points;
constructing a space topological graph according to the topological connection relation among the multiple super points;
constructing a depth map convolution network based on the spatial topological graph;
and obtaining labels of each point in each super point according to the depth map convolution network and the local characteristics of each super point in the plurality of super points.
2. The method of claim 1, wherein the local feature of each of the plurality of super points comprises: the normal vector of the super point, the principal direction of the super point, the height of the super point, the eigenvalue attribute of the super point and the geometry attribute of the super point.
3. The method of claim 1, wherein the plurality of super points correspond one-to-one to a plurality of nodes in the spatial topological graph.
4. The method of claim 3,
the step of constructing a depth map convolution network based on the spatial topological graph comprises the following steps:
by the formula Hl+1=Hl+GCN(Hl) Constructing a depth map convolution network; wherein Hl+1Representing the output of the l +1 th layer of the depth map convolutional network, HlRepresents the output of the l-th layer of the depth map convolutional network, and GCN () represents the semi-supervised map convolution.
5. The method of claim 1, wherein said step of obtaining labels for points within each of the plurality of super points based on the depth map convolution network and local features of each of the plurality of super points comprises:
and respectively aiming at each of the multiple super points, obtaining the label of each point in the super point by taking the local feature of the super point as the input feature of the depth map convolution network.
6. A point cloud target extraction device based on a depth map convolutional network is characterized by comprising:
a dividing module, configured to divide urban scene point cloud data into a plurality of super points;
an obtaining module configured to obtain a local feature of each of the plurality of super points;
the first construction module is used for constructing a space topological graph according to the topological connection relation among the multiple super points;
the second construction module is used for constructing a depth map convolution network based on the spatial topological graph;
and the extraction module is used for obtaining the label of each point in each super point according to the depth map convolution network and the local characteristic of each super point in the plurality of super points.
7. The apparatus of claim 6,
the second building block is specifically configured to pass formula Hl+1=Hl+GCN(Hl) Constructing a depth map convolution network; wherein Hl+1Representing the output of the l +1 th layer of the depth map convolutional network, HlRepresents the output of the l-th layer of the depth map convolutional network, and GCN () represents the semi-supervised map convolution.
8. The apparatus of claim 6,
the extraction module is specifically configured to, for each of the plurality of super points, obtain a label of each point in the super point by using a local feature of the super point as an input feature of the depth map convolution network.
9. A depth map convolutional network-based point cloud target extraction device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the depth map convolutional network-based point cloud target extraction method of any one of claims 1 to 5.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for depth map convolutional network-based point cloud target extraction of any one of claims 1 to 5.
CN201911114195.1A 2019-11-14 2019-11-14 Point cloud target extraction method and device based on depth map convolutional network Pending CN110827302A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911114195.1A CN110827302A (en) 2019-11-14 2019-11-14 Point cloud target extraction method and device based on depth map convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911114195.1A CN110827302A (en) 2019-11-14 2019-11-14 Point cloud target extraction method and device based on depth map convolutional network

Publications (1)

Publication Number Publication Date
CN110827302A true CN110827302A (en) 2020-02-21

Family

ID=69555522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911114195.1A Pending CN110827302A (en) 2019-11-14 2019-11-14 Point cloud target extraction method and device based on depth map convolutional network

Country Status (1)

Country Link
CN (1) CN110827302A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539949A (en) * 2020-05-12 2020-08-14 河北工业大学 Point cloud data-based lithium battery pole piece surface defect detection method
CN112241676A (en) * 2020-07-07 2021-01-19 西北农林科技大学 Method for automatically identifying terrain sundries
CN112862719A (en) * 2021-02-23 2021-05-28 清华大学 Laser radar point cloud cell feature enhancement method based on graph convolution
CN113034596A (en) * 2021-03-26 2021-06-25 浙江大学 Three-dimensional object detection and tracking method
WO2022262219A1 (en) * 2021-06-18 2022-12-22 中国科学院深圳先进技术研究院 Method for constructing semantic perturbation reconstruction network of self-supervised point cloud learning
CN115641583A (en) * 2022-12-26 2023-01-24 苏州赫芯科技有限公司 Point cloud detection method, system and medium based on self-supervision and active learning
CN116977572A (en) * 2023-09-15 2023-10-31 南京信息工程大学 Building elevation structure extraction method for multi-scale dynamic graph convolution

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319957A (en) * 2018-02-09 2018-07-24 深圳市唯特视科技有限公司 A kind of large-scale point cloud semantic segmentation method based on overtrick figure
CN109389671A (en) * 2018-09-25 2019-02-26 南京大学 A kind of single image three-dimensional rebuilding method based on multistage neural network
US20190108639A1 (en) * 2017-10-09 2019-04-11 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Semantic Segmentation of 3D Point Clouds
CN109857895A (en) * 2019-01-25 2019-06-07 清华大学 Stereoscopic vision search method and system based on polycyclic road view convolutional neural networks
CN109887028A (en) * 2019-01-09 2019-06-14 天津大学 A kind of unmanned vehicle assisted location method based on cloud data registration
CN110059620A (en) * 2019-04-17 2019-07-26 安徽艾睿思智能科技有限公司 Bone Activity recognition method based on space-time attention
CN110097556A (en) * 2019-04-29 2019-08-06 东南大学 Large-scale point cloud semantic segmentation algorithm based on PointNet
CN110222771A (en) * 2019-06-10 2019-09-10 成都澳海川科技有限公司 A kind of classification recognition methods of zero samples pictures
CN110414577A (en) * 2019-07-16 2019-11-05 电子科技大学 A kind of laser radar point cloud multiple target Objects recognition method based on deep learning

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190108639A1 (en) * 2017-10-09 2019-04-11 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Semantic Segmentation of 3D Point Clouds
CN108319957A (en) * 2018-02-09 2018-07-24 深圳市唯特视科技有限公司 A kind of large-scale point cloud semantic segmentation method based on overtrick figure
CN109389671A (en) * 2018-09-25 2019-02-26 南京大学 A kind of single image three-dimensional rebuilding method based on multistage neural network
CN109887028A (en) * 2019-01-09 2019-06-14 天津大学 A kind of unmanned vehicle assisted location method based on cloud data registration
CN109857895A (en) * 2019-01-25 2019-06-07 清华大学 Stereoscopic vision search method and system based on polycyclic road view convolutional neural networks
CN110059620A (en) * 2019-04-17 2019-07-26 安徽艾睿思智能科技有限公司 Bone Activity recognition method based on space-time attention
CN110097556A (en) * 2019-04-29 2019-08-06 东南大学 Large-scale point cloud semantic segmentation algorithm based on PointNet
CN110222771A (en) * 2019-06-10 2019-09-10 成都澳海川科技有限公司 A kind of classification recognition methods of zero samples pictures
CN110414577A (en) * 2019-07-16 2019-11-05 电子科技大学 A kind of laser radar point cloud multiple target Objects recognition method based on deep learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DAVID I SHUMAN; SUNIL K. NARANG; PASCAL FROSSARD; ANTONIO ORTEGA: "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", 《IEEE》 *
LOIC LANDRIEU; MARTIN SIMONOVSKY: "Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs", 《IEEE》 *
ZHEN WANG;LIQIANG ZHANG;LIANG ZHANG;ROUJING LI;YIBO ZHENG;ZIDONG: "A Deep Neural Network With Spatial Pooling (DNNSP) for 3-D Point Cloud Classification", 《IEEE》 *
刘浪: "Detailed Explanation of Graph Convolutional Neural Networks (GCN), Including the Mathematical Foundations (Fourier, Laplacian)", Zhihu *
张爱武; 肖涛; 段乙好: "An Adaptive Feature Selection Method for Airborne LiDAR Point Cloud Classification", Laser & Optoelectronics Progress *
汪汉云: "Research on High-Resolution 3D Point Cloud Target Recognition Technology", China Doctoral Dissertations Full-text Database (Information Science and Technology) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539949A (en) * 2020-05-12 2020-08-14 河北工业大学 Point cloud data-based lithium battery pole piece surface defect detection method
CN111539949B (en) * 2020-05-12 2022-05-13 河北工业大学 Point cloud data-based lithium battery pole piece surface defect detection method
CN112241676A (en) * 2020-07-07 2021-01-19 西北农林科技大学 Method for automatically identifying terrain sundries
CN112862719A (en) * 2021-02-23 2021-05-28 清华大学 Laser radar point cloud cell feature enhancement method based on graph convolution
CN112862719B (en) * 2021-02-23 2022-02-22 清华大学 Laser radar point cloud cell feature enhancement method based on graph convolution
CN113034596A (en) * 2021-03-26 2021-06-25 浙江大学 Three-dimensional object detection and tracking method
CN113034596B (en) * 2021-03-26 2022-05-13 浙江大学 Three-dimensional object detection and tracking method
WO2022262219A1 (en) * 2021-06-18 2022-12-22 中国科学院深圳先进技术研究院 Method for constructing semantic perturbation reconstruction network of self-supervised point cloud learning
CN115641583A (en) * 2022-12-26 2023-01-24 苏州赫芯科技有限公司 Point cloud detection method, system and medium based on self-supervision and active learning
CN116977572A (en) * 2023-09-15 2023-10-31 南京信息工程大学 Building elevation structure extraction method for multi-scale dynamic graph convolution
CN116977572B (en) * 2023-09-15 2023-12-08 南京信息工程大学 Building elevation structure extraction method for multi-scale dynamic graph convolution

Similar Documents

Publication Publication Date Title
CN110827302A (en) Point cloud target extraction method and device based on depth map convolutional network
CN111882593B (en) Point cloud registration model and method combining attention mechanism and three-dimensional graph convolution network
CN112070769B (en) Layered point cloud segmentation method based on DBSCAN
Sun et al. Aerial 3D building detection and modeling from airborne LiDAR point clouds
CN109682381A (en) Big visual field scene perception method, system, medium and equipment based on omnidirectional vision
CA3037360A1 (en) Computer vision systems and methods for detecting and modeling features of structures in images
CN116543117B (en) High-precision large-scene three-dimensional modeling method for unmanned aerial vehicle images
CN106408581B (en) A kind of quick three-dimensional point cloud lines detection method
Jin et al. A point-based fully convolutional neural network for airborne LiDAR ground point filtering in forested environments
CN105447452A (en) Remote sensing sub-pixel mapping method based on spatial distribution characteristics of features
CN115359195A (en) Orthoimage generation method and device, storage medium and electronic equipment
CN116416366A (en) 3D model construction method and device and electronic equipment
CN110910435B (en) Building point cloud extraction method and device, computer equipment and readable storage medium
Ni et al. Applications of 3d-edge detection for als point cloud
Lei et al. Automatic identification of street trees with improved RandLA-Net and accurate calculation of shading area with density-based iterative α-shape
Yu et al. Saliency computation and simplification of point cloud data
CN116993750A (en) Laser radar SLAM method based on multi-mode structure semantic features
CN116681844A (en) Building white film construction method based on sub-meter stereopair satellite images
CN112862921B (en) Power grid distribution image drawing method
CN114187404A (en) Three-dimensional reconstruction method and system for high resolution of offshore area
Su et al. Automatic multi-source data fusion technique of powerline corridor using UAV Lidar
Rajabi et al. Optimization of DTM interpolation using SFS with single satellite imagery
Liu et al. Segmentation and reconstruction of buildings with aerial oblique photography point clouds
Song et al. 3D hough transform algorithm for ground surface extraction from LiDAR point clouds
Mahmood et al. Learning indoor layouts from simple point-clouds

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221

RJ01 Rejection of invention patent application after publication