CN116912404A - Laser radar point cloud mapping method for scanning distribution lines in dynamic environment - Google Patents

Laser radar point cloud mapping method for scanning distribution lines in dynamic environment

Info

Publication number
CN116912404A
CN116912404A (application CN202310818051.4A)
Authority
CN
China
Prior art keywords
curvature
voxel
point cloud
pose
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310818051.4A
Other languages
Chinese (zh)
Inventor
Qian Kun
Fang Yixin
Zhang Zan
Shi Tong
Xu Da
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202310818051.4A priority Critical patent/CN116912404A/en
Publication of CN116912404A publication Critical patent/CN116912404A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of laser SLAM mapping for power scenes and discloses a laser radar point cloud mapping method for scanning distribution lines in a dynamic environment. The method first extracts a segmented curvature voxel occupancy descriptor. A voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search then obtains the current pose of the current frame by optimizing the pose residual of associated voxel registration. Next, a laser-intensity-assisted curvature voxel clustering method segments the current frame point cloud, labels absolute static objects and potential dynamic objects, retains low dynamic objects and removes high dynamic objects. The pose of the current frame is then optimized a second time with the voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search. Finally, frame-by-frame pose transformation and splicing yield a global static point cloud map with high dynamic objects removed. The method is suitable for handheld scanning and modeling of scene point clouds containing distribution lines in dynamic environments.

Description

Laser radar point cloud mapping method for scanning distribution lines in dynamic environment
Technical Field
The invention relates to the field of laser SLAM mapping of power scenes, in particular to a laser radar point cloud mapping method for scanning distribution lines in a dynamic environment.
Background
Simultaneous localization and mapping (SLAM) enables a robot in an unknown environment to perceive its surroundings with onboard sensors, estimate the sensor pose during motion, localize itself against the map, and incrementally build the map from those pose estimates.
With the wide application of SLAM technology, handheld mobile 3D laser radar scanning is used for scanning and modeling power scenes such as urban distribution lines, with the advantages of flexible operation and low cost. Scanned point cloud models of urban distribution line environments are an important basis for line inspection and facility state discrimination. However, urban distribution lines contain not only power towers and wires but also large numbers of plants and buildings, and they often run beside roads carrying traffic. How to perform environmental scan modeling in a dynamic environment containing vehicles is therefore the key to improving mapping accuracy and generating a precise power scene point cloud model.
Aiming at the problem of point cloud instance segmentation, most algorithms increasingly use convolutional neural networks for point cloud segmentation and semantic annotation. Range-image-based point cloud instance segmentation adapts image instance segmentation methods directly; for example, Mask R-CNN (see "Segmenting unknown 3d objects from real depth images using Mask R-CNN trained on synthetic data", Danielczuk et al.) uses a multi-stage segmentation approach. Meanwhile, 3D networks such as PointNet (see "PointNet: Deep learning on point sets for 3D classification and segmentation", Qi et al.) were developed for three-dimensional point cloud representations. However, deep learning methods require costly training on accurate labels, and in power scenes lacking point cloud annotation the recognition of power equipment by pre-trained models generalizes poorly. In addition, power scene point cloud modeling emphasizes real-time construction on mobile scanning equipment, and conventional point cloud segmentation networks struggle to meet these diverse and real-time requirements.
Aiming at the problem of dynamic object removal, existing methods can remove most dynamic objects effectively by representing dynamic information explicitly. Range-image-based point-and-pixel cascade removal, such as Removert (see "Remove, then revert: Static point cloud map construction using multiresolution range images", Kim et al.), adopts multi-resolution pixel difference detection. Occupancy-grid-based ray tracing, such as ERASOR (see "ERASOR: Egocentric ratio of pseudo occupancy-based dynamic object removal for static 3D point cloud map building", Lim et al.), designs an egocentric pseudo-occupancy descriptor. However, existing methods ignore the object-level nature of dynamic information and are not robust to problems such as scan inconsistency, so large numbers of static points are removed by mistake, which makes it difficult to preserve fine power equipment components, especially wires and insulator strings.
At present, for the requirement of power scene point cloud scanning and modeling, no domestic or foreign patent provides a handheld 3D laser SLAM mapping algorithm that simultaneously integrates point cloud instance segmentation and dynamic object removal.
Disclosure of Invention
The invention aims to address the above problems by providing a laser radar point cloud mapping method for scanning distribution lines in a dynamic environment.
The technical scheme is as follows: the invention provides a laser radar point cloud mapping method for scanning distribution lines in a dynamic environment, which comprises the following steps:
step 1, a handheld multi-line laser radar is used to scan a street-side distribution line, the original laser point cloud of the current scan is taken as the current frame, and a segmented curvature voxel occupancy descriptor is constructed for the current frame;
step 2, a voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search obtains the current pose of the current frame by optimizing the pose residual of associated voxel registration;
step 3, a laser-intensity-assisted curvature voxel clustering method segments the current frame point cloud, then a geometric-feature-based object classification method identifies the segmented objects and labels absolute static objects;
step 4, the segmented curvature voxel occupancy descriptors of adjacent frames are used to align their curvature voxels and track potential dynamic objects; detection based on curvature voxel occupancy change then retains low dynamic objects and removes high dynamic objects;
step 5, the voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search is applied again to perform a secondary optimization of the pose residual of static curvature voxel registration in the reloaded curvature voxel occupancy descriptor, yielding the optimized pose of the current frame;
and step 6, according to the optimized poses, frame-by-frame pose transformation and splicing produce a global static point cloud map with high dynamic objects removed, and the power lines and towers in the point cloud are extracted using their spatial distribution geometry.
Further, the segmented curvature voxel occupancy descriptor constructed in step 1 encodes object semantics and positions in the environment into segmented curvature voxels, as follows:
the preprocessed point cloud is encoded into a set of curvature voxels at fixed resolution along three projection directions, and curvature voxels containing at least one point are stored in a hash table structure; the spatial expression of the segmented curvature voxel occupancy descriptor is given by formula (1), where SCV_{ijk,t} is the curvature voxel indexed (i,j,k) in the segmented curvature voxel occupancy descriptor at the current timestamp t, and N_ρ, N_σ and the third-direction bound are the maximum index values in the three projection directions.
Further, the voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search in step 2 is adapted to the spatial structure of the segmented curvature voxel occupancy descriptor, and a Kd-tree search over the curvature voxel center point cloud yields the associated curvature voxel pairs. The specific steps are as follows: the local point clouds in an associated curvature voxel pair {a, b} are assumed to follow Gaussian distributions a_m ~ N(μ_{a_m}, C_{a_m}) and b_n ~ N(μ_{b_n}, C_{b_n}), where a_m denotes the m-th curvature voxel with position mean μ_{a_m} and covariance matrix C_{a_m}, and b_n denotes the n-th curvature voxel with position mean μ_{b_n} and covariance matrix C_{b_n}. The corresponding pose transformation error is then
e_{mn}(T_{t+1,t}) = μ_{b_n} - T_{t+1,t} μ_{a_m} (2)
where T_{t+1,t} is the pose between timestamps t and t+1 and e_{mn} is the pose transformation error. T_{t+1,t} can then be estimated by maximum likelihood as
T_{t+1,t} = argmin_T Σ_{(m,n)} N_m e_{mn}(T)^T (C_{b_n} + T C_{a_m} T^T)^{-1} e_{mn}(T) (3)
where N_m is the number of points in a_m and the superscript T denotes the matrix transpose.
Further, the curvature voxel clustering method based on laser intensity assistance in step 3 specifically comprises the following sub-steps:
substep 3-1, a laser intensity map is generated by computing the intensity mean and variance within each curvature voxel and is projected onto the segmented curvature voxel occupancy descriptor, a process expressed by formula (4), where the generated laser intensity map collects av_{ijk} and var_{ijk}, the intensity mean and variance within each curvature voxel;
substep 3-2, based on the generated laser intensity map, object point clouds are segmented with a laser-intensity-based constrain-and-lift strategy, where the constraint function E_r(·) constrains the neighbor voxel clustering process as expressed in formula (5); there, r is the neighbor search radius, h_av and h_var are the search thresholds for the intensity mean and variance, (u,v,w) is a curvature voxel index, SCV_{uvw,t} is a neighborhood curvature voxel of SCV_{ijk,t}, av_{(i,j,k)} is the intensity mean of the (i,j,k)-th curvature voxel, and av_{(u,v,w)} and var_{(u,v,w)} are the intensity mean and variance of the (u,v,w)-th curvature voxel;
substep 3-3, the lifting function G_{τ,r}(·) fuses neighbor segmented objects as expressed in formula (6), where τ is the iteration number and the neighbor segmented objects are fused into the object with index k;
substep 3-4, a geometric-feature-based object classification method computes a 7-dimensional feature vector for each object, comprising linearity, planarity, divergence, principal direction, maximum height, minimum height and distribution scale; the three eigenvalues of the covariance matrix of each segmented point cloud object are first computed and sorted from largest to smallest as σ_1, σ_2 and σ_3, from which linearity, planarity and divergence are calculated as in formula (7), where f_l, f_p and f_s denote linearity, planarity and divergence respectively;
finally, objects are classified as absolute static objects, including ground, buildings and trees, and potential dynamic objects, including vehicles and pedestrians.
Further, in step 4, the segmented curvature voxel occupancy descriptors of adjacent frames are used to align their curvature voxels and track potential dynamic objects, with registration of curvature voxel vertices realizing the mapping and searching of objects across segmented curvature voxel occupancy descriptors, specifically comprising the following sub-steps:
substep 4-1, three vertices of each curvature voxel are extracted, comprising the nearest vertex p_{near,t}, the central vertex p_{central,t} and the farthest vertex p_{far,t}, and the coordinate transformation of formula (8) is then performed, where SCV_{t+1,t} is the expression after curvature voxel registration;
substep 4-2, after the segmented curvature voxel occupancy descriptor registration is realized, the response object of a tracked object is obtained through the segmentation labels of adjacent curvature voxels as in formula (9), where id is the object index, PD denotes a potential dynamic object, the object indicated by id in frame t+1 is matched to its curvature-voxel-registered counterpart in the response object set, v(·) is the process of extracting the set of curvature voxels occupied by an object, and a label-extraction operator returns the curvature voxel labels;
substep 4-3, after curvature voxel alignment, the tracked object yields its corresponding response object, and the object overlap ratio r_{id,t} of the tracked object is computed based on curvature voxel occupancy change detection as in formula (10);
a threshold h_r on the obtained object overlap ratio then determines the motion attribute of the tracked object under the basic assumptions:
a) r_{id,t} < h_r: the tracked object is a low dynamic object and should be preserved;
b) r_{id,t} > h_r: the tracked object is a high dynamic object and should be removed.
Further, in step 5, the pose residual of static curvature voxel registration in the reloaded curvature voxel occupancy descriptor is calculated as in formula (11), where the pose error takes the dynamic weights into account, the reloaded pose residual is formed over static voxels, T is the intermediate variable of the optimized pose, and the optimized pose minimizes this residual.
Further, in step 6, the static instance map is obtained from the point cloud poses by splicing: the absolute static objects and low dynamic objects retained in each point cloud frame are transformed with the optimized poses and registered in the global world coordinate system, and the power line and tower point clouds are then further separated by suspension analysis according to their spatial distribution characteristics.
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
1. The proposed segmented curvature voxel descriptor is applicable to multiple classes of 3D laser sensors, so the laser scanning mapping algorithm can be adapted to a variety of mobile lidar devices. The mapping algorithm overcomes the interference of dynamic objects in the scene and removes them during mapping, avoiding the tedious manual post-hoc deletion of dynamic points from the point cloud.
2. Aiming at the costly annotation and training required by deep learning methods for point cloud instance segmentation, the proposed strategy of combining laser-intensity-assisted voxel clustering with geometric-feature-based object classification is a non-deep-learning method that does not depend on a GPU at runtime and segments street-side distribution line scenes well.
3. Aiming at the erroneous removal of static points caused by improper representation of dynamic information in traditional dynamic object removal, the proposed high dynamic object removal method based on curvature voxel occupancy change detection is more robust to motion blur and scan inconsistency, removing most dynamic objects while preserving as many static objects as possible.
Drawings
FIG. 1 is a basic flow chart of the present invention;
FIG. 2 is a flow chart of a voxel clustering method based on laser intensity assistance according to the present invention;
FIG. 3 is a flow chart of a method for removing a high dynamic object based on curvature voxel occupancy change detection according to the present invention;
fig. 4 is a laser scanning modeling effect diagram of a power scene; in fig. 4, (a) is the originally built global map, (b) is the semantically segmented global map, (c) shows the effect of removing vehicles from the point cloud, and (d) shows the effect of extracting power lines and towers.
Detailed description of the preferred embodiments
The invention will be further elucidated with reference to the drawings and the detailed description. It should be understood that the following specific embodiments are only illustrative of the invention and are not intended to limit its scope.
Step 1: a handheld multi-line laser radar is used to scan a street-side distribution line, which comprises power lines and towers along with ground features such as ground, vegetation, buildings, vehicles and pedestrians. The original laser point cloud of the current scan is acquired as the current frame, and a segmented curvature voxel occupancy descriptor is constructed for the current frame.
Step 2: a voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search obtains the current pose of the current frame by optimizing the pose residual of associated voxel registration.
Step 3: the current frame point cloud is segmented into objects with a laser-intensity-assisted curvature voxel clustering method. The segmented objects are then identified with a geometric-feature-based object classification method, labeling absolute static objects such as ground, vegetation and buildings, and potential dynamic objects such as vehicles and pedestrians.
Step 4: the segmented curvature voxel occupancy descriptors of adjacent frames are used to align their curvature voxels and track potential dynamic objects. Then, according to curvature voxel occupancy change detection, low dynamic objects such as parked vehicles and standing pedestrians are retained, while high dynamic objects such as moving vehicles and walking pedestrians are removed.
Step 5: the voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search performs a secondary optimization of the pose residual of static curvature voxel registration in the reloaded curvature voxel occupancy descriptor, yielding the optimized pose of the current frame.
Step 6: according to the optimized poses, frame-by-frame pose transformation and splicing produce the global static point cloud map with high dynamic objects removed. The power lines and towers in the point cloud are then extracted using their spatial distribution geometry.
Specifically, in the process of constructing the segmented curvature voxel occupancy descriptor in step 1, the preprocessed point cloud is encoded into a set of curvature voxels at fixed resolution along three projection directions, and curvature voxels containing at least one point are stored in a hash table structure. The spatial expression of the segmented curvature voxel occupancy descriptor is given by formula (1), where SCV_{ijk,t} is the curvature voxel indexed (i,j,k) in the segmented curvature voxel occupancy descriptor at the current timestamp t, and N_ρ, N_σ and the third-direction bound are the maximum index values in the three projection directions.
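As a purely illustrative aid, the following Python sketch shows one way the hash table storage described above can be realized: points are bucketed into curvature voxels keyed by their (i, j, k) index, so only voxels containing at least one point are stored. The cylindrical projection axes and the resolution values are assumptions made for the example, not parameters fixed by the invention.

import numpy as np

def build_scv(points, res=(0.5, 0.02, 0.5)):
    """Hash points into curvature voxels keyed by (i, j, k).

    points: (N, 3) array in the sensor frame.
    res:    assumed per-axis resolutions for the three projection directions.
    """
    # assumed cylindrical projection: range rho, azimuth sigma, height z
    rho = np.linalg.norm(points[:, :2], axis=1)
    sigma = np.arctan2(points[:, 1], points[:, 0])
    idx = np.stack([rho / res[0], (sigma + np.pi) / res[1], points[:, 2] / res[2]], axis=1)
    scv = {}  # hash table: only occupied voxels are kept
    for p, key in zip(points, idx.astype(int)):
        scv.setdefault(tuple(key), []).append(p)
    return {k: np.asarray(v) for k, v in scv.items()}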
Specifically, the voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search in step 2 is adapted to the spatial structure of the segmented curvature voxel occupancy descriptor, and a Kd-tree search over the curvature voxel center point cloud yields the associated curvature voxel pairs. The local point clouds in an associated curvature voxel pair {a, b} are assumed to follow Gaussian distributions a_m ~ N(μ_{a_m}, C_{a_m}) and b_n ~ N(μ_{b_n}, C_{b_n}), where a_m denotes the m-th curvature voxel with position mean μ_{a_m} and covariance matrix C_{a_m}, and b_n denotes the n-th curvature voxel with position mean μ_{b_n} and covariance matrix C_{b_n}. The corresponding pose transformation error is then
e_{mn}(T_{t+1,t}) = μ_{b_n} - T_{t+1,t} μ_{a_m} (2)
where T_{t+1,t} is the pose between timestamps t and t+1 and e_{mn} is the pose transformation error. T_{t+1,t} can then be estimated by maximum likelihood as
T_{t+1,t} = argmin_T Σ_{(m,n)} N_m e_{mn}(T)^T (C_{b_n} + T C_{a_m} T^T)^{-1} e_{mn}(T) (3)
where N_m is the number of points in the associated pair and the superscript T denotes the matrix transpose.
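To make the registration objective concrete, the sketch below evaluates a distribution-to-distribution cost of the form implied by formulas (2) and (3): each associated curvature voxel pair contributes a Mahalanobis term weighted by its point count N_m. The data association step and the nonlinear solver that would minimize this cost are omitted; this is a hedged illustration, not the patent's reference implementation.

import numpy as np

def vgicp_cost(T, pairs):
    """pairs: list of (mu_a, C_a, mu_b, C_b, N_m) for associated curvature voxels."""
    R, t = T[:3, :3], T[:3, 3]
    cost = 0.0
    for mu_a, C_a, mu_b, C_b, N_m in pairs:
        e = mu_b - (R @ mu_a + t)                 # pose transformation error, formula (2)
        S = C_b + R @ C_a @ R.T                   # fused covariance of the voxel pair
        cost += N_m * e @ np.linalg.solve(S, e)   # weighted Mahalanobis term, formula (3)
    return cost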
Specifically, in the laser-intensity-assisted curvature voxel clustering method of step 3, a laser intensity map is first generated by computing the intensity mean and variance within each curvature voxel and projected onto the segmented curvature voxel occupancy descriptor, a process expressed by formula (4); there, the generated laser intensity map collects av_{ijk} and var_{ijk}, the intensity mean and variance within each curvature voxel. Then, based on the generated intensity map, object point clouds are segmented with a laser-intensity-based constrain-and-lift strategy. The constraint function E_r(·) constrains the neighbor voxel clustering process as expressed in formula (5), where r is the neighbor search radius, h_av and h_var are the search thresholds for the intensity mean and variance, (u,v,w) is a curvature voxel index, SCV_{uvw,t} is a neighborhood curvature voxel of SCV_{ijk,t}, av_{(i,j,k)} is the intensity mean of the (i,j,k)-th curvature voxel, and av_{(u,v,w)} and var_{(u,v,w)} are the intensity mean and variance of the (u,v,w)-th curvature voxel. Then, the lifting function G_{τ,r}(·) fuses neighbor segmented objects as expressed in formula (6), where SCV_{uvw,t} is a neighborhood curvature voxel of SCV_{ijk,t} and τ is the iteration number.
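The constraint function of formula (5) can be pictured with the short sketch below: two curvature voxels within the search radius are allowed to cluster only if their intensity means agree within h_av and the candidate voxel's intensity variance stays below h_var. The exact comparison and the threshold values are assumptions made for illustration, since formula (5) itself is not reproduced here.

def intensity_constraint(scv_stats, ijk, uvw, r=1, h_av=5.0, h_var=10.0):
    """scv_stats maps a voxel index (i, j, k) to its (mean, variance) intensity stats."""
    if max(abs(a - b) for a, b in zip(ijk, uvw)) > r:   # outside the neighbor search radius
        return False
    av_i, _ = scv_stats[ijk]
    av_u, var_u = scv_stats[uvw]
    return abs(av_i - av_u) < h_av and var_u < h_var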
The geometric-feature-based object classification method then computes a 7-dimensional feature vector for each object, comprising linearity, planarity, divergence, principal direction, maximum height, minimum height and distribution scale. First, the three eigenvalues of the covariance matrix of each segmented point cloud object are computed and sorted from largest to smallest as σ_1, σ_2 and σ_3. Linearity, planarity and divergence are then calculated as in formula (7), where f_l, f_p and f_s denote linearity, planarity and divergence respectively. Finally, objects are classified as absolute static objects, including ground, buildings and trees, and potential dynamic objects, including vehicles and pedestrians.
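The eigenvalue features can be illustrated with the standard covariance-eigenvalue definitions sketched below; since formula (7) is not reproduced here, the exact normalization is an assumption, but the qualitative behaviour (wires score high on linearity, facades on planarity, vegetation on divergence) is what the classification relies on.

import numpy as np

def shape_features(obj_points):
    """Assumed eigenvalue-based shape features for a segmented object (N x 3 array)."""
    cov = np.cov(obj_points.T)
    s1, s2, s3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # sigma_1 >= sigma_2 >= sigma_3
    f_l = (s1 - s2) / s1   # linearity: dominant for power lines
    f_p = (s2 - s3) / s1   # planarity: dominant for building facades
    f_s = s3 / s1          # divergence/scattering: dominant for vegetation
    return f_l, f_p, f_s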
The specific flow of step 3 is shown in fig. 2.
Specifically, the strategy of tracking potential dynamic objects based on curvature voxel alignment in step 4 extracts three vertices of each curvature voxel, comprising the nearest vertex p_{near,t}, the central vertex p_{central,t} and the farthest vertex p_{far,t}, and then performs the coordinate transformation of formula (8), where SCV_{t+1,t} is the expression after curvature voxel registration. After the segmented curvature voxel occupancy descriptor registration is realized, the response object of a tracked object can be obtained through the segmentation labels of adjacent curvature voxels, as in formula (9): there, id is the object index, PD denotes a potential dynamic object, the object indicated by id in frame t+1 is matched to its curvature-voxel-registered counterpart in the response object set, v(·) is the process of extracting the set of curvature voxels occupied by an object, and a label-extraction operator returns the curvature voxel labels.
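A minimal sketch of this vertex registration is given below: the three representative vertices of each frame-t voxel are carried into frame t+1 with the inter-frame pose and re-hashed into voxel indices, so that the segmentation labels of adjacent curvature voxels can be looked up. The hash function is passed in as a parameter because its exact form (compare the descriptor sketch above) is an assumption of the example.

import numpy as np

def register_vertices(scv_vertices_t, T, hash_fn):
    """scv_vertices_t: {(i,j,k): (p_near, p_central, p_far)}; T: 4x4 pose from t to t+1."""
    R, trans = T[:3, :3], T[:3, 3]
    registered = {}
    for src_key, verts in scv_vertices_t.items():
        for p in verts:
            p_new = R @ p + trans                        # coordinate transformation, formula (8)
            registered.setdefault(hash_fn(p_new), set()).add(src_key)
    return registered  # frame-(t+1) voxel index -> source voxels from frame t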
Then, the high dynamic object removal method based on curvature voxel occupancy change detection is applied: after curvature voxel alignment, the tracked object yields its corresponding response object, and the object overlap ratio r_{id,t} of the tracked object is computed according to curvature voxel occupancy change detection as in formula (10), where v(·) is the process of extracting the set of curvature voxels occupied by the object. A threshold h_r on the obtained object overlap ratio then determines the motion attribute of the tracked object under the basic assumptions:
a) r_{id,t} < h_r: the tracked object is a low dynamic object and should be preserved;
b) r_{id,t} > h_r: the tracked object is a high dynamic object and should be removed.
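The decision rule a)/b) can be summarized by the sketch below, which compares the voxel sets an object occupies before and after registration; the precise definition of r_{id,t} in formula (10) is not reproduced here, so treating it as the fraction of occupancy that changed is an assumption, chosen to be consistent with rules a) and b).

def is_high_dynamic(vox_registered, vox_current, h_r=0.5):
    """vox_*: sets of curvature voxel indices occupied by the tracked object."""
    if not vox_registered:
        return True
    overlap = len(vox_registered & vox_current) / len(vox_registered)
    r_id = 1.0 - overlap     # assumed occupancy change: low overlap -> high dynamics
    return r_id > h_r        # rule b): remove high dynamic objects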
the specific flow of the step 4 is shown in fig. 3.
Specifically, the secondary pose optimization method based on static curvature voxel reloading in step 5 ignores the curvature voxels occupied by high dynamic objects, considers only static curvature voxels, and recalculates the pose residual of curvature voxel registration as in formula (11), where the pose error takes the dynamic weights into account, the reloaded pose residual is formed over static voxels, T is the intermediate variable of the optimized pose, and the optimized pose minimizes this residual.
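Conceptually, the secondary optimization can be sketched by masking out dynamic voxel pairs and re-evaluating the registration cost, here reusing the vgicp_cost sketch from step 2; the actual dynamic-weighting scheme of formula (11) is not reproduced, so this uniform masking is an assumption.

def reloaded_cost(T, pairs, is_dynamic):
    """is_dynamic: per-pair flags marking voxels occupied by high dynamic objects."""
    static_pairs = [p for p, dyn in zip(pairs, is_dynamic) if not dyn]
    return vgicp_cost(T, static_pairs)  # re-optimize T over static curvature voxels only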
Specifically, in the method of step 6 for obtaining the static instance map from the point cloud poses and splicing, the absolute static objects and low dynamic objects retained in each point cloud frame are registered in the global world coordinate system through the optimized pose transformation. Considering the spatial distribution characteristics of power lines and towers, namely that tower point clouds have elevation continuity along the z-axis while power line point clouds are suspended, the suspended power lines and the tower point clouds are coarsely separated based on this suspension characteristic.
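A minimal sketch of the splicing step, under the assumption that each frame's retained points and its optimized 4x4 world pose are available, is given below; the suspension analysis that separates power lines from towers is deliberately left out, since the patent describes it only qualitatively.

import numpy as np

def splice_map(frames):
    """frames: iterable of (points_Nx3, T_world_4x4) with high dynamic objects removed."""
    world = []
    for pts, T in frames:
        world.append(pts @ T[:3, :3].T + T[:3, 3])  # frame-by-frame pose transformation
    return np.vstack(world)                         # global static point cloud map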
The originally built global map is shown in fig. 4(a), the semantically segmented global map in fig. 4(b), the effect of removing vehicles from the point cloud in fig. 4(c), and the effect of extracting power lines and towers in fig. 4(d).
The above description is only a preferred embodiment of the present invention and is not intended to limit it in any other way; any modification or equivalent variation according to the technical spirit of the present invention still falls within the scope of the invention as claimed.

Claims (7)

1. A laser radar point cloud mapping method for scanning distribution lines in a dynamic environment, characterized by comprising the following steps:
step 1, a handheld multi-line laser radar is used to scan a street-side distribution line, the original laser point cloud of the current scan is taken as the current frame, and a segmented curvature voxel occupancy descriptor is constructed for the current frame;
step 2, a voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search obtains the current pose of the current frame by optimizing the pose residual of associated voxel registration;
step 3, a laser-intensity-assisted curvature voxel clustering method segments the current frame point cloud, then a geometric-feature-based object classification method identifies the segmented objects and labels absolute static objects;
step 4, the segmented curvature voxel occupancy descriptors of adjacent frames are used to align their curvature voxels and track potential dynamic objects; detection based on curvature voxel occupancy change then retains low dynamic objects and removes high dynamic objects;
step 5, the voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search is applied again to perform a secondary optimization of the pose residual of static curvature voxel registration in the reloaded curvature voxel occupancy descriptor, yielding the optimized pose of the current frame;
and step 6, according to the optimized poses, frame-by-frame pose transformation and splicing produce a global static point cloud map with high dynamic objects removed, and the power lines and towers in the point cloud are extracted using their spatial distribution geometry.
2. The laser radar point cloud mapping method for scanning distribution lines in a dynamic environment according to claim 1, characterized in that the segmented curvature voxel occupancy descriptor constructed in step 1 encodes object semantics and positions in the environment into segmented curvature voxels, as follows:
the preprocessed point cloud is encoded into a set of curvature voxels at fixed resolution along three projection directions, and curvature voxels containing at least one point are stored in a hash table structure; the spatial expression of the segmented curvature voxel occupancy descriptor is given by formula (1), where SCV_{ijk,t} is the curvature voxel indexed (i,j,k) in the segmented curvature voxel occupancy descriptor at the current timestamp t, and N_ρ, N_σ and the third-direction bound are the maximum index values in the three projection directions.
3. The laser radar point cloud mapping method for scanning distribution lines in a dynamic environment according to claim 1, characterized in that the voxelized generalized iterative closest point algorithm based on curvature voxel neighbor search in step 2 is adapted to the spatial structure of the segmented curvature voxel occupancy descriptor, and a Kd-tree search over the curvature voxel center point cloud yields the associated curvature voxel pairs, as follows: the local point clouds in an associated curvature voxel pair {a, b} are assumed to follow Gaussian distributions a_m ~ N(μ_{a_m}, C_{a_m}) and b_n ~ N(μ_{b_n}, C_{b_n}), where a_m denotes the m-th curvature voxel with position mean μ_{a_m} and covariance matrix C_{a_m}, and b_n denotes the n-th curvature voxel with position mean μ_{b_n} and covariance matrix C_{b_n}; the corresponding pose transformation error is then
e_{mn}(T_{t+1,t}) = μ_{b_n} - T_{t+1,t} μ_{a_m} (2)
where T_{t+1,t} is the pose between timestamps t and t+1 and e_{mn} is the pose transformation error; T_{t+1,t} can then be estimated by maximum likelihood as
T_{t+1,t} = argmin_T Σ_{(m,n)} N_m e_{mn}(T)^T (C_{b_n} + T C_{a_m} T^T)^{-1} e_{mn}(T) (3)
where N_m is the number of points in a_m and the superscript T denotes the matrix transpose.
4. The laser radar point cloud mapping method for scanning distribution lines in a dynamic environment according to claim 1, characterized in that the laser-intensity-assisted curvature voxel clustering method of step 3 specifically comprises the following sub-steps:
substep 3-1, a laser intensity map is generated by computing the intensity mean and variance within each curvature voxel and is projected onto the segmented curvature voxel occupancy descriptor, a process expressed by formula (4), where the generated laser intensity map collects av_{ijk} and var_{ijk}, the intensity mean and variance within each curvature voxel;
substep 3-2, based on the generated laser intensity map, object point clouds are segmented with a laser-intensity-based constrain-and-lift strategy, where the constraint function E_r(·) constrains the neighbor voxel clustering process as expressed in formula (5); there, r is the neighbor search radius, h_av and h_var are the search thresholds for the intensity mean and variance, (u,v,w) is a curvature voxel index, SCV_{uvw,t} is a neighborhood curvature voxel of SCV_{ijk,t}, av_{(i,j,k)} is the intensity mean of the (i,j,k)-th curvature voxel, and av_{(u,v,w)} and var_{(u,v,w)} are the intensity mean and variance of the (u,v,w)-th curvature voxel;
substep 3-3, the lifting function G_{τ,r}(·) fuses neighbor segmented objects as expressed in formula (6), where τ is the iteration number and the neighbor segmented objects are fused into the object with index k;
substep 3-4, a geometric-feature-based object classification method computes a 7-dimensional feature vector for each object, comprising linearity, planarity, divergence, principal direction, maximum height, minimum height and distribution scale; the three eigenvalues of the covariance matrix of each segmented point cloud object are first computed and sorted from largest to smallest as σ_1, σ_2 and σ_3, from which linearity, planarity and divergence are calculated as in formula (7), where f_l, f_p and f_s denote linearity, planarity and divergence respectively;
finally, objects are classified as absolute static objects, including ground, buildings and trees, and potential dynamic objects, including vehicles and pedestrians.
5. The laser radar point cloud mapping method for scanning distribution lines in a dynamic environment according to claim 1, characterized in that in step 4 the segmented curvature voxel occupancy descriptors of adjacent frames are used to align their curvature voxels and track potential dynamic objects, with registration of curvature voxel vertices realizing the mapping and searching of objects across segmented curvature voxel occupancy descriptors, specifically comprising the following sub-steps:
substep 4-1, three vertices of each curvature voxel are extracted, comprising the nearest vertex p_{near,t}, the central vertex p_{central,t} and the farthest vertex p_{far,t}, and the coordinate transformation of formula (8) is then performed, where SCV_{t+1,t} is the expression after curvature voxel registration;
substep 4-2, after the segmented curvature voxel occupancy descriptor registration is realized, the response object of a tracked object is obtained through the segmentation labels of adjacent curvature voxels as in formula (9), where id is the object index, PD denotes a potential dynamic object, the object indicated by id in frame t+1 is matched to its curvature-voxel-registered counterpart in the response object set, v(·) is the process of extracting the set of curvature voxels occupied by an object, and a label-extraction operator returns the curvature voxel labels;
substep 4-3, after curvature voxel alignment, the tracked object yields its corresponding response object, and the object overlap ratio r_{id,t} of the tracked object is computed based on curvature voxel occupancy change detection as in formula (10);
a threshold h_r on the obtained object overlap ratio then determines the motion attribute of the tracked object under the basic assumptions:
a) r_{id,t} < h_r: the tracked object is a low dynamic object and should be preserved;
b) r_{id,t} > h_r: the tracked object is a high dynamic object and should be removed.
6. The laser radar point cloud mapping method for scanning distribution lines in a dynamic environment according to claim 1, characterized in that in step 5 the pose residual of static curvature voxel registration in the reloaded curvature voxel occupancy descriptor is calculated as in formula (11), where the pose error takes the dynamic weights into account, the reloaded pose residual is formed over static voxels, T is the intermediate variable of the optimized pose, and the optimized pose minimizes this residual.
7. The laser radar point cloud mapping method for scanning distribution lines in a dynamic environment according to claim 1, characterized in that in step 6 the static instance map is obtained from the point cloud poses by splicing: the absolute static objects and low dynamic objects retained in each point cloud frame are transformed with the optimized poses into the global world coordinate system, and the power line and tower point clouds are then further separated by suspension analysis according to their spatial distribution characteristics.
CN202310818051.4A 2023-07-05 2023-07-05 Laser radar point cloud mapping method for scanning distribution lines in dynamic environment Pending CN116912404A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310818051.4A CN116912404A (en) 2023-07-05 2023-07-05 Laser radar point cloud mapping method for scanning distribution lines in dynamic environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310818051.4A CN116912404A (en) 2023-07-05 2023-07-05 Laser radar point cloud mapping method for scanning distribution lines in dynamic environment

Publications (1)

Publication Number Publication Date
CN116912404A true CN116912404A (en) 2023-10-20

Family

ID=88359474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310818051.4A Pending CN116912404A (en) 2023-07-05 2023-07-05 Laser radar point cloud mapping method for scanning distribution lines in dynamic environment

Country Status (1)

Country Link
CN (1) CN116912404A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117872398A (en) * 2024-03-13 2024-04-12 中国科学技术大学 Large-scale scene real-time three-dimensional laser radar intensive mapping method
CN117872398B (en) * 2024-03-13 2024-05-17 中国科学技术大学 Large-scale scene real-time three-dimensional laser radar intensive mapping method
CN118228602A (en) * 2024-04-11 2024-06-21 中国科学技术大学 Thunder and lightning prediction method based on variable resolution SCVT grid and machine learning
CN118228602B (en) * 2024-04-11 2024-09-03 中国科学技术大学 Thunder and lightning prediction method based on variable resolution SCVT grid and machine learning


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
CB03 Change of inventor or designer information

Inventor after: Qian Kun

Inventor after: Fang Yixin

Inventor after: Zhang Bin

Inventor after: Shi Tong

Inventor after: Xu Da

Inventor before: Qian Kun

Inventor before: Fang Yixin

Inventor before: Zhang Zan

Inventor before: Shi Tong

Inventor before: Xu Da

CB03 Change of inventor or designer information
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination