CN118067102A - Laser radar static point cloud map construction method and system based on viewpoint visibility - Google Patents
Laser radar static point cloud map construction method and system based on viewpoint visibility
- Publication number
- CN118067102A (application CN202410030021.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- map
- laser radar
- local
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/15—Correlation function computation including computation of convolution operations
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention provides a laser radar static point cloud map construction method and system based on viewpoint visibility. The method comprises the following steps: collecting laser radar point cloud data and IMU information, and preprocessing them; performing point cloud registration on multi-frame point cloud scans near the currently scanned laser radar point cloud to generate a local point cloud sub-map; removing the ground point cloud from the currently scanned laser radar point cloud and the local point cloud sub-map to obtain the non-ground point cloud; performing spherical projection on the non-ground point cloud to generate a corresponding range image; differencing the range images, and screening the dynamic points in each local point cloud sub-map; and stitching the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate a static map of the scene. The method remedies the shortcomings of visibility-based approaches and has better applicability in highly dynamic traffic scenes.
Description
Technical Field
The invention relates to the technical field of simultaneous localization and mapping based on a laser radar, in particular to a laser radar static point cloud map construction method based on viewpoint visibility.
Background
Simultaneous localization and mapping (SLAM) is one of the key technologies in the fields of autonomous driving and intelligent transportation, and the accuracy, real-time performance, and robustness of mapping algorithms in dynamic traffic scenes still need further optimization. High-precision map construction relies on sensors including lidar, industrial cameras, GNSS, and IMU; among them, lidar provides massive point cloud data for point cloud map construction. Existing lidar-based SLAM algorithms rest on a static-environment assumption, yet the traffic environments in which autonomous vehicles operate contain large numbers of moving cars, pedestrians, and cyclists, and during mapping these dynamic objects degrade both the accuracy of the mapping algorithm and the quality of the constructed map.
Firstly, dynamic point clouds impair the registration of the lidar odometry, and when there are too many dynamic points in a scene the point cloud registration may even diverge. Secondly, dynamic objects leave ghost trails in the final point cloud map, changing its spatial structure; these trails act as spurious obstacles in the map, reducing map accuracy and localization accuracy and lowering the efficiency of downstream autonomous-driving tasks. Moreover, as autonomous-driving technologies have developed rapidly in recent years, highly dynamic scenes have become the main application scenario of autonomous vehicles. Accurately constructing a map of the static structures in a scene in real time is therefore of great significance for real-time localization and downstream path-planning tasks in high-level autonomous driving.
Disclosure of Invention
In view of the above, the present invention provides a laser radar static point cloud map construction method based on viewpoint visibility to solve the above problems.
According to a first aspect of the present invention, there is provided a laser radar static point cloud map construction method based on viewpoint visibility, including: collecting laser radar point cloud data and IMU information, and preprocessing them; performing point cloud registration on multi-frame point cloud scans near the currently scanned laser radar point cloud to generate a local point cloud sub-map; removing the ground point cloud from the currently scanned laser radar point cloud and the local point cloud sub-map to obtain the non-ground point cloud; performing spherical projection on the non-ground point cloud to generate a corresponding range image; differencing the range images, and screening the dynamic points in each local point cloud sub-map; and stitching the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate a static map of the scene.
In another implementation of the invention, the preprocessing includes data parsing, IMU pre-integration, radar motion compensation, point cloud segmentation, and feature extraction; the preprocessed data includes valid lidar point cloud data containing available features and the initial ego-pose output by the IMU.
In another implementation of the present invention, performing point cloud registration on the multi-frame point cloud scans near the currently scanned laser radar point cloud to generate the local point cloud sub-map includes: selecting multi-frame point cloud scans near the currently scanned laser radar point cloud; performing point cloud registration by minimizing the distances between line features and planar features in different point cloud scans; and generating the local point cloud sub-map from the registered point cloud scans.
In another implementation of the present invention, spherical projection is performed on the non-ground point cloud to generate a corresponding range image, and the pixel value of each point is calculated by

$r^{k}(p) = \lVert p \rVert_{2}$

where $r^{k}(p)$ represents the distance from point $p$ to the origin of the local coordinate system of the $k$-th keyframe.
In another implementation of the present invention, differencing the range images and screening the dynamic points in each local point cloud sub-map includes: subtracting, element-wise, the pixel values of the range images of the local point cloud sub-map and the current scan:

$\Delta r(p) = r^{\text{map}}(p) - r^{\text{scan}}(p)$

The difference $\Delta r(p)$ is compared with a threshold $\tau$; if it is larger than $\tau$, the corresponding point is a dynamic point, with the threshold defined as

$\tau = \gamma\,\mathrm{dist}(p)$

where $\gamma$ is the sensitivity to the point distance and $\mathrm{dist}(\cdot)$ takes the distance value of the corresponding point.
In another implementation of the present invention, stitching the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate the static map of the scene includes: fusing the ground point cloud with the non-ground point cloud from which the dynamic points have been filtered to generate the current scan and point cloud sub-map, and outputting the static map of the scene after sub-map registration.
According to a second aspect of the present invention, there is provided a laser radar static point cloud map construction system based on viewpoint visibility, comprising: a data preprocessing module, configured to collect and preprocess laser radar point cloud data and IMU information; a feature extraction module, configured to perform point cloud registration on multi-frame point cloud scans near the currently scanned laser radar point cloud to generate a local point cloud sub-map; a ground fitting module, configured to remove the ground point cloud from the currently scanned laser radar point cloud and the local point cloud sub-map to obtain the non-ground point cloud, perform spherical projection on the non-ground point cloud to generate corresponding range images, difference the range images, and screen the dynamic points in each local point cloud sub-map; and a mapping module, configured to stitch the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate a static map of the scene.
According to the laser radar static point cloud map construction method based on viewpoint visibility provided by the invention, only the non-ground point cloud is input for dynamic point removal, which solves the problem that visibility-based methods misjudge ground points as dynamic points; the non-ground point cloud is selected by screening, and the ground points are restored after the non-ground point cloud has been processed by the dynamic point removal module, so that complete ground features are retained in the final point cloud map and the accuracy of dynamic point screening is improved; the screened-out ground point cloud does not undergo dynamic point removal, which reduces the data volume processed by the dynamic point removal module and improves the real-time performance of dynamic point removal; the mapping algorithm provided by the invention remedies the shortcomings of visibility-based methods and has better applicability in highly dynamic traffic scenes.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings used in describing the embodiments and the prior art are briefly introduced below; the advantages and benefits of the solutions will become apparent to those skilled in the art from the detailed description of the embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be construed as limiting the invention. In the drawings:
Fig. 1 is a schematic step flow diagram of a laser radar static point cloud map construction method based on viewpoint visibility according to an embodiment of the present invention.
FIG. 2 is a schematic workflow diagram of ground point cloud screening according to one embodiment of the invention.
Fig. 3 illustrates the spherical projection of a point cloud onto a range image according to an embodiment of the present invention.
Fig. 4 is a point cloud map constructed by the mapping module without the dynamic point cloud removal module according to an embodiment of the present invention.
Fig. 5 is a point cloud map constructed by the mapping module after adding the dynamic point cloud removing module according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions in the embodiments of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly and specifically described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the present invention, shall fall within the scope of protection of the embodiments of the present invention.
Fig. 1 is a flowchart of the steps of a laser radar static point cloud map construction method based on viewpoint visibility provided in an embodiment of the present invention. As shown in Fig. 1, the embodiment mainly includes the following steps:
S101, collecting laser radar point cloud data and IMU information, and preprocessing.
S102, performing point cloud registration on multi-frame point cloud scans near the currently scanned laser radar point cloud to generate a local point cloud sub-map.
S103, removing the ground point cloud from the currently scanned laser radar point cloud and the local point cloud sub-map to obtain a non-ground point cloud.
S104, performing spherical projection on the non-ground point cloud to generate a corresponding range image.
S105, differencing the range images, and screening the dynamic points in each local point cloud sub-map.
S106, stitching the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate a static map of the scene.
According to the laser radar static point cloud map construction method based on viewpoint visibility provided by the invention, only the non-ground point cloud is input for dynamic point removal, which solves the problem that visibility-based methods misjudge ground points as dynamic points; the non-ground point cloud is selected by screening, and the ground points are restored after the non-ground point cloud has been processed by the dynamic point removal module, so that complete ground features are retained in the final point cloud map and the accuracy of dynamic point screening is improved; the screened-out ground point cloud does not undergo dynamic point removal, which reduces the data volume processed by the dynamic point removal module and improves the real-time performance of dynamic point removal; the mapping algorithm provided by the invention remedies the shortcomings of visibility-based methods and has better applicability in highly dynamic traffic scenes.
In another implementation of the invention, the preprocessing includes data parsing, IMU pre-integration, radar motion compensation, point cloud segmentation, and feature extraction; the preprocessed data includes valid lidar point cloud data containing available features and the initial ego-pose output by the IMU.
Illustratively, the data preprocessing module performs data parsing, IMU pre-integration, radar motion compensation, point cloud segmentation, and feature extraction on the acquired laser radar point cloud data and inertial measurement unit (IMU) data; the preprocessed data comprises valid point cloud data containing available features and the initial ego-pose output by the IMU.
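To make the IMU side of this step concrete, below is a minimal dead-reckoning sketch in Python/numpy. It is only an illustration under simplifying assumptions (fixed sample rate, no bias or covariance tracking as a full pre-integration would require); all names and defaults are illustrative, not taken from the patent.

```python
import numpy as np

def integrate_imu(R, v, p, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """Propagate rotation R, velocity v, position p through IMU samples.

    gyro: iterable of angular rates (rad/s); accel: specific forces (m/s^2).
    """
    for w, a in zip(gyro, accel):
        # rotation increment via the exponential map (Rodrigues' formula)
        theta = w * dt
        angle = np.linalg.norm(theta)
        if angle > 1e-12:
            k = theta / angle
            K = np.array([[0, -k[2], k[1]],
                          [k[2], 0, -k[0]],
                          [-k[1], k[0], 0]])
            dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
        else:
            dR = np.eye(3)
        acc_world = R @ a + g                     # gravity-compensated acceleration
        p = p + v * dt + 0.5 * acc_world * dt**2  # constant-acceleration update
        v = v + acc_world * dt
        R = R @ dR
    return R, v, p
```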
In another implementation of the present invention, performing point cloud registration on the multi-frame point cloud scans near the currently scanned laser radar point cloud to generate the local point cloud sub-map includes: selecting multi-frame point cloud scans near the currently scanned laser radar point cloud; performing point cloud registration by minimizing the distances between line features and planar features in different point cloud scans; and generating the local point cloud sub-map from the registered point cloud scans.
Illustratively, the preprocessed laser radar point cloud is input to a feature extraction module, features are extracted and registered to generate a local point cloud sub-map.
Preferably, the 5 scans nearest the current laser radar scan are selected, and the line features and planar features of these scans are expressed as:

$\{\varepsilon_{k-2}, \varepsilon_{k-1}, \varepsilon_{k}, \varepsilon_{k+1}, \varepsilon_{k+2}\}$

$\{\mathrm{H}_{k-2}, \mathrm{H}_{k-1}, \mathrm{H}_{k}, \mathrm{H}_{k+1}, \mathrm{H}_{k+2}\}$

where $\varepsilon_k$ represents the set of line features extracted from the $k$-th laser radar scan, and $\mathrm{H}_k$ represents the set of planar features extracted from the $k$-th laser radar scan.
Preferably, point cloud registration is performed by minimizing the distances between the line features and planar features of different laser radar scans, i.e., the point-to-line and point-to-plane distances of corresponding features, and the local point cloud sub-map is generated from the registered laser radar scans.
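As a sketch of the point-to-line and point-to-plane distances that such feature-based registration typically minimizes (LOAM-style residuals; the nearest-neighbor correspondences are assumed found already, and the function names are illustrative rather than from the patent):

```python
import numpy as np

def point_to_line(p, a, b):
    # distance from line-feature point p to the line through map points a, b
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(b - a)

def point_to_plane(p, a, b, c):
    # distance from planar-feature point p to the plane through map points a, b, c
    n = np.cross(b - a, c - a)
    return abs(np.dot(p - a, n)) / np.linalg.norm(n)
```

In the registration, these residuals are stacked over all feature correspondences and minimized with respect to the relative pose, e.g. with Gauss-Newton.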
In another implementation of the present invention, as shown in Fig. 2, the processed lidar point cloud is input to the ground fitting module to remove the ground point cloud and screen out the non-ground point cloud.
S1031, the input point cloud is partitioned into concentric zones in polar coordinates:

$C = \bigcup_{m=1}^{N_z} Z_m$

where $C$ represents the entire region, $Z_m$ represents the $m$-th zone, and $N_z$ represents the number of zones, empirically set to 4. $Z_m$ is given by

$Z_m = \{\, p_k \in P \mid L_{\min,m} \le \rho_k < L_{\max,m} \,\}$

where $L_{\min,m}$ and $L_{\max,m}$ respectively represent the minimum and maximum radial boundaries of $Z_m$, and $\rho_k$ is the radial distance of point $p_k$. Each $Z_m$ is further divided into $N_{r,m} \times N_{\theta,m}$ bins, with a different bin size for each zone.
S1032, the set of data points in each bin is denoted $S_n$, and the total number of bins $N_c$ is

$N_c = \sum_{m=1}^{N_z} N_{r,m} \times N_{\theta,m}$

where $N_{r,m}$ denotes the number of radial divisions and $N_{\theta,m}$ the number of circumferential divisions of zone $m$.
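A minimal sketch of the concentric-zone binning of S1031-S1032; the zone boundaries and per-zone bin counts in the usage example are illustrative placeholders, not the patent's values:

```python
import numpy as np

def bin_indices(points, zone_bounds, n_r, n_theta):
    """Map each point to a (zone, radial bin, angular bin) cell."""
    rho = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0]) + np.pi   # angle in (0, 2*pi]
    bins = {}
    for m, (lo, hi) in enumerate(zone_bounds):
        in_zone = (rho >= lo) & (rho < hi)
        r_idx = ((rho[in_zone] - lo) / (hi - lo) * n_r[m]).astype(int).clip(0, n_r[m] - 1)
        t_idx = (theta[in_zone] / (2 * np.pi) * n_theta[m]).astype(int).clip(0, n_theta[m] - 1)
        for i, (ri, ti) in zip(np.flatnonzero(in_zone), zip(r_idx, t_idx)):
            bins.setdefault((m, ri, ti), []).append(i)
    return bins

# e.g. bins = bin_indices(pts, [(2.7, 12.4), (12.4, 22.0), (22.0, 41.0), (41.0, 80.0)],
#                         n_r=[2, 4, 4, 4], n_theta=[16, 32, 54, 32])
```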
S1033, the third eigenvector, representing the height direction, is extracted by principal component analysis; the lowest points are selected as seed points, and letting $\bar{z}_{\text{seed}}$ be the average height of the $N_{\text{seed}}$ selected seed points, the initial ground point cloud estimate is

$\hat{G}_0 = \{\, p \mid z(p) < \bar{z}_{\text{seed}} + z_{\text{seed}} \,\}$

where $z(\cdot)$ returns the height value of a point and $z_{\text{seed}}$ represents the height threshold.
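A minimal sketch of the seed selection and initial ground estimate of S1033; n_seed and z_seed are illustrative parameter values:

```python
import numpy as np

def initial_ground(points, n_seed=20, z_seed=0.5):
    # mean height of the n_seed lowest points, then threshold upward by z_seed
    z = points[:, 2]
    z_bar = np.sort(z)[:n_seed].mean()
    return points[z < z_bar + z_seed]      # initial ground estimate G_0
```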
S1034, the ground point set obtained at the $l$-th iteration is denoted $\hat{G}_l$, and the normal vector of the ground points at the $l$-th iteration is denoted $\hat{n}_l$.
S1035, the ground plane coefficient is calculated from the ground-point normal vector of the previous iteration:

$d_l = -\hat{n}_l^{\top}\,\bar{p}_l$

where $\bar{p}_l$ is the average of all points classified as ground at the $l$-th iteration.
S1036, the ground points of the $(l+1)$-th iteration are then obtained as

$\hat{G}_{l+1} = \{\, p_k \mid \hat{n}_l^{\top} p_k + d_l < M_d \,\}$

where $M_d$ represents the distance threshold below which a point is assigned to the ground set.
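A minimal sketch of the iterative PCA plane fit of S1034-S1036, reusing initial_ground from the sketch above; n_iter and m_d are illustrative:

```python
import numpy as np

def fit_ground(points, n_iter=3, m_d=0.3):
    ground = initial_ground(points)
    for _ in range(n_iter):
        centroid = ground.mean(axis=0)
        cov = np.cov((ground - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        n = eigvecs[:, 0]                        # normal = smallest-eigenvalue eigenvector
        if n[2] < 0:
            n = -n                               # keep the normal pointing upward
        d = -n @ centroid                        # plane coefficient d = -n . centroid
        ground = points[points @ n + d < m_d]    # re-select points near the plane
    return n, d, ground
```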
S1037, ground likelihood estimation is performed by combining the probabilities of three aspects: verticality, height, and flatness; a bin whose likelihood product exceeds 0.5 is regarded as ground. The likelihood of each bin is written as $f(\chi_n \mid \theta_n)$, where $\bar{z}_n$, $r_n$, and $\sigma_n$ respectively represent the average z-value of $S_n$, the distance between the origin and the centroid of $S_n$, and the surface variable of $S_n$.
Further, the verticality, height, and flatness terms are calculated as follows:
S10371, the verticality of each plane is judged using the third eigenvector $v_3$ extracted per bin by principal component analysis: the plane is taken to be ground-like when the angle between $v_3$ and the vertical axis is within the verticality threshold $\theta_\tau$, yielding a verticality term valued in [0, 1].
S10372, to handle the case where points lying above an occluded space would otherwise be judged as ground, the height term is estimated with a function whose adaptive midpoint $\kappa(\cdot)$ grows exponentially with $r_n$.
S10373, based on the result of the preceding height judgment, the flatness term is calculated to reduce misjudgment on uphill slope planes.
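The combination of the three terms can be sketched as follows. The patent's exact functional forms for the height and flatness terms are not reproduced above, so the thresholds and functions below (theta_tau, kappa, c, sigma_tau) are illustrative stand-ins consistent with the verticality/height/flatness description, not the patent's parameters:

```python
import numpy as np

def ground_likelihood(v3, z_bar, r_n, sigma_n,
                      theta_tau=np.deg2rad(30.0), c=0.1, sigma_tau=0.01,
                      kappa=lambda r: 0.1 * np.exp(r / 40.0)):
    # verticality: plane normal v3 close to the vertical axis => horizontal plane
    f_vert = 1.0 if abs(v3[2]) > np.cos(theta_tau) else 0.0
    # height: logistic penalty on bins elevated above the adaptive midpoint kappa(r_n)
    f_height = 1.0 / (1.0 + np.exp((z_bar - kappa(r_n)) / c))
    # flatness: small surface variable sigma_n indicates a flat patch
    f_flat = 1.0 if sigma_n < sigma_tau else 0.0
    return f_vert * f_height * f_flat      # bin is treated as ground if > 0.5
```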
In another implementation of the present invention, spherical projection is performed on the non-ground point cloud to generate a corresponding range image, and the pixel value of each point is calculated by

$r^{k}(p) = \lVert p \rVert_{2}$

where $r^{k}(p)$ represents the distance from point $p$ to the origin of the local coordinate system of the $k$-th keyframe.
Illustratively, as shown in Fig. 3, spherical projection converts the point cloud into a range image: the image coordinates of each point are determined by its azimuth angle (column index) and elevation angle (row index) relative to the sensor.
Spherical projection is performed on the current scan point cloud and the radar local map to generate corresponding range images, and the pixel value of each point is calculated by

$r^{k}(p) = \lVert p \rVert_{2}$

where $r^{k}(p)$ represents the distance from point $p$ to the origin of the local coordinate system of the $k$-th keyframe.
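A minimal numpy sketch of the projection and range-image generation described above; the image size and the fov_up/fov_down defaults assume a Velodyne-like vertical field of view and are illustrative, not values from the patent:

```python
import numpy as np

def to_range_image(points, h=64, w=1024,
                   fov_up=np.deg2rad(2.0), fov_down=np.deg2rad(-24.8)):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)                     # pixel value: range
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / r)
    u = (0.5 * (1.0 - yaw / np.pi) * w).astype(int) % w           # column from azimuth
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)  # row from elevation
    v = np.clip(v, 0, h - 1)
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (v, u), r)                                 # keep nearest return per pixel
    return img
```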
In another implementation of the present invention, differencing the range images and screening the dynamic points in each local point cloud sub-map includes: subtracting, element-wise, the pixel values of the range images of the local point cloud sub-map and the current scan:

$\Delta r(p) = r^{\text{map}}(p) - r^{\text{scan}}(p)$

The difference $\Delta r(p)$ is compared with a threshold $\tau$; if it is larger than $\tau$, the corresponding point is a dynamic point, with the threshold defined as

$\tau = \gamma\,\mathrm{dist}(p)$

where $\gamma$ is the sensitivity to the point distance and $\mathrm{dist}(\cdot)$ takes the distance value of the corresponding point.
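A minimal sketch of the differencing and thresholding step, assuming the two range images are aligned pixel for pixel; the subtraction direction (sub-map minus current scan) follows the order stated above, and gamma's default is illustrative:

```python
def dynamic_pixels(map_img, scan_img, gamma=0.01):
    diff = map_img - scan_img      # element-wise pixel difference
    tau = gamma * map_img          # tau = gamma * dist(p), per pixel
    return diff > tau              # True where the pixel is screened as dynamic
```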
In another implementation of the present invention, stitching the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate the static map of the scene includes: fusing the ground point cloud with the non-ground point cloud from which the dynamic points have been filtered to generate the current scan and point cloud sub-map, and outputting the static map of the scene after sub-map registration.
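The fusion can then be sketched as follows, with illustrative names; dynamic_mask is assumed to flag, per non-ground point, whether its range-image pixel was screened as dynamic:

```python
import numpy as np

def fuse_static(ground_pts, nonground_pts, dynamic_mask):
    # keep the non-ground points that were not flagged as dynamic,
    # then restore the previously screened-out ground points
    static_pts = nonground_pts[~dynamic_mask]
    return np.vstack([ground_pts, static_pts])
```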
Illustratively, the generated point cloud maps are shown in Fig. 4 and Fig. 5: Fig. 4 shows the point cloud map constructed by the mapping module without the dynamic point cloud removal module, and Fig. 5 shows the point cloud map constructed after adding the dynamic point cloud removal module to the mapping module.
According to a second aspect of the present invention, there is provided a laser radar static point cloud map construction system based on viewpoint visibility, comprising:
A data preprocessing module, configured to collect and preprocess laser radar point cloud data and IMU information.
A feature extraction module, configured to perform point cloud registration on multi-frame point cloud scans near the currently scanned laser radar point cloud to generate a local point cloud sub-map.
A ground fitting module, configured to remove the ground point cloud from the currently scanned laser radar point cloud and the local point cloud sub-map to obtain the non-ground point cloud, perform spherical projection on the non-ground point cloud to generate corresponding range images, difference the range images, and screen the dynamic points in each local point cloud sub-map.
A mapping module, configured to stitch the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate a static map of the scene.
Preferably, a dynamic point cloud removal module can be arranged in the mapping module.
Thus, specific embodiments of the present invention have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
It should be noted that, in the embodiments of the present invention, all directional indicators (such as up, down, left, right, front, and back) are merely used to explain the relative positional relationship, movement conditions, and the like between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change correspondingly.
In the description of the present invention, the terms "first," "second," and the like are used merely for convenience in describing the various components or names, and are not to be construed as indicating or implying a sequential relationship, relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
It should be noted that, although specific embodiments of the present invention have been described in detail with reference to the accompanying drawings, this should not be construed as limiting the scope of protection of the present invention. Various modifications and variations that can be made by those skilled in the art without creative effort fall within the protection scope of the present invention as described in the claims.
Examples of embodiments of the present invention are intended to briefly illustrate technical features of embodiments of the present invention so that those skilled in the art may intuitively understand the technical features of the embodiments of the present invention, and are not meant to be undue limitations of the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (7)
1. A laser radar static point cloud map construction method based on viewpoint visibility is characterized by comprising the following steps:
collecting laser radar point cloud data and IMU information, and preprocessing;
performing point cloud registration on multi-frame point cloud scans near the currently scanned laser radar point cloud to generate a local point cloud sub-map;
removing the ground point cloud from the currently scanned laser radar point cloud and the local point cloud sub-map to obtain a non-ground point cloud;
performing spherical projection on the non-ground point cloud to generate a corresponding range image;
differencing the range images, and screening the dynamic points in each local point cloud sub-map;
and stitching the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate a static map of the scene.
2. The method of claim 1, wherein the preprocessing comprises data parsing, IMU pre-integration, radar motion compensation, point cloud segmentation, and feature extraction;
the preprocessed data includes valid lidar point cloud data containing available features and the initial ego-pose output by the IMU.
3. The method of claim 1, wherein performing point cloud registration on the multi-frame point cloud scans near the currently scanned laser radar point cloud to generate the local point cloud sub-map comprises:
selecting multi-frame point cloud scans near the currently scanned laser radar point cloud;
performing point cloud registration by minimizing the distances between line features and planar features in different point cloud scans;
and generating the local point cloud sub-map from the registered point cloud scans.
4. The method of claim 1, wherein performing the spherical projection on the non-ground point cloud generates a corresponding range image, and the pixel value of each point is calculated by

$r^{k}(p) = \lVert p \rVert_{2}$

where $r^{k}(p)$ represents the distance from point $p$ to the origin of the local coordinate system of the $k$-th keyframe.
5. The method of claim 4, wherein differencing the range images and screening the dynamic points in each local point cloud sub-map comprises:
subtracting, element-wise, the pixel values of the range images of the local point cloud sub-map and the current scan:

$\Delta r(p) = r^{\text{map}}(p) - r^{\text{scan}}(p)$

comparing the difference $\Delta r(p)$ with a threshold $\tau$; if it is larger than $\tau$, the corresponding point is a dynamic point, the threshold being defined as

$\tau = \gamma\,\mathrm{dist}(p)$

where $\gamma$ is the sensitivity to the point distance and $\mathrm{dist}(\cdot)$ takes the distance value of the corresponding point.
6. The method of claim 5, wherein stitching the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate the static map of the scene comprises:
fusing the ground point cloud with the non-ground point cloud from which the dynamic points have been filtered to generate the current scan and point cloud sub-map, and outputting the static map of the scene after sub-map registration.
7. A laser radar static point cloud map construction system based on viewpoint visibility, characterized by comprising:
a data preprocessing module, configured to collect and preprocess laser radar point cloud data and IMU information;
a feature extraction module, configured to perform point cloud registration on multi-frame point cloud scans near the currently scanned laser radar point cloud to generate a local point cloud sub-map;
a ground fitting module, configured to remove the ground point cloud from the currently scanned laser radar point cloud and the local point cloud sub-map to obtain a non-ground point cloud, perform spherical projection on the non-ground point cloud to generate corresponding range images, difference the range images, and screen the dynamic points in each local point cloud sub-map;
and a mapping module, configured to stitch the local point cloud sub-maps in the scene based on the dynamic points in each local point cloud sub-map to generate a static map of the scene.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410030021.1A CN118067102A (en) | 2024-01-09 | 2024-01-09 | Laser radar static point cloud map construction method and system based on viewpoint visibility |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410030021.1A CN118067102A (en) | 2024-01-09 | 2024-01-09 | Laser radar static point cloud map construction method and system based on viewpoint visibility |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118067102A true CN118067102A (en) | 2024-05-24 |
Family
ID=91106779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410030021.1A Pending CN118067102A (en) | 2024-01-09 | 2024-01-09 | Laser radar static point cloud map construction method and system based on viewpoint visibility |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118067102A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||