CN113640826B - Obstacle identification method and system based on 3D laser point cloud - Google Patents

Obstacle identification method and system based on 3D laser point cloud

Info

Publication number
CN113640826B
CN113640826B (application CN202110920554.3A)
Authority
CN
China
Prior art keywords
point
ground
point cloud
threshold value
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110920554.3A
Other languages
Chinese (zh)
Other versions
CN113640826A (en)
Inventor
皇攀凌
李留昭
周军
赵一凡
林乐彬
欧金顺
高新彪
孟广辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Alesmart Intelligent Technology Co Ltd
Original Assignee
Shandong University
Shandong Alesmart Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University, Shandong Alesmart Intelligent Technology Co Ltd filed Critical Shandong University
Priority to CN202110920554.3A priority Critical patent/CN113640826B/en
Priority to AU2021266206A priority patent/AU2021266206B1/en
Publication of CN113640826A publication Critical patent/CN113640826A/en
Application granted granted Critical
Publication of CN113640826B publication Critical patent/CN113640826B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The disclosure provides an obstacle identification method and system based on a 3D laser point cloud, which acquire three-dimensional environment laser point cloud data; perform ground segmentation on the three-dimensional environment laser point cloud data to obtain non-ground laser point cloud data; partition the non-ground laser point cloud data and mark the obstacles of each partition using two-dimensional Euclidean clustering; and package the clustering result with a cuboid model to obtain the position and size of each obstacle. The method and the device effectively improve the real-time performance of ground segmentation and point cloud clustering, and improve the recognition accuracy of obstacles at longer distances.

Description

Obstacle identification method and system based on 3D laser point cloud
Technical Field
The disclosure relates to the technical field of obstacle detection, in particular to an obstacle identification method and system based on a 3D laser point cloud.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the rapid development of unmanned driving and 3D laser SLAM (Simultaneous Localization And Mapping) technology, map construction based on three-dimensional lidar has advanced rapidly; a carrier can obtain the azimuth information of obstacles from lidar point cloud information and thereby avoid them accurately. Obstacle detection based on laser point clouds can be divided into two steps: first, the ground point cloud is segmented and removed, so that each obstacle point cloud is separated from the ground; then, a clustering algorithm marks the corresponding obstacles, and a visualization package outputs the length, width and height of each obstacle and its geometric center coordinates relative to the radar.
The inventors have found that, in the prior art, ground segmentation and point cloud clustering either traverse all points or fit the ground by repeated iteration. As a result, real-time performance is poor, accuracy decreases with distance, the false detection rate of obstacles is high, and the obstacle avoidance requirements of complex three-dimensional indoor and outdoor environments cannot be met.
Disclosure of Invention
In order to overcome the deficiencies of the prior art, the present disclosure provides an obstacle identification method and system based on a 3D laser point cloud, which effectively improve the real-time performance of ground segmentation and point cloud clustering and the accuracy of identifying obstacles at longer distances.
In order to achieve the above purpose, the present disclosure adopts the following technical scheme:
the first aspect of the present disclosure provides a method for identifying an obstacle based on a 3D laser point cloud.
An obstacle identification method based on 3D laser point cloud comprises the following steps:
acquiring laser point cloud data of a three-dimensional environment;
performing ground segmentation processing on the three-dimensional environment laser point cloud data to obtain non-ground laser point cloud data;
partitioning non-ground laser point cloud data, and marking obstacles of each partition by utilizing two-dimensional Euclidean clustering;
and packaging the clustering result by using a cuboid model to obtain the position and the size of the obstacle.
A second aspect of the present disclosure provides an obstacle recognition system based on a 3D laser point cloud.
An obstacle recognition system based on a 3D laser point cloud, comprising:
a data acquisition module configured to: acquiring laser point cloud data of a three-dimensional environment;
a point cloud processing module configured to: performing ground segmentation processing on the three-dimensional environment laser point cloud data to obtain non-ground laser point cloud data;
a point cloud clustering module configured to: partitioning non-ground laser point cloud data, and marking obstacles of each partition by utilizing two-dimensional Euclidean clustering;
an obstacle recognition module configured to: packaging the clustering result by using a cuboid model to obtain the position and the size of the obstacle.
A third aspect of the present disclosure provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the steps in the 3D laser point cloud based obstacle identification method according to the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides an electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the steps in the 3D laser point cloud based obstacle identification method according to the first aspect of the present disclosure when the program is executed.
Compared with the prior art, the beneficial effects of the present disclosure are:
1. In the method, system, medium or electronic device of the present disclosure, the point cloud is preprocessed by voxel-filter downsampling based on the radar's initial position, which effectively improves point cloud processing efficiency.
2. In the method, system, medium or electronic device of the present disclosure, the double circumferential and radial partition of the point cloud narrows the search range during ground segmentation and effectively improves segmentation efficiency.
3. In the method, system, medium or electronic device of the present disclosure, real-time ground segmentation is achieved using a relative height threshold calculated from the horizontal distance between two adjacent points in a circumferential region together with a global height threshold of the current point relative to the radar, yielding complete non-ground laser point cloud information.
4. In the method, system, medium or electronic device of the present disclosure, the point cloud to be clustered is divided into different radial distance segments and different clustering parameter thresholds are set in each segment, which narrows the search range during clustering and effectively improves clustering efficiency.
5. In the method, system, medium or electronic device of the present disclosure, the Z value of the clustered point cloud is set to 0, converting the three-dimensional points into two-dimensional points on the horizontal plane, and a two-dimensional KDTree is created; this reduces the amount of point-to-point distance computation and effectively improves the processing rate.
6. In the method, system, medium or electronic device of the present disclosure, the maximum and minimum X, Y and Z values of each cluster are obtained from the clustered point cloud information, the pose of the obstacle relative to the radar is obtained from the mean X, Y and Z values of each cluster, and the obstacle is packaged with a cuboid model for visualization, realizing accurate detection and identification of obstacles.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the exemplary embodiments of the disclosure and together with the description serve to explain the disclosure, and do not constitute an undue limitation on the disclosure.
Fig. 1 is a flow chart of an obstacle recognition method based on a 3D laser point cloud according to embodiment 1 of the present disclosure.
Fig. 2 is a schematic diagram of point cloud data processing provided in embodiment 1 of the present disclosure.
Fig. 3 is a ground segmentation flow chart provided in embodiment 1 of the present disclosure.
Fig. 4 is a flowchart of non-terrestrial laser point cloud clustering provided in embodiment 1 of the present disclosure.
Fig. 5 is a flowchart of a cluster visualization package process provided in embodiment 1 of the present disclosure.
Detailed Description
The disclosure is further described below with reference to the drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments in accordance with the present disclosure. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
Example 1:
as shown in fig. 1, embodiment 1 of the present disclosure provides an obstacle identifying method based on a 3D laser point cloud, including the following processes:
s1: acquiring three-dimensional environment laser point cloud data, and performing preprocessing, circumferential and radial rasterization processing based on the three-dimensional environment laser point cloud data to realize efficient real-time ground segmentation;
s2: the segmented non-ground laser point cloud is processed, and obstacle marking of each region is realized by utilizing two-dimensional European clustering;
s3: the clusters are subjected to visual encapsulation processing through the cuboid model, so that the positions and the sizes of the barriers are obtained, and the accurate recognition of the barriers is realized.
S1 specifically comprises:
s1.1: acquiring original point cloud data of a three-dimensional environment by utilizing a multi-line laser radar, wherein the number of the point clouds is very large, so that under the condition of not losing environment information, voxel downsampling with proper parameters is adopted to preprocess the original point clouds, and the subsequent processing efficiency of the point cloud data is improved; based on different environments, setting maximum and minimum point cloud height thresholds which are adaptive to the environments, and removing point clouds which are higher than the radar or lower than the bottom surface of the initial radar by a certain distance; setting maximum and minimum point cloud radius thresholds which are suitable for the environment, removing point clouds with a certain distance and a longer distance around the radar, reducing the calculated amount of the subsequent steps, and reserving the point cloud information of a useful area.
A schematic diagram of the radar point cloud data processing is shown in FIG. 2, where R0-R3 represent increasing distances from the radar, R is the horizontal radius, θ is the horizontal angle, θi is the circumferential zone number, γ is the elevation of the point, and P(X, Y, Z) is an arbitrary point (belonging to the Ni region).
S1.2: Rasterize the point cloud using the radar's circumferential resolution and radial resolution, dividing it into N regions; traverse all points and calculate the horizontal radius, horizontal angle, radial region and circumferential region of each point according to the following formula:
where r_num is the radial region index, θ_num is the circumferential region index (the result is rounded down), m_pi is the radian-to-degree conversion factor (approximately 57.3°), θ_precision is the circumferential resolution (built into the radar), and r_precision is the radial resolution.
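Since the formula itself is not reproduced in this text, the sketch below shows one plausible reading of the partition rule that is consistent with the variable definitions above; the exact expression and the resolution values are assumptions.

```python
import numpy as np

RAD2DEG = 57.29578  # the "m_pi = 57.3 deg" radian-to-degree factor mentioned above

def polar_grid_indices(points, r_precision=1.0, theta_precision=0.2):
    """Horizontal radius, horizontal angle, and radial/circumferential region indices.

    points : (N, 3) array; r_precision (m) and theta_precision (deg) stand in for
    the radar's radial resolution and built-in circumferential resolution.
    """
    x, y = points[:, 0], points[:, 1]
    r = np.hypot(x, y)                              # horizontal radius
    theta = (np.arctan2(y, x) * RAD2DEG) % 360.0    # horizontal angle in [0, 360) degrees
    r_num = np.floor(r / r_precision).astype(int)              # radial region index
    theta_num = np.floor(theta / theta_precision).astype(int)  # circumferential region index
    return r, theta, r_num, theta_num
```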
Packaging and storing the processed point clouds, and sorting the radial distance of the point clouds of each circumferential area;
Traversing all points of each circumferential region, calculate for each the horizontal distance between the current point and the previous point, and use the slope threshold between adjacent points to obtain the relative height threshold of the current point; if the two points are so close together that this threshold becomes small, set it to the given minimum height threshold; in addition, calculate a global height threshold from the radius of the current point and the slope threshold of the ground relative to the radar;
Judge whether the vertical measurement of the current point is within the neighborhood of the relative height threshold of the previous point; if it is, check whether the previous point is a ground point; if the previous point is not a ground point, further judge whether the vertical measurement of the current point is within the neighborhood of the global height threshold relative to the ground where the radar is located, and if it is, the current point is judged to be a ground point, otherwise a non-ground point;
If the previous point is a ground point, the current point is also a ground point. If the vertical measurement of the current point is not within the neighborhood of the relative height threshold of the previous point, judge whether the distance of the current point from the previous point is greater than the set maximum distance threshold and whether the vertical height of the current point is within the neighborhood of the height threshold relative to the initial ground; if the distance is greater than the preset maximum distance threshold and the vertical measurement is within that neighborhood, the current point is judged to be a ground point, otherwise a non-ground point.
S1.3: Package and store the ground points and the non-ground points separately, and save the point cloud indexes of both for subsequent point cloud clustering.
As shown in fig. 3, the ground segmentation flow chart uses the following quantities: h_max and h_min are the maximum and minimum height thresholds, r_max and r_min are the maximum and minimum radius thresholds, r_num is the radial region index, θ_num is the circumferential region index, Distance is the horizontal distance between the current point and the previous point, h_t is the relative height threshold between the current point and the previous point, h_min_t is the minimum height threshold, pre_z is the vertical measurement (Z coordinate) of the previous point, distance_max is the maximum distance threshold between two adjacent points, and hg_t is the global height threshold of the current point relative to the initial ground.
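As a compact illustration of the decision rule above, the sketch below labels the points of a single circumferential region, already sorted by radial distance, using the FIG. 3 quantities h_t, hg_t, pre_z and distance_max; the slope thresholds, the minimum height threshold, and the assumption that Z is measured from the initial ground plane are all illustrative choices, not values fixed by the patent.

```python
import numpy as np

def segment_region(pts, slope=np.tan(np.radians(10.0)), h_min_t=0.05,
                   distance_max=3.0, global_slope=np.tan(np.radians(5.0))):
    """Label the points of one circumferential region (sorted by radius) as ground
    (True) or non-ground (False), following the relative/global threshold logic above."""
    labels = np.zeros(len(pts), dtype=bool)
    pre_r, pre_z, pre_is_ground = 0.0, 0.0, True   # start from the radar's initial ground
    for i, (x, y, z) in enumerate(pts):
        r = np.hypot(x, y)
        distance = r - pre_r                        # horizontal gap to the previous point
        h_t = max(distance * slope, h_min_t)        # relative height threshold
        hg_t = r * global_slope                     # global height threshold w.r.t. initial ground
        if abs(z - pre_z) <= h_t:                   # within the previous point's neighborhood
            is_ground = pre_is_ground or abs(z) <= hg_t
        else:                                       # outside the relative neighborhood
            is_ground = distance > distance_max and abs(z) <= hg_t
        labels[i] = is_ground
        pre_r, pre_z, pre_is_ground = r, z, is_ground
    return labels
```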
S2 specifically comprises:
Partitioning the non-ground laser point cloud to improve the computational efficiency of the clustering process;
removing the points farther than the set maximum distance threshold, partitioning the remaining points according to their different distances from the radar, setting region segments suited to the environment and the desired clustering effect according to the horizontal distance, setting an appropriate clustering distance and maximum and minimum cluster point-number thresholds within each region segment, and storing the points of each region into the corresponding queue.
The Z values of the points are set to 0, i.e., the points of each region are reduced in dimension so that they carry only horizontal information, which greatly reduces the distance computation during clustering; then a two-dimensional KDTree of the point cloud is created to enable nearest-neighbor search in two-dimensional space; finally, the segmented non-ground laser point cloud is input, nearest-neighbor search with Euclidean clustering is performed to classify and mark the object points, and the pre-dimension-reduction indexes of each cluster in the clustering result are stored into the corresponding queues.
The flow chart of non-ground point clustering is shown in fig. 4, where R is the horizontal radius of a point, Cluster_R_MAX is the maximum clustering radius, N is the number of divided regions, R_i is the lower limit of the i-th region, R_i+1 is the upper limit of the i-th region, R_j is the horizontal radius of the j-th point, Index[i] is the point cloud index queue of the i-th region, Points_Num is the number of points, C_threshold[i] is the clustering distance threshold of the i-th region, Min_Num[i] is the minimum cluster point number of the i-th region, and Max_Num[i] is the maximum cluster point number of the i-th region.
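A minimal sketch of the two-dimensional Euclidean clustering step, using scipy's cKDTree for the nearest-neighbour search on the Z-flattened points; the tolerance and cluster-size bounds correspond to the per-segment C_threshold[i], Min_Num[i] and Max_Num[i] above, with illustrative default values.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster_2d(points, tolerance=0.5, min_size=10, max_size=5000):
    """Cluster points on the horizontal plane (Z ignored, i.e. set to 0).

    Returns a list of index arrays, one per accepted cluster, so the indexes refer
    back to the points before dimension reduction.
    """
    xy = points[:, :2]                      # drop Z: 2D nearest-neighbour search only
    tree = cKDTree(xy)
    visited = np.zeros(len(xy), dtype=bool)
    clusters = []
    for seed in range(len(xy)):
        if visited[seed]:
            continue
        # Region growing from the seed point over neighbours within `tolerance`.
        queue, members = [seed], []
        visited[seed] = True
        while queue:
            p = queue.pop()
            members.append(p)
            for q in tree.query_ball_point(xy[p], tolerance):
                if not visited[q]:
                    visited[q] = True
                    queue.append(q)
        if min_size <= len(members) <= max_size:
            clusters.append(np.array(members))
    return clusters
```

Each radial region segment would call this routine with its own tolerance and size bounds, which is the per-segment parameterization described in S2.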
S3 specifically comprises:
Acquiring the clustered point cloud information, comparing the X, Y and Z values within each cluster, and screening out the maximum and minimum X, Y and Z values of each cluster; then constructing a cuboid model from these six values and packaging the cluster for visualization, so that a spatial cuboid containing the cluster's points is obtained in real time; obtaining the maximum length, width and height of the obstacle from the differences between the maximum and minimum X, Y and Z values, taking the mean X, Y and Z values of the cluster's points as the position of the obstacle, and finally obtaining the pose of the obstacle from this position information, realizing accurate obstacle identification.
As shown in FIG. 5, the cluster visualization packaging process flow uses the following quantities: Cluster_Num is the number of categories in the clustering result, X_Max[i] and X_Min[i] are the maximum and minimum values of the i-th cluster in the X direction, Y_Max[i] and Y_Min[i] are the maximum and minimum values of the i-th cluster in the Y direction, Z_Max[i] and Z_Min[i] are the maximum and minimum values of the i-th cluster in the Z direction, and L, W and H are the length, width and height of the cuboid, respectively.
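The packaging step of S3 reduces to per-cluster minimum, maximum and mean statistics; a minimal sketch, assuming each cluster is an index array into the full point cloud:

```python
import numpy as np

def package_cluster(points, cluster_idx):
    """Axis-aligned cuboid size (L, W, H) and centroid position of one cluster."""
    pts = points[cluster_idx]
    p_min = pts.min(axis=0)                   # X_Min, Y_Min, Z_Min
    p_max = pts.max(axis=0)                   # X_Max, Y_Max, Z_Max
    length, width, height = p_max - p_min     # obstacle length, width, height
    position = pts.mean(axis=0)               # obstacle position (mean X, Y, Z) relative to the radar
    return (length, width, height), position
```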
Example 2:
embodiment 2 of the present disclosure provides an obstacle recognition system based on a 3D laser point cloud, including:
a data acquisition module configured to: acquiring laser point cloud data of a three-dimensional environment;
a point cloud processing module configured to: performing ground segmentation processing on the three-dimensional environment laser point cloud data to obtain non-ground laser point cloud data;
a point cloud clustering module configured to: partitioning non-ground laser point cloud data, and marking obstacles of each partition by utilizing two-dimensional Euclidean clustering;
an obstacle recognition module configured to: packaging the clustering result by using a cuboid model to obtain the position and the size of the obstacle.
The working method of the system is the same as the obstacle identifying method based on the 3D laser point cloud provided in embodiment 1, and will not be described here again.
Example 3:
embodiment 3 of the present disclosure provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the steps in the 3D laser point cloud-based obstacle recognition method according to embodiment 1 of the present disclosure.
Example 4:
embodiment 4 of the present disclosure provides an electronic device, including a memory, a processor, and a program stored on the memory and executable on the processor, where the processor implements steps in the obstacle identifying method based on 3D laser point clouds according to embodiment 1 of the present disclosure when executing the program.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random access Memory (Random AccessMemory, RAM), or the like.
The foregoing description of the preferred embodiments of the present disclosure is provided for illustration only and is not intended to limit the disclosure; various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (9)

1. An obstacle identification method based on a 3D laser point cloud, characterized by comprising the following steps:
acquiring laser point cloud data of a three-dimensional environment;
performing ground segmentation processing on the three-dimensional environment laser point cloud data to obtain non-ground laser point cloud data;
partitioning non-ground laser point cloud data, and marking obstacles of each partition by utilizing two-dimensional Euclidean clustering;
packaging the clustering result by using a cuboid model to obtain the position and the size of the obstacle;
performing ground segmentation processing on the acquired three-dimensional environment laser point cloud data, wherein the ground segmentation processing comprises the following steps:
the method comprises the steps of rasterizing point clouds by utilizing radar circumferential resolution and radial resolution, dividing the point clouds into a plurality of areas, traversing all the point clouds, and calculating the horizontal radius, the included angle, the radial area and the circumferential area of each point cloud;
packaging and storing the processed point clouds, and sorting the radial distance of the point clouds of each circumferential area;
traversing all point clouds of each circumferential area, calculating distance values of a current point and a previous point in a horizontal plane, and calculating a relative height threshold value of the current point by using gradient threshold values between adjacent points;
if the distance between the two points is small and the resulting height threshold is smaller than the preset value, setting the height threshold to the given minimum height threshold, and calculating a global height threshold by using the radius of the current point and the slope threshold of the ground relative to the radar;
judging whether the vertical measurement of the current point is within the neighborhood of the relative height threshold of the previous point; if it is, judging whether the previous point is a ground point; if the previous point is not a ground point, further judging whether the vertical measurement of the current point is within the neighborhood of the global height threshold relative to the ground where the radar is located, and if it is, judging that the current point is a ground point, otherwise judging that the current point is a non-ground point;
if the previous point is a ground point, the current point is a ground point;
if the vertical measurement of the current point is not within the neighborhood of the relative height threshold of the previous point, judging whether the distance of the current point from the previous point is greater than the set maximum distance threshold and whether the vertical height of the current point is within the neighborhood of the height threshold relative to the initial ground; if the distance is greater than the preset maximum distance threshold and the vertical measurement is within that neighborhood, judging that the current point is a ground point, otherwise judging that the current point is a non-ground point.
2. The obstacle identification method based on the 3D laser point cloud as claimed in claim 1, wherein:
performing ground segmentation processing on the acquired three-dimensional environment laser point cloud data, wherein the ground segmentation processing comprises the following steps:
performing downsampling pretreatment on the acquired three-dimensional environment laser point cloud data by adopting a voxel filter;
removing point clouds higher than a first preset distance of the radar or lower than a second preset distance of the bottom surface of the initial radar according to a preset maximum height threshold value and a preset minimum height threshold value;
and removing the point clouds within the third preset distance and beyond the fourth preset distance around the radar according to the preset maximum radius threshold and the preset minimum radius threshold.
3. The obstacle identification method based on the 3D laser point cloud as claimed in claim 1, wherein:
partitioning the obtained non-ground laser point cloud data, including:
and removing the points that are farther than the set maximum radius threshold, and partitioning the remaining points according to their different distances from the radar.
4. The obstacle recognition method based on the 3D laser point cloud as claimed in claim 3, wherein:
the obstacle marking of each partition by using the two-dimensional European cluster comprises the following steps:
setting region segments according to the horizontal distance, setting a clustering distance, a maximum clustering point number and a minimum clustering point number threshold value in each region segment, and storing point clouds of each region into a corresponding queue;
setting the Z values of the point clouds to be zero, and reducing the dimension of the point clouds of each area;
creating a two-dimensional KDTree of the point cloud;
and performing nearest neighbor search on the segmented non-ground laser point cloud by using Euclidean clustering, classifying and marking the object point cloud, and storing the indexes before dimension reduction of each type of point cloud of the clustering result into corresponding queues.
5. The obstacle identification method based on the 3D laser point cloud as claimed in claim 1, wherein:
packaging the clustering result by using a cuboid model to obtain the position and the size of the obstacle, wherein the method comprises the following steps:
obtaining clustered point cloud data, and screening out the maximum and minimum values of X, Y and Z in each class by respectively comparing the X value, the Y value and the Z value in each class;
constructing a cuboid model by using the obtained six values, performing visual packaging treatment on the clusters to obtain a space cuboid containing the point clouds in real time, and respectively solving the average value of the X value, the Y value and the Z value of the point clouds as the position of an obstacle;
and obtaining the pose of the obstacle by using the position of the obstacle.
6. The obstacle identification method based on the 3D laser point cloud as claimed in claim 5, wherein:
the maximum length, maximum width and maximum height of the obstacle are obtained by the difference between the maximum and minimum values of X, Y, Z.
7. An obstacle recognition system based on a 3D laser point cloud, characterized by comprising:
a data acquisition module configured to: acquiring laser point cloud data of a three-dimensional environment;
a point cloud processing module configured to: performing ground segmentation processing on the three-dimensional environment laser point cloud data to obtain non-ground laser point cloud data; the method comprises the following steps:
the method comprises the steps of rasterizing point clouds by utilizing radar circumferential resolution and radial resolution, dividing the point clouds into a plurality of areas, traversing all the point clouds, and calculating the horizontal radius, the included angle, the radial area and the circumferential area of each point cloud;
packaging and storing the processed point clouds, and sorting the radial distance of the point clouds of each circumferential area;
traversing all point clouds of each circumferential area, calculating distance values of a current point and a previous point in a horizontal plane, and calculating a relative height threshold value of the current point by using gradient threshold values between adjacent points;
if the distance between the two points is small and the resulting height threshold is smaller than the preset value, setting the height threshold to the given minimum height threshold, and calculating a global height threshold by using the radius of the current point and the slope threshold of the ground relative to the radar;
judging whether the vertical measurement of the current point is within the neighborhood of the relative height threshold of the previous point; if it is, judging whether the previous point is a ground point; if the previous point is not a ground point, further judging whether the vertical measurement of the current point is within the neighborhood of the global height threshold relative to the ground where the radar is located, and if it is, judging that the current point is a ground point, otherwise judging that the current point is a non-ground point;
if the previous point is a ground point, the current point is a ground point;
if the vertical measurement of the current point is not within the neighborhood of the relative height threshold of the previous point, judging whether the distance of the current point from the previous point is greater than the set maximum distance threshold and whether the vertical height of the current point is within the neighborhood of the height threshold relative to the initial ground; if the distance is greater than the preset maximum distance threshold and the vertical measurement is within that neighborhood, judging that the current point is a ground point, otherwise judging that the current point is a non-ground point;
a point cloud clustering module configured to: partitioning non-ground laser point cloud data, and marking obstacles of each partition by utilizing two-dimensional Euclidean clustering;
an obstacle recognition module configured to: packaging the clustering result by using a cuboid model to obtain the position and the size of the obstacle.
8. A computer-readable storage medium, on which a program is stored, which program, when being executed by a processor, implements the steps in the 3D laser point cloud based obstacle recognition method as claimed in any one of claims 1-6.
9. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor performs the steps in the 3D laser point cloud based obstacle recognition method as claimed in any one of claims 1-6.
CN202110920554.3A 2021-08-11 2021-08-11 Obstacle identification method and system based on 3D laser point cloud Active CN113640826B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110920554.3A CN113640826B (en) 2021-08-11 2021-08-11 Obstacle identification method and system based on 3D laser point cloud
AU2021266206A AU2021266206B1 (en) 2021-08-11 2021-11-09 Obstacle recognition method and system based on 3D laser point clouds

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110920554.3A CN113640826B (en) 2021-08-11 2021-08-11 Obstacle identification method and system based on 3D laser point cloud

Publications (2)

Publication Number Publication Date
CN113640826A (en) 2021-11-12
CN113640826B (en) 2023-10-20

Family

ID=78420890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110920554.3A Active CN113640826B (en) 2021-08-11 2021-08-11 Obstacle identification method and system based on 3D laser point cloud

Country Status (2)

Country Link
CN (1) CN113640826B (en)
AU (1) AU2021266206B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114815821B (en) * 2022-04-19 2022-12-09 山东亚历山大智能科技有限公司 Indoor self-adaptive panoramic obstacle avoidance method and system based on multi-line laser radar
CN115618250B (en) * 2022-12-02 2023-05-02 华清瑞达(天津)科技有限公司 Radar target obstacle simulation identification method
CN116400371B (en) * 2023-06-06 2023-09-26 山东大学 Indoor reflective transparent object position identification method and system based on three-dimensional point cloud

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144072A (en) * 2018-09-30 2019-01-04 亿嘉和科技股份有限公司 A kind of intelligent robot barrier-avoiding method based on three-dimensional laser
CN109711410A (en) * 2018-11-20 2019-05-03 北方工业大学 Three-dimensional object rapid segmentation and identification method, device and system
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN110163871A (en) * 2019-05-07 2019-08-23 北京易控智驾科技有限公司 A kind of ground dividing method of multi-line laser radar
CN110441791A (en) * 2019-08-14 2019-11-12 深圳无境智能机器人有限公司 A kind of ground obstacle detection method based on the 2D laser radar that leans forward
CN110674705A (en) * 2019-09-05 2020-01-10 北京智行者科技有限公司 Small-sized obstacle detection method and device based on multi-line laser radar
CN110764108A (en) * 2019-11-05 2020-02-07 畅加风行(苏州)智能科技有限公司 Obstacle detection method and device for port automatic driving scene
CN112526993A (en) * 2020-11-30 2021-03-19 广州视源电子科技股份有限公司 Grid map updating method and device, robot and storage medium
CN113191459A (en) * 2021-05-27 2021-07-30 山东高速建设管理集团有限公司 Road-side laser radar-based in-transit target classification method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109188448B (en) * 2018-09-07 2020-03-06 百度在线网络技术(北京)有限公司 Point cloud non-ground point filtering method and device and storage medium
US10872269B2 (en) * 2018-10-26 2020-12-22 Volvo Car Corporation Methods and systems for the fast estimation of three-dimensional bounding boxes and drivable surfaces using LIDAR point clouds
US20210004566A1 (en) * 2019-07-02 2021-01-07 GM Global Technology Operations LLC Method and apparatus for 3d object bounding for 2d image data
CN113050993A (en) * 2019-12-27 2021-06-29 中兴通讯股份有限公司 Laser radar-based detection method and device and computer-readable storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144072A (en) * 2018-09-30 2019-01-04 亿嘉和科技股份有限公司 A kind of intelligent robot barrier-avoiding method based on three-dimensional laser
CN109711410A (en) * 2018-11-20 2019-05-03 北方工业大学 Three-dimensional object rapid segmentation and identification method, device and system
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN110163871A (en) * 2019-05-07 2019-08-23 北京易控智驾科技有限公司 A kind of ground dividing method of multi-line laser radar
CN110441791A (en) * 2019-08-14 2019-11-12 深圳无境智能机器人有限公司 A kind of ground obstacle detection method based on the 2D laser radar that leans forward
CN110674705A (en) * 2019-09-05 2020-01-10 北京智行者科技有限公司 Small-sized obstacle detection method and device based on multi-line laser radar
CN110764108A (en) * 2019-11-05 2020-02-07 畅加风行(苏州)智能科技有限公司 Obstacle detection method and device for port automatic driving scene
CN112526993A (en) * 2020-11-30 2021-03-19 广州视源电子科技股份有限公司 Grid map updating method and device, robot and storage medium
CN113191459A (en) * 2021-05-27 2021-07-30 山东高速建设管理集团有限公司 Road-side laser radar-based in-transit target classification method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A Fast Ground Segmentation Method for 3D Point Cloud; Chu, Phuong et al.; Journal of Information Processing Systems; Vol. 13, No. 3; full text *
Ground segmentation method for complex scenes based on three-dimensional lidar (基于三维激光雷达的复杂场景中地面分割方法); Mei Shengming et al.; Laser & Optoelectronics Progress; Vol. 59, No. 10; full text *
Real-time detection of obstacles and traversable areas based on three-dimensional lidar (基于三维激光雷达的障碍物及可通行区域实时检测); Jiang Jianfei et al.; Laser & Optoelectronics Progress; Vol. 56, No. 24; pp. 242801-1 to 242801-10 *
Three-dimensional lidar obstacle detection technology based on the Euclidean clustering algorithm (基于欧几里得聚类算法的三维激光雷达障碍物检测技术); Zong Changfu, Wen Long, He Lei; Journal of Jilin University (Engineering and Technology Edition); Vol. 50, No. 1; full text *
Research on a hybrid line-segment feature extraction algorithm for laser SLAM autonomous navigation (激光SLAM自主导航的线段特征提取混合算法研究); Sang Di et al.; Machinery Design & Manufacture, No. 5; full text *
Zhao Jie et al. Intelligent Robot Technology: Research and Practice of Security, Patrol and Disposal Police Robots (智能机器人技术：安保、巡逻、处置类警用机器人研究实践). China Machine Press, 2021, p. 10. *

Also Published As

Publication number Publication date
AU2021266206B1 (en) 2022-10-27
CN113640826A (en) 2021-11-12

Similar Documents

Publication Publication Date Title
CN113640826B (en) Obstacle identification method and system based on 3D laser point cloud
CN106022381B (en) Automatic extraction method of street lamp pole based on vehicle-mounted laser scanning point cloud
CN111665842B (en) Indoor SLAM mapping method and system based on semantic information fusion
CN113920134B (en) Slope ground point cloud segmentation method and system based on multi-line laser radar
CN111598916A (en) Preparation method of indoor occupancy grid map based on RGB-D information
CN113345008B (en) Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation
CN110349260B (en) Automatic pavement marking extraction method and device
AU2020202249A1 (en) Feature extraction from mobile lidar and imagery data
CN112070769A (en) Layered point cloud segmentation method based on DBSCAN
Cheng et al. Building boundary extraction from high resolution imagery and lidar data
CN115049700A (en) Target detection method and device
CN109215112B (en) Method for marking single-side point cloud model
CN115205690B (en) Method and device for extracting street tree in monomer mode based on MLS point cloud data
CN116524219A (en) Barrier detection method based on laser radar point cloud clustering
CN116109601A (en) Real-time target detection method based on three-dimensional laser radar point cloud
CN114325760A (en) Road tunnel quality inspection obstacle avoidance method and system based on multi-line laser radar
CN113345094A (en) Electric power corridor safety distance analysis method and system based on three-dimensional point cloud
CN116051822A (en) Concave obstacle recognition method and device, processor and electronic equipment
CN111736167B (en) Method and device for obtaining laser point cloud density
CN112232248B (en) Method and device for extracting plane features of multi-line LiDAR point cloud data
CN108345007B (en) Obstacle identification method and device
CN117292076A (en) Dynamic three-dimensional reconstruction method and system for local operation scene of engineering machinery
Sun et al. Automated segmentation of LiDAR point clouds for building rooftop extraction
CN117152172A (en) Point cloud data-based power transmission line tower and power line extraction method
CN116052099A (en) Small target detection method for unstructured road

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240424

Address after: 102, Building 9, Zhongrun Century City, No. 13777 Jingshi Road, Lixia District, Jinan City, Shandong Province, 250000

Patentee after: SHANDONG ALESMART INTELLIGENT TECHNOLOGY CO.,LTD.

Country or region after: China

Address before: No. 17923, Jingshi Road, Lixia District, Jinan City, Shandong Province, 250061

Patentee before: SHANDONG University

Country or region before: China

Patentee before: SHANDONG ALESMART INTELLIGENT TECHNOLOGY CO.,LTD.