CN114638871A - Robot ground segmentation method and system based on 3d point cloud - Google Patents

Robot ground segmentation method and system based on 3d point cloud

Info

Publication number
CN114638871A
CN114638871A (application CN202210281214.5A)
Authority
CN
China
Prior art keywords
image
point cloud
distance
ground
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210281214.5A
Other languages
Chinese (zh)
Inventor
王本强
谢超
高明
王建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Original Assignee
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong New Generation Information Industry Technology Research Institute Co Ltd filed Critical Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority to CN202210281214.5A priority Critical patent/CN114638871A/en
Publication of CN114638871A publication Critical patent/CN114638871A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a robot ground segmentation method and system based on 3d point cloud, belonging to the technical field of point cloud processing. The invention aims to solve the technical problem of preventing ground segmentation in complex environments from being affected by short-range obstacles and of improving ground segmentation accuracy. The technical scheme comprises the following specific steps: acquiring the point cloud generated by a laser radar scanning the surrounding environment to obtain point cloud data; acquiring a distance image from the point cloud data according to the number of vertical laser beams and the range readings of a 360-degree rotation; acquiring an angle image from two distance readings in adjacent rows of the distance image; performing outlier filtering on the angle image with a Savitzky-Golay filter to obtain a smoothed angle image; marking the ground connected region of the smoothed angle image to obtain a depth map; and mapping the depth map back to the point cloud.

Description

Robot ground segmentation method and system based on 3d point cloud
Technical Field
The invention relates to the technical field of point cloud processing, in particular to a robot ground segmentation method and system based on 3d point cloud.
Background
With the development of science and technology, robots are increasingly applied in industrial fields such as security monitoring, logistics transportation, and electric power operation and maintenance. In order to carry out the corresponding work tasks, a robot needs to accurately identify the safe, passable ground and plan a path to its destination that bypasses the various obstacles in the area. For the safe and stable operation of an intelligent mobile robot, ground segmentation is a key step of environment perception, and its performance directly affects subsequent autonomous obstacle avoidance and path planning.
The laser radar plays an important role in the environment perception technology of mobile robots by virtue of its high ranging precision, wide field of view, and low sensitivity to illumination. Lidar-based ground segmentation methods generally fall into the following two categories:
method based on grid cells:
the grid cell method uses only local point cloud information, ignores the continuity of the global ground, and is therefore susceptible to environmental noise and sensor calibration parameters.
A method based on line features:
the line feature method considers global continuity to some extent, and usually performs ground and obstacle division by scanning vertical and horizontal features of lines or connecting lines.
This method converts the ground segmentation problem into a simple in-region straight-line fitting problem and improves computational efficiency. However, in complex environments it is easily affected by short-range obstacles, so its segmentation accuracy is low.
In summary, how to prevent ground segmentation from being affected by short-range obstacles in complex environments, and thereby improve ground segmentation accuracy, is a technical problem that urgently needs to be solved.
Disclosure of Invention
The technical task of the invention is to provide a robot ground segmentation method and system based on 3d point cloud, so as to solve the problem of ground segmentation in complex environments being easily affected by short-range obstacles and to improve ground segmentation accuracy.
The technical task of the invention is achieved in the following way. The robot ground segmentation method based on 3d point cloud specifically comprises the following steps:
acquiring point cloud generated by scanning the surrounding environment by a laser radar to obtain point cloud data;
acquiring a distance image by using the point cloud data according to the number of vertical laser beams and the range reading (the number of horizontal scanning points) of 360-degree rotation;
acquiring an angle image according to two distance readings in adjacent rows of the distance image;
performing outlier filtering on the angle image with a Savitzky-Golay filter to obtain a smoothed angle image;
marking the ground connected region of the smoothed angle image to obtain a depth map;
mapping the depth map to the point cloud: the points in the point cloud and the points in the depth map are in one-to-one correspondence, i.e. once a ground point has been segmented in the depth map, the corresponding point is shown as a ground point in the point cloud.
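This mapping step can be made concrete with a short Python sketch (illustrative only, not the patented implementation): if the index of the 3D point that produced each pixel is recorded when the range image is built, the ground labels of the depth map directly select the corresponding ground points of the cloud. The array names pixel_to_point and ground_mask used below are assumptions introduced for illustration.

```python
import numpy as np

def ground_points_from_labels(points, pixel_to_point, ground_mask):
    """points: (N, 3) lidar points.
    pixel_to_point: (rows, cols) index of the point stored in each pixel (-1 if empty).
    ground_mask: (rows, cols) boolean image, True where a pixel was labeled as ground."""
    idx = pixel_to_point[ground_mask]   # point indices of all ground-labeled pixels
    idx = idx[idx >= 0]                 # drop pixels that received no measurement
    return points[idx]                  # the ground points of the original cloud
```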
Preferably, the number of lines (rows) in the range image is defined by the number of laser beams in the vertical direction, i.e. 16, 32 or 64 for a Velodyne lidar sensor;
the number of columns in the range image is obtained from the range reading of the laser per 360 ° rotation;
each pixel in the range image stores a measured distance from the sensor to the object.
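For illustration, a minimal Python/NumPy sketch of how such a range image could be built from a raw point cloud is given below. This is not the patented implementation; the 16 x 1800 resolution and the ±15° vertical field of view are assumptions chosen to match a 16-line Velodyne-type sensor.

```python
import numpy as np

def build_range_image(points, rows=16, cols=1800,
                      fov_up_deg=15.0, fov_down_deg=-15.0):
    """points: (N, 3) xyz coordinates in the sensor frame.
    Returns the range image and the index of the point stored in each pixel."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                       # measured distance per point
    yaw = np.arctan2(y, x)                                # horizontal angle in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))  # vertical angle

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    col = ((yaw + np.pi) / (2.0 * np.pi) * cols).astype(int) % cols   # column from azimuth
    row = (fov_up - pitch) / (fov_up - fov_down) * (rows - 1)         # row from beam angle
    row = np.clip(np.round(row).astype(int), 0, rows - 1)

    image = np.zeros((rows, cols), dtype=np.float32)      # each pixel stores a distance
    pixel_to_point = np.full((rows, cols), -1, dtype=np.int64)
    image[row, col] = r                                   # last point wins if pixels collide
    pixel_to_point[row, col] = np.arange(points.shape[0])
    return image, pixel_to_point
```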
Preferably, the angle image is acquired from two distance readings in adjacent rows of the distance image as follows:
converting each column c of the distance image D into values of the angle image;
each value of the angle image represents the inclination α of the line connecting two points A and B, which is derived from the two distance readings D_{r-1,c} and D_{r,c} in adjacent rows r-1 and r of the distance image D;
given the two distance readings of vertically consecutive laser beams, the angle α is calculated using trigonometry, as follows:
α = atan2(‖BC‖, ‖AC‖) = atan2(Δz, Δx);
Δz = |D_{r-1,c} sin ξ_a - D_{r,c} sin ξ_b|;
Δx = |D_{r-1,c} cos ξ_a - D_{r,c} cos ξ_b|;
where ξ_a and ξ_b denote the vertical angles of the laser beams corresponding to rows r-1 and r.
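A vectorized NumPy sketch of this angle computation is shown below; it assumes the vertical beam angle ξ is known for every row of the distance image and is illustrative rather than the patented implementation.

```python
import numpy as np

def angle_image(D, vertical_angles):
    """D: (rows, cols) distance image. vertical_angles: (rows,) beam angles xi in radians.
    Returns a (rows-1, cols) image of the inclination alpha between adjacent rows."""
    xi = np.asarray(vertical_angles).reshape(-1, 1)   # one vertical angle per row
    z = D * np.sin(xi)                                # vertical component of each reading
    x = D * np.cos(xi)                                # horizontal component of each reading
    dz = np.abs(z[:-1, :] - z[1:, :])                 # |D_{r-1,c} sin xi_a - D_{r,c} sin xi_b|
    dx = np.abs(x[:-1, :] - x[1:, :])                 # |D_{r-1,c} cos xi_a - D_{r,c} cos xi_b|
    return np.arctan2(dz, dx)                         # alpha = atan2(dz, dx)
```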
Preferably, a Savitzky-Golay filter is applied to each column of the angle image; the Savitzky-Golay filter performs a least-squares optimization that fits a local polynomial of a given window size to the data.
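Assuming SciPy is available, the column-wise smoothing could be sketched as follows; the window length and polynomial order are illustrative values, not parameters specified by the patent.

```python
from scipy.signal import savgol_filter

def smooth_angle_image(alpha, window=5, polyorder=2):
    """Apply a Savitzky-Golay filter down each column of the angle image
    to suppress outlier angles caused by noisy distance readings."""
    # axis=0 filters along the rows within each column, i.e. column-wise as described above
    return savgol_filter(alpha, window_length=window, polyorder=polyorder, axis=0)
```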
Preferably, the ground connected region is obtained by breadth-first search (BFS), and ground points and non-ground points are distinguished according to a set threshold.
More preferably, the ground connected region is labeled as follows:
starting from a given point of the smoothed angle image obtained after filtering, searching its directly adjacent nodes;
moving on to the next-level adjacent nodes;
setting a threshold Δa and searching each row to obtain the ground connected region.
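A simplified Python sketch of this breadth-first labeling follows. Seeding the search from every pixel of the lowest image row and comparing four-neighbors against a 5° threshold are assumptions made for illustration; the patented criterion may differ in detail.

```python
from collections import deque
import numpy as np

def label_ground(alpha, delta_a_deg=5.0):
    """Grow the ground region over the smoothed angle image by breadth-first search,
    joining grid neighbors whose angle difference stays below the threshold delta_a."""
    rows, cols = alpha.shape
    delta_a = np.radians(delta_a_deg)
    ground = np.zeros((rows, cols), dtype=bool)
    queue = deque()
    bottom = rows - 1                          # lowest row, assumed to observe the ground
    for c in range(cols):
        ground[bottom, c] = True
        queue.append((bottom, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # four-neighborhood on the grid
            nr, nc = r + dr, (c + dc) % cols                 # columns wrap (360 degree scan)
            if 0 <= nr < rows and not ground[nr, nc]:
                if abs(alpha[nr, nc] - alpha[r, c]) < delta_a:
                    ground[nr, nc] = True
                    queue.append((nr, nc))
    return ground
```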
A robot ground segmentation system based on 3d point cloud comprises,
the acquisition module I is used for acquiring point cloud generated by scanning the surrounding environment by the laser radar to obtain point cloud data;
the acquisition module II is used for acquiring a distance image according to the number of the vertical laser beams and the range reading (the number of horizontal scanning points) of 360-degree rotation by utilizing the point cloud data;
the acquisition module III is used for acquiring an angle image according to two distance readings in adjacent rows of the distance image;
the acquisition module IV is used for performing outlier filtering on the angle image with a Savitzky-Golay filter to obtain a smoothed angle image;
the marking module is used for marking the ground connected region of the smoothed angle image to obtain a depth map;
a mapping module to map the depth map to a point cloud.
Preferably, the working process of the acquiring module three is as follows:
(1) converting each column c of the distance image D into values of the angle image;
(2) each value of the angle image represents the inclination α of the line connecting two points A and B, which is derived from the two distance readings D_{r-1,c} and D_{r,c} in adjacent rows r-1 and r of the distance image D;
(3) given the two distance readings of vertically consecutive laser beams, the angle α is calculated using trigonometry, according to the following formulas:
α = atan2(‖BC‖, ‖AC‖) = atan2(Δz, Δx);
Δz = |D_{r-1,c} sin ξ_a - D_{r,c} sin ξ_b|;
Δx = |D_{r-1,c} cos ξ_a - D_{r,c} cos ξ_b|;
where ξ_a and ξ_b denote the vertical angles of the laser beams corresponding to rows r-1 and r.
Preferably, the number of lines (rows) in the range image is defined by the number of laser beams in the vertical direction, i.e. 16, 32 or 64 for a Velodyne lidar sensor;
the number of columns in the range image is obtained from the range reading of the laser per 360 ° rotation;
each pixel in the range image stores a measured distance from the sensor to the object.
Preferably, the working process of the marking module is as follows:
(1) starting from a given point of the smoothed angle image obtained after filtering, searching its directly adjacent nodes;
(2) moving on to the next-level adjacent nodes;
(3) setting a threshold Δa and searching each row to obtain the ground connected region.
The robot ground segmentation method and system based on the 3d point cloud have the following advantages:
Firstly, the method simply and efficiently improves the accuracy and real-time performance of ground segmentation of laser radar point clouds;
secondly, the method does not operate in 3D space but performs all calculations directly on the range image, which accelerates the segmentation of a single range image and allows the neighborhood relations to be used directly;
thirdly, converting the 3d point cloud into a 2d range image accelerates ground segmentation;
and fourthly, the invention uses the neighborhood information to ensure accurate segmentation.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a robot ground segmentation method based on 3d point cloud;
FIG. 2 is an angle calculation diagram.
Detailed Description
The method and system for robot ground segmentation based on 3d point cloud according to the present invention will be described in detail with reference to the drawings and specific embodiments.
Example 1:
as shown in fig. 1, the method for segmenting the robot ground based on the 3d point cloud of the present invention specifically comprises the following steps:
s1, acquiring point cloud generated by scanning the surrounding environment by the laser radar to obtain point cloud data;
s2, acquiring a distance image according to the number of the vertical laser beams and the range reading (the number of horizontal scanning points) of 360-degree rotation by using the point cloud data;
s3, obtaining an angle image according to two distance readings in adjacent rows of the distance image;
s4, performing outlier filtering on the angle image with a Savitzky-Golay filter to obtain a smoothed angle image;
s5, marking the ground connected region of the smoothed angle image to obtain a depth map;
s6, mapping the depth map to the point cloud: the points in the point cloud and the points in the depth map are in one-to-one correspondence, i.e. once a ground point has been segmented in the depth map, the corresponding point is shown as a ground point in the point cloud.
The number of lines (rows) in the range image in step S2 of this embodiment is defined by the number of laser beams in the vertical direction, i.e. 16, 32 or 64 for a Velodyne lidar sensor;
the number of columns in the range image is obtained from the range reading of the laser per 360 ° rotation;
each pixel in the range image stores a measured distance from the sensor to the object.
In step S3 of this embodiment, the angle image is obtained from two distance readings in adjacent rows of the distance image, specifically as follows:
s301, converting each column c of the distance image D into values of the angle image;
s302, each value of the angle image represents the inclination α of the line connecting two points A and B, which is derived from the two distance readings D_{r-1,c} and D_{r,c} in adjacent rows r-1 and r of the distance image D;
s303, given the two distance readings of vertically consecutive laser beams, the angle α is calculated using trigonometry, according to the following formulas:
α = atan2(‖BC‖, ‖AC‖) = atan2(Δz, Δx);
Δz = |D_{r-1,c} sin ξ_a - D_{r,c} sin ξ_b|;
Δx = |D_{r-1,c} cos ξ_a - D_{r,c} cos ξ_b|;
where ξ_a and ξ_b denote the vertical angles of the laser beams corresponding to rows r-1 and r.
The Savitzky-Golay filter in step S4 of this embodiment is applied to each column of the angle image; the filter performs a least-squares optimization that fits a local polynomial of a given window size to the data.
In this embodiment, the ground connected region in step S5 is obtained by breadth-first search (BFS), and ground points and non-ground points are distinguished according to a set threshold.
The ground connected region labeling in step S5 of this embodiment is specifically as follows:
s501, starting from a given point of the smoothed angle image obtained after filtering, searching its directly adjacent nodes;
s502, moving on to the next-level adjacent nodes;
s503, setting a threshold Δa and searching each row to obtain the ground connected region.
Example 2:
the robot ground segmentation system based on the 3d point cloud of the embodiment comprises,
the acquisition module I is used for acquiring point cloud generated by scanning the surrounding environment by the laser radar to obtain point cloud data;
the acquisition module II is used for acquiring a distance image according to the number of the vertical laser beams and the range reading (the number of horizontal scanning points) of 360-degree rotation by utilizing the point cloud data;
the acquisition module III is used for acquiring an angle image according to two distance readings in adjacent rows of the distance image;
the acquisition module IV is used for performing outlier filtering on the angle image with a Savitzky-Golay filter to obtain a smoothed angle image;
the marking module is used for marking the ground connected region of the smoothed angle image to obtain a depth map;
a mapping module to map the depth map to a point cloud.
As shown in fig. 2, the working process of the third obtaining module in this embodiment is as follows:
(1) converting each column c of the distance image D into values of the angle image;
(2) each value of the angle image represents the inclination α of the line connecting two points A and B, which is derived from the two distance readings D_{r-1,c} and D_{r,c} in adjacent rows r-1 and r of the distance image D;
(3) given the two distance readings of vertically consecutive laser beams, the angle α is calculated using trigonometry, according to the following formulas:
α = atan2(‖BC‖, ‖AC‖) = atan2(Δz, Δx);
Δz = |D_{r-1,c} sin ξ_a - D_{r,c} sin ξ_b|;
Δx = |D_{r-1,c} cos ξ_a - D_{r,c} cos ξ_b|;
where ξ_a and ξ_b denote the vertical angles of the laser beams corresponding to rows r-1 and r.
The number of lines (rows) in the range image in this embodiment is defined by the number of laser beams in the vertical direction, i.e. 16, 32 or 64 for a Velodyne lidar sensor;
the number of columns in the range image is obtained from the range reading of the laser per 360 ° rotation;
each pixel in the range image stores a measured distance from the sensor to the object.
Preferably, the working process of the marking module is as follows:
(1) starting from a given point of the smoothed angle image obtained after filtering, searching its directly adjacent nodes;
(2) moving on to the next-level adjacent nodes;
(3) setting a threshold Δa and searching each row to obtain the ground connected region.
Example 3:
In order to identify the ground plane, several requirements on the installation and the environment must be observed, specifically as follows:
first, the sensor is mounted substantially horizontally on the mobile base/robot;
secondly, the curvature of the ground is very low;
third, at least some pixels in the lowest row of the image observe the ground near the robot.
Most laser range scanners provide raw data in the form of a single range reading per laser beam, together with a time stamp and the beam orientation, so the data can be converted directly into a range image. The number of lines (rows) in the image is defined by the number of beams in the vertical direction, i.e. 16, 32 or 64 for a Velodyne scanner. This embodiment uses a 16-line Velodyne. The number of columns is given by the range readings per 360° rotation of the laser. Each pixel of such an image stores the measured distance from the sensor to the object. To speed up the calculation, several readings in the horizontal direction may even be combined into one pixel if necessary.
Each column c of the distance image D described above is first converted into values of an angle image, where each of these angles represents the inclination α of the line connecting two points A and B, derived from the two distance readings D_{r-1,c} and D_{r,c} in adjacent rows r-1 and r of the distance image, as shown in FIG. 2. Knowing the two distance readings of vertically consecutive laser beams, the angle α can be calculated using trigonometry as follows:
α = atan2(‖BC‖, ‖AC‖) = atan2(Δz, Δx);
Δz = |D_{r-1,c} sin ξ_a - D_{r,c} sin ξ_b|;
Δx = |D_{r-1,c} cos ξ_a - D_{r,c} cos ξ_b|;
where ξ_a and ξ_b are the vertical angles of the laser beams corresponding to rows r-1 and r.
Lidar sensors such as the Velodyne produce a large number of outliers in their distance measurements, and this affects the calculation of the angle α; a method is therefore needed to eliminate these outliers. A Savitzky-Golay filter is applied to each column of the angle image. This filter performs a least-squares optimization that fits a local polynomial of a given window size to the data.
After the filtered result is obtained, ground labeling is performed on it: starting from entries expected to belong to the ground, similar connected regions are labeled together using a breadth-first search. Breadth-first search (BFS) is a popular graph search or traversal algorithm: it starts from a given node of the graph, first exploring the immediate neighbors and then moving on to the next-level neighbors. In our method, the difference of the angle α over the four-neighborhood on the grid is computed to decide whether two adjacent elements of the angle matrix Mα should be labeled together by the breadth-first search. For this purpose a threshold Δa is selected, set to 5° in our experiments. By searching each row, the connected region of the ground is finally obtained.
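Putting the illustrative sketches from the previous sections together, a hypothetical end-to-end run over a single lidar sweep could look like the following; the file name, the beam angles and all parameter values are assumptions, not values prescribed by the patent.

```python
import numpy as np

# Hypothetical single-sweep pipeline built from the sketches above
points = np.load("sweep.npy")                              # (N, 3) xyz points, name illustrative
vertical_angles = np.radians(np.linspace(15.0, -15.0, 16)) # assumed 16-line beam layout

D, pixel_to_point = build_range_image(points, rows=16, cols=1800)
alpha = angle_image(D, vertical_angles)
alpha_smooth = smooth_angle_image(alpha, window=5, polyorder=2)
ground_mask = label_ground(alpha_smooth, delta_a_deg=5.0)

# Each angle row connects range-image rows r-1 and r; here the label is assigned
# to the lower of the two beams before mapping back to the cloud.
ground_pts = ground_points_from_labels(points, pixel_to_point[1:, :], ground_mask)
```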
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A robot ground segmentation method based on 3d point cloud is characterized by comprising the following steps:
acquiring point cloud generated by scanning the surrounding environment by the laser radar to obtain point cloud data;
acquiring a distance image by using the point cloud data according to the number of the vertical laser beams and the range reading of 360-degree rotation;
acquiring an angle image according to two distance readings in adjacent rows of the distance image;
performing outlier filtering on the angle image with a Savitzky-Golay filter to obtain a smoothed angle image;
marking the ground connected region of the smoothed angle image to obtain a depth map;
mapping the depth map to the point cloud: the points in the point cloud and the points in the depth map are in one-to-one correspondence, i.e. once a ground point has been segmented in the depth map, the corresponding point is shown as a ground point in the point cloud.
2. The method of claim 1, wherein the number of rows in the range image is defined by the number of beams in the vertical direction, i.e. the number of rows of the Velodyne lidar sensor is 16, 32 or 64;
the number of columns in the range image is obtained from the range reading of the laser per 360 ° rotation;
each pixel in the range image stores a measured distance from the sensor to the object.
3. The method of claim 1, wherein the angle image is obtained from two distance readings in adjacent rows of the distance image as follows:
converting each column c of the distance image D into a value of an angle image;
each value of the angle image represents the inclination α of the line connecting two points A and B, derived from the two distance readings D_{r-1,c} and D_{r,c} in adjacent rows r-1 and r of the distance image D;
after the two distance readings of vertically consecutive laser beams are obtained, the angle α is calculated using trigonometry, as follows:
α = atan2(‖BC‖, ‖AC‖) = atan2(Δz, Δx);
Δz = |D_{r-1,c} sin ξ_a - D_{r,c} sin ξ_b|;
Δx = |D_{r-1,c} cos ξ_a - D_{r,c} cos ξ_b|;
where ξ_a and ξ_b denote the vertical angles of the laser beams corresponding to rows r-1 and r.
4. The robot ground segmentation method based on 3d point cloud of claim 1, wherein a Savitzky-Golay filter is applied to each column of the angle image, the Savitzky-Golay filter performing a least-squares optimization that fits a local polynomial of a given window size to the data.
5. The robot ground segmentation method based on 3d point cloud according to any one of claims 1-4, wherein the ground connected region is obtained by breadth-first search, and ground points and non-ground points are distinguished according to a set threshold.
6. The robot ground segmentation method based on 3d point cloud of claim 5, wherein the ground connected region is labeled as follows:
starting from a given point of the smoothed angle image obtained after filtering, searching its directly adjacent nodes;
moving on to the next-level adjacent nodes;
setting a threshold Δa and searching each row to obtain the ground connected region.
7. A robot ground segmentation system based on 3d point cloud is characterized in that the system comprises,
the acquisition module I is used for acquiring point cloud generated by scanning the surrounding environment by the laser radar to obtain point cloud data;
the acquisition module II is used for acquiring a distance image according to the number of the vertical laser beams and the range reading of 360-degree rotation by utilizing the point cloud data;
the acquisition module III is used for acquiring an angle image according to two distance readings in adjacent rows of the distance image;
the acquisition module IV is used for performing outlier filtering on the angle image with a Savitzky-Golay filter to obtain a smoothed angle image;
the marking module is used for marking the ground connected region of the smoothed angle image to obtain a depth map;
a mapping module to map the depth map to a point cloud.
8. The robot ground segmentation system based on 3d point cloud of claim 7, wherein the third obtaining module specifically comprises the following working processes:
(1) converting each column c of the distance image D into values of the angle image;
(2) each value of the angle image represents the inclination α of the line connecting two points A and B, derived from the two distance readings D_{r-1,c} and D_{r,c} in adjacent rows r-1 and r of the distance image D;
(3) after the two distance readings of vertically consecutive laser beams are obtained, the angle α is calculated using trigonometry, according to the following formulas:
α = atan2(‖BC‖, ‖AC‖) = atan2(Δz, Δx);
Δz = |D_{r-1,c} sin ξ_a - D_{r,c} sin ξ_b|;
Δx = |D_{r-1,c} cos ξ_a - D_{r,c} cos ξ_b|;
where ξ_a and ξ_b denote the vertical angles of the laser beams corresponding to rows r-1 and r.
9. The 3d point cloud based robotic ground segmentation system of claim 7 wherein the number of rows in the range image is defined by the light beam in the vertical direction, i.e., the number of rows of the Velodyne lidar sensor is 16, 32, or 64;
the number of columns in the range image is obtained from the range reading of the laser per 360 ° rotation;
each pixel in the range image stores a measured distance from the sensor to the object.
10. The system for robotic ground segmentation based on 3d point cloud according to any one of claims 7-9, wherein the labeling module specifically works as follows:
(1) starting from a given point of the smoothed angle image obtained after filtering, searching its directly adjacent nodes;
(2) moving on to the next-level adjacent nodes;
(3) setting a threshold Δa and searching each row to obtain the ground connected region.
CN202210281214.5A 2022-03-22 2022-03-22 Robot ground segmentation method and system based on 3d point cloud Pending CN114638871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210281214.5A CN114638871A (en) 2022-03-22 2022-03-22 Robot ground segmentation method and system based on 3d point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210281214.5A CN114638871A (en) 2022-03-22 2022-03-22 Robot ground segmentation method and system based on 3d point cloud

Publications (1)

Publication Number Publication Date
CN114638871A true CN114638871A (en) 2022-06-17

Family

ID=81950338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210281214.5A Pending CN114638871A (en) 2022-03-22 2022-03-22 Robot ground segmentation method and system based on 3d point cloud

Country Status (1)

Country Link
CN (1) CN114638871A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375699A (en) * 2022-10-25 2022-11-22 杭州华橙软件技术有限公司 Point cloud segmentation method, mobile robot and computer-readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination