CN114234976A - Improved robot positioning method and system with slam and multiple sensors fused

Info

Publication number
CN114234976A
Authority
CN
China
Prior art keywords: ground plane, frame, filtering, point clouds, point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111434276.7A
Other languages
Chinese (zh)
Inventor
陈德
冷海波
吴亮
崔敏
饶杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Hengchuang Intelligent Control Technology Co ltd
Original Assignee
Shandong Hengchuang Intelligent Control Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Hengchuang Intelligent Control Technology Co ltd filed Critical Shandong Hengchuang Intelligent Control Technology Co ltd
Priority to CN202111434276.7A priority Critical patent/CN114234976A/en
Publication of CN114234976A publication Critical patent/CN114234976A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Abstract

The invention relates to an improved slam and multi-sensor fused robot positioning method and system. The method comprises: collecting spatial data with a laser radar, an inertial sensor and a positioning device to obtain multiple frames of robot positioning data packets; segmenting a ground point cloud from each frame of robot positioning data packet, and sequentially filtering and plane-fitting each frame of ground point cloud to obtain multiple frames of ground plane point clouds; applying point cloud filtering to each frame of ground plane point cloud to obtain multiple frames of ground plane sparse point clouds; performing front-end odometry with the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud; and extracting keyframe ground plane sparse point clouds from the multiple frames according to the poses and constructing a probability map from the keyframes. The invention can fuse the gps, imu and lidar sensors simultaneously and can effectively constrain the elevation error.

Description

Improved robot positioning method and system with slam and multiple sensors fused
Technical Field
The invention relates to the field of robot positioning, in particular to an improved slam and multi-sensor fusion robot positioning method and system.
Background
Substations are key nodes of power grid transmission, and inspecting substation equipment is essential work for guaranteeing the safety of power use. At present, inspection and record-keeping are generally done manually. With the economic development of China, power consumption keeps rising and the number and scale of substations grow accordingly, so the inspection range and workload of the staff expand sharply. As robot technology develops and matures, robots can be used to inspect the relevant equipment, and inspection robots have already been deployed in many substations.
The wide application of intelligent substation inspection robots overcomes the shortcomings of manual inspection and reduces the workload of the staff. However, existing inspection robots do not fuse the gps, imu and lidar sensors at the same time (gps denotes the positioning sensor, imu the inertial sensor, and lidar the laser radar). Without gps, the elevation error becomes the dominant error term in robot positioning, and the robot cannot be used at all when the ground is not planar.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an improved slam and multi-sensor fused robot positioning method and system that can fuse the three sensors gps, imu and lidar simultaneously, effectively constrain the elevation error, and improve positioning accuracy.
The technical solution adopted by the invention to solve the above problem is as follows: an improved slam and multi-sensor fused robot positioning method, comprising the following steps,
S1, collecting spatial data with a laser radar, an inertial sensor and a positioning device integrated on the robot, to obtain multiple frames of robot positioning data packets;
S2, segmenting a ground point cloud from each frame of robot positioning data packet based on a point-cloud plane detection method, and sequentially filtering and plane-fitting each frame of ground point cloud to obtain multiple frames of ground plane point clouds;
S3, applying point cloud filtering to each frame of ground plane point cloud to obtain multiple frames of ground plane sparse point clouds;
S4, performing front-end odometry with the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud;
and S5, extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds according to the pose of each frame of ground plane sparse point cloud, constructing a probability map from the keyframe ground plane sparse point clouds, and positioning the robot according to the probability map.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, in S1, specifically,
continuously scanning the space by using a laser radar integrated on the robot to obtain a plurality of frames of space point clouds;
correspondingly acquiring pose data and positioning data of the robot in real time by using an inertial sensor and a positioning device integrated on the robot;
and fusing the multi-frame space point cloud of the robot with the real-time pose data and the positioning data to obtain a multi-frame robot positioning data packet.
Further, in S2, specifically,
respectively segmenting ground point clouds from the corresponding space point clouds according to the height parameters of each frame of space point cloud;
filtering out of each frame of ground point cloud, along the normal direction, the points that lie more than a preset distance from a preset reference plane, to obtain multiple frames of reference plane point clouds;
and performing plane fitting on each frame of reference plane point cloud to obtain multiple frames of ground plane point clouds.
Further, in S3, specifically,
respectively carrying out down-sampling filtering on each frame of ground plane point cloud to obtain a plurality of frames of ground plane down-sampling filtering point cloud;
and respectively carrying out filtering outlier processing on each frame of the ground plane downsampling filtering point cloud to obtain a plurality of frames of ground plane sparse point clouds.
Further, a VoxelGrid filter, an ApproximateVoxelGrid filter, or no filter (NONE) is used to down-sample each frame of ground plane point cloud respectively.
Further, the specific steps of respectively filtering the ground plane down-sampling filtering point clouds of each frame to remove outliers are as follows,
respectively counting the adjacent points of each sampling point in each frame of the ground plane down-sampling filtering point cloud to obtain an adjacent point group of each sampling point in each frame of the ground plane down-sampling filtering point cloud;
respectively carrying out Gaussian distribution calculation on the distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to each adjacent point in an adjacent point group, and correspondingly obtaining the average distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to the adjacent point group;
and respectively filtering the adjacent points of which the distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to the adjacent point in the adjacent point group exceeds the average distance to obtain a plurality of frames of ground plane sparse point clouds.
Further, the specific steps of respectively filtering the ground plane down-sampling filtering point clouds of each frame to remove outliers are as follows,
constructing a filtering sliding window with a preset size, and configuring a filtering threshold value of a preset number for the filtering sliding window; and respectively sliding the filtering sliding window in each frame of ground plane down-sampling filtering point cloud, and if the number of sampling points falling into the filtering sliding window is lower than the filtering threshold value, filtering the sampling points falling into the filtering sliding window from the corresponding ground plane down-sampling filtering point cloud to obtain the multi-frame ground plane sparse point cloud.
Further, in S4, specifically,
and performing inter-frame matching between every two adjacent frames of the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud.
Further, in S5, specifically,
extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds at preset distance intervals according to the pose of each frame of ground plane sparse point cloud;
constructing a probability map frame from all the keyframe ground plane sparse point clouds;
adding a first set of edges to the probability map frame using the pose data and positioning data of the robot corresponding to the keyframe ground plane sparse point clouds, to obtain an initial probability map;
adding a second set of edges to the initial probability map using the ground detection parameters corresponding to the keyframe ground plane sparse point clouds, to obtain an intermediate probability map;
performing loop-closure detection on the intermediate probability map with a preset edge, and adding the preset edge to the intermediate probability map if the detection succeeds, to obtain the final probability map;
and positioning the robot according to the final probability map.
Based on the improved slam and multi-sensor fusion robot positioning method, the invention also provides an improved slam and multi-sensor fusion robot positioning system.
An improved slam and multi-sensor fused robot positioning system comprises the following modules,
the positioning data packet acquisition module, used for collecting spatial data with a laser radar, an inertial sensor and a positioning device integrated on the robot to obtain multiple frames of robot positioning data packets;
the ground plane detection module, used for segmenting a ground point cloud from each frame of robot positioning data packet based on a point-cloud plane detection method, and sequentially filtering and plane-fitting each frame of ground point cloud to obtain multiple frames of ground plane point clouds;
the point cloud filtering module, used for applying point cloud filtering to each frame of ground plane point cloud to obtain multiple frames of ground plane sparse point clouds;
the front-end odometry module, used for performing front-end odometry with the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud;
and the back-end probability map construction module, used for extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds according to the pose of each frame of ground plane sparse point cloud, constructing a probability map from the keyframe ground plane sparse point clouds, and positioning the robot according to the probability map.
The invention has the following beneficial effects: the improved slam and multi-sensor fused robot positioning method and system adopt an improved slam positioning method, can fuse the three sensors gps, imu and lidar simultaneously, and, through ground detection and segmentation, point cloud filtering, front-end odometry and back-end probability map construction, can effectively constrain the elevation error and improve positioning accuracy.
Drawings
FIG. 1 is a flow chart of an improved slam and multi-sensor fusion robot positioning method of the present invention;
fig. 2 is a structural block diagram of an improved slam and multi-sensor fusion robot positioning system according to the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, an improved slam and multi-sensor fused robot positioning method includes the following steps,
S1, collecting spatial data with a laser radar, an inertial sensor and a positioning device integrated on the robot, to obtain multiple frames of robot positioning data packets;
S2, segmenting a ground point cloud from each frame of robot positioning data packet based on a point-cloud plane detection method, and sequentially filtering and plane-fitting each frame of ground point cloud to obtain multiple frames of ground plane point clouds;
S3, applying point cloud filtering to each frame of ground plane point cloud to obtain multiple frames of ground plane sparse point clouds;
S4, performing front-end odometry with the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud;
and S5, extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds according to the pose of each frame of ground plane sparse point cloud, constructing a probability map from the keyframe ground plane sparse point clouds, and positioning the robot according to the probability map.
In this particular embodiment: specifically, the step S1 is,
continuously scanning the space by using a laser radar integrated on the robot to obtain a plurality of frames of space point clouds;
correspondingly acquiring pose data and positioning data of the robot in real time by using an inertial sensor and a positioning device integrated on the robot;
and fusing the multi-frame space point cloud of the robot with the real-time pose data and the positioning data to obtain a multi-frame robot positioning data packet.
In this particular embodiment: specifically, the step S2 is,
respectively segmenting ground point clouds from the corresponding space point clouds according to the height parameters of each frame of space point cloud;
filtering out of each frame of ground point cloud, along the normal direction, the points that lie more than a preset distance from a preset reference plane, to obtain multiple frames of reference plane point clouds;
and performing plane fitting on each frame of reference plane point cloud to obtain multiple frames of ground plane point clouds.
In S2, specifically:
specifically, a function plane _ clip is used for segmenting ground point clouds from the corresponding space point clouds; since the PlaneClipper3D can remove all points on one side of the designated point cloud area, two times of filtering can be performed when the ground is divided, and only the ground point cloud is left after the points above and below the ground height are removed twice respectively.
The normal vector at each point of the ground point cloud points upward; even if there is some deviation, it should not be too large, otherwise the point is not a ground point. Based on this, a portion of abnormal points can be removed from the ground point cloud. Here, the normal vectors are computed with the function pcl::NormalEstimation.
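A minimal sketch of this normal-based check (again, an illustration rather than the original implementation; the neighborhood size and angle threshold are assumed values) could look as follows:

    #include <cmath>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/features/normal_3d.h>
    #include <pcl/search/kdtree.h>

    using PointT = pcl::PointXYZ;

    // Keep only the points whose estimated normal is close to vertical.
    pcl::PointCloud<PointT>::Ptr normal_filter(const pcl::PointCloud<PointT>::Ptr& cloud,
                                               float max_angle_deg = 20.0f) {
      pcl::NormalEstimation<PointT, pcl::Normal> ne;
      pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
      pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
      ne.setInputCloud(cloud);
      ne.setSearchMethod(tree);
      ne.setKSearch(10);                    // neighbors used per normal estimate
      ne.compute(*normals);

      const float cos_thresh = std::cos(max_angle_deg * 3.14159265f / 180.0f);
      pcl::PointCloud<PointT>::Ptr filtered(new pcl::PointCloud<PointT>);
      for (std::size_t i = 0; i < cloud->size(); ++i) {
        // |n . (0,0,1)| is simply the absolute z component of the unit normal
        if (std::abs(normals->points[i].normal_z) > cos_thresh) {
          filtered->push_back(cloud->points[i]);
        }
      }
      return filtered;
    }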
Specifically, a detect function is adopted for plane fitting.
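The detect function itself is not reproduced in the publication; one common way to realize such a plane fit in PCL is RANSAC plane segmentation, sketched below under that assumption (the inlier distance threshold is illustrative):

    #include <Eigen/Core>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/ModelCoefficients.h>
    #include <pcl/segmentation/sac_segmentation.h>
    #include <pcl/filters/extract_indices.h>

    using PointT = pcl::PointXYZ;

    // Fit a plane to the (already clipped and normal-filtered) ground points and
    // return the inliers as the ground plane point cloud of this frame.
    pcl::PointCloud<PointT>::Ptr fit_ground_plane(const pcl::PointCloud<PointT>::Ptr& cloud,
                                                  Eigen::Vector4f& plane_coeffs) {
      pcl::SACSegmentation<PointT> seg;
      pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
      pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
      seg.setOptimizeCoefficients(true);
      seg.setModelType(pcl::SACMODEL_PLANE);
      seg.setMethodType(pcl::SAC_RANSAC);
      seg.setDistanceThreshold(0.05);       // max point-to-plane distance for inliers [m]
      seg.setInputCloud(cloud);
      seg.segment(*inliers, *coeffs);

      pcl::PointCloud<PointT>::Ptr plane(new pcl::PointCloud<PointT>);
      if (inliers->indices.empty()) {
        return plane;                       // no plane detected in this frame
      }
      plane_coeffs = Eigen::Vector4f(coeffs->values[0], coeffs->values[1],
                                     coeffs->values[2], coeffs->values[3]);

      pcl::ExtractIndices<PointT> extract;
      extract.setInputCloud(cloud);
      extract.setIndices(inliers);
      extract.filter(*plane);
      return plane;
    }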
In this particular embodiment: specifically, the step S3 is,
respectively carrying out down-sampling filtering on each frame of ground plane point cloud to obtain a plurality of frames of ground plane down-sampling filtering point cloud;
and respectively carrying out filtering outlier processing on each frame of the ground plane downsampling filtering point cloud to obtain a plurality of frames of ground plane sparse point clouds.
Preferably, each frame of ground plane point cloud is down-sampled with a VoxelGrid filter, an ApproximateVoxelGrid filter, or no filter (NONE), respectively.
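For reference only (the leaf size is an assumed value, and the choice of filter would normally be a configuration option), the two named PCL filters might be used as follows:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/filters/voxel_grid.h>
    #include <pcl/filters/approximate_voxel_grid.h>

    using PointT = pcl::PointXYZ;

    pcl::PointCloud<PointT>::Ptr downsample(const pcl::PointCloud<PointT>::Ptr& cloud,
                                            float leaf = 0.1f, bool approximate = false) {
      pcl::PointCloud<PointT>::Ptr out(new pcl::PointCloud<PointT>);
      if (approximate) {
        // faster, slightly less accurate centroid per voxel
        pcl::ApproximateVoxelGrid<PointT> filter;
        filter.setLeafSize(leaf, leaf, leaf);
        filter.setInputCloud(cloud);
        filter.filter(*out);
      } else {
        pcl::VoxelGrid<PointT> filter;
        filter.setLeafSize(leaf, leaf, leaf);
        filter.setInputCloud(cloud);
        filter.filter(*out);
      }
      return out;
    }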
Preferably, the specific steps of respectively filtering the ground plane down-sampling filtering point clouds of each frame to remove outliers are,
respectively counting the adjacent points of each sampling point in each frame of the ground plane down-sampling filtering point cloud to obtain an adjacent point group of each sampling point in each frame of the ground plane down-sampling filtering point cloud;
respectively carrying out Gaussian distribution calculation on the distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to each adjacent point in an adjacent point group, and correspondingly obtaining the average distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to the adjacent point group;
and respectively filtering the adjacent points of which the distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to the adjacent point in the adjacent point group exceeds the average distance to obtain a plurality of frames of ground plane sparse point clouds.
In the preferred embodiment, the principle of filtering outliers is to perform a statistical analysis of the neighborhood of each point and prune away the points that do not meet a criterion. The outlier filtering in this embodiment is based on the distribution of distances from each point to its nearby points in the input data: for each point, the average distance to all of its nearby points is computed. Assuming the result follows a Gaussian distribution whose shape is determined by the mean and the standard deviation, points whose average distance falls outside the standard range (defined by the global distance mean and variance) can be regarded as outliers and removed from the data set.
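The behaviour described above corresponds to PCL's StatisticalOutlierRemoval filter; a minimal sketch, with assumed values for the neighborhood size and the standard-deviation multiplier, is:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/filters/statistical_outlier_removal.h>

    using PointT = pcl::PointXYZ;

    pcl::PointCloud<PointT>::Ptr remove_outliers(const pcl::PointCloud<PointT>::Ptr& cloud) {
      pcl::StatisticalOutlierRemoval<PointT> sor;
      sor.setInputCloud(cloud);
      sor.setMeanK(20);             // neighbors used to compute each point's mean distance
      sor.setStddevMulThresh(1.0);  // points beyond mean + 1.0 * stddev are removed
      pcl::PointCloud<PointT>::Ptr sparse(new pcl::PointCloud<PointT>);
      sor.filter(*sparse);
      return sparse;
    }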
In addition to the above-described method of filtering outliers, outliers can also be filtered by the following method.
In other embodiments, preferably, the specific steps of respectively performing the processing of filtering outliers on each frame of the ground plane downsampled filtered point cloud are,
constructing a filtering sliding window with a preset size, and configuring a filtering threshold value of a preset number for the filtering sliding window; and respectively sliding the filtering sliding window in each frame of ground plane down-sampling filtering point cloud, and if the number of sampling points falling into the filtering sliding window is lower than the filtering threshold value, filtering the sampling points falling into the filtering sliding window from the corresponding ground plane down-sampling filtering point cloud to obtain the multi-frame ground plane sparse point cloud.
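This window-based variant is not a standard PCL filter; the following is a hypothetical grid-counting sketch of the idea, in which window_size and min_points stand in for the preset window size and filtering threshold:

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    using PointT = pcl::PointXYZ;

    // Drop every point that falls into a window (grid cell) containing fewer than
    // min_points samples; sparsely populated cells are treated as outliers.
    pcl::PointCloud<PointT>::Ptr window_count_filter(const pcl::PointCloud<PointT>::Ptr& cloud,
                                                     float window_size, std::size_t min_points) {
      auto cell_key = [window_size](const PointT& p) {
        // pack the 2D cell index of the point into a single 64-bit key
        std::int64_t ix = static_cast<std::int64_t>(std::floor(p.x / window_size));
        std::int64_t iy = static_cast<std::int64_t>(std::floor(p.y / window_size));
        return (ix << 32) ^ (iy & 0xffffffff);
      };

      std::unordered_map<std::int64_t, std::size_t> counts;
      for (const auto& p : cloud->points) {
        ++counts[cell_key(p)];
      }

      pcl::PointCloud<PointT>::Ptr sparse(new pcl::PointCloud<PointT>);
      for (const auto& p : cloud->points) {
        if (counts[cell_key(p)] >= min_points) {
          sparse->push_back(p);
        }
      }
      return sparse;
    }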
In this particular embodiment: specifically, the step S4 is,
and performing inter-frame matching between every two adjacent frames of the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud.
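The publication does not name the registration algorithm used for this inter-frame matching; as one plausible choice, a hedged sketch using PCL's Normal Distributions Transform (NDT) is given below, with illustrative parameter values. Accumulating the returned relative transforms frame by frame yields the pose of each ground plane sparse point cloud.

    #include <Eigen/Core>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/registration/ndt.h>

    using PointT = pcl::PointXYZ;

    // Estimate the rigid transform that aligns the current frame to the previous one.
    Eigen::Matrix4f match_frames(const pcl::PointCloud<PointT>::Ptr& prev,
                                 const pcl::PointCloud<PointT>::Ptr& curr,
                                 const Eigen::Matrix4f& init_guess) {
      pcl::NormalDistributionsTransform<PointT, PointT> ndt;
      ndt.setResolution(1.0);               // NDT voxel resolution [m]
      ndt.setTransformationEpsilon(0.01);   // convergence criterion
      ndt.setMaximumIterations(35);
      ndt.setInputSource(curr);
      ndt.setInputTarget(prev);

      pcl::PointCloud<PointT> aligned;
      ndt.align(aligned, init_guess);       // init_guess can come from imu integration
      return ndt.getFinalTransformation();
    }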
In this particular embodiment: specifically, the step S5 is,
extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds at preset distance intervals according to the pose of each frame of ground plane sparse point cloud;
constructing a probability map frame from all the keyframe ground plane sparse point clouds;
adding a first set of edges to the probability map frame using the pose data and positioning data of the robot corresponding to the keyframe ground plane sparse point clouds, to obtain an initial probability map;
adding a second set of edges to the initial probability map using the ground detection parameters corresponding to the keyframe ground plane sparse point clouds, to obtain an intermediate probability map;
performing loop-closure detection on the intermediate probability map with a preset edge, and adding the preset edge to the intermediate probability map if the detection succeeds, to obtain the final probability map (a hedged sketch of such a graph construction follows these steps);
and positioning the robot according to the final probability map.
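The publication does not name the optimization library behind the probability map, but the cited hdl_graph_slam reference builds this kind of keyframe graph with g2o. The sketch below is a hypothetical illustration of the graph skeleton only: keyframe poses as vertices and an odometry edge between them. The gps/imu prior edges, ground detection edges and loop-closure edges described above would be added in the same manner but require dedicated edge types that are not shown; it also assumes a g2o version whose solver constructors take std::unique_ptr arguments.

    #include <memory>
    #include <Eigen/Geometry>
    #include <g2o/core/sparse_optimizer.h>
    #include <g2o/core/block_solver.h>
    #include <g2o/core/optimization_algorithm_levenberg.h>
    #include <g2o/solvers/eigen/linear_solver_eigen.h>
    #include <g2o/types/slam3d/vertex_se3.h>
    #include <g2o/types/slam3d/edge_se3.h>

    int main() {
      // pose-graph optimizer (Levenberg-Marquardt over SE3 keyframe poses)
      g2o::SparseOptimizer graph;
      auto linear = std::make_unique<
          g2o::LinearSolverEigen<g2o::BlockSolverX::PoseMatrixType>>();
      auto block = std::make_unique<g2o::BlockSolverX>(std::move(linear));
      graph.setAlgorithm(new g2o::OptimizationAlgorithmLevenberg(std::move(block)));

      // one vertex per keyframe ground plane sparse point cloud
      auto* v0 = new g2o::VertexSE3();
      v0->setId(0);
      v0->setEstimate(Eigen::Isometry3d::Identity());
      v0->setFixed(true);                    // anchor the first keyframe
      graph.addVertex(v0);

      auto* v1 = new g2o::VertexSE3();
      v1->setId(1);
      Eigen::Isometry3d relative = Eigen::Isometry3d::Identity();
      relative.translation() = Eigen::Vector3d(1.0, 0.0, 0.0);  // from the front-end odometry
      v1->setEstimate(relative);
      graph.addVertex(v1);

      // odometry edge constraining the two keyframe poses
      auto* odom = new g2o::EdgeSE3();
      odom->setVertex(0, v0);
      odom->setVertex(1, v1);
      odom->setMeasurement(relative);
      odom->setInformation(Eigen::Matrix<double, 6, 6>::Identity());
      graph.addEdge(odom);

      // loop-closure and prior edges would be added analogously before optimizing
      graph.initializeOptimization();
      graph.optimize(20);
      return 0;
    }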
Based on the improved slam and multi-sensor fusion robot positioning method, the invention also provides an improved slam and multi-sensor fusion robot positioning system.
As shown in fig. 2, an improved slam and multi-sensor fused robot positioning system includes the following modules,
the positioning data packet acquisition module, used for collecting spatial data with a laser radar, an inertial sensor and a positioning device integrated on the robot to obtain multiple frames of robot positioning data packets;
the ground plane detection module, used for segmenting a ground point cloud from each frame of robot positioning data packet based on a point-cloud plane detection method, and sequentially filtering and plane-fitting each frame of ground point cloud to obtain multiple frames of ground plane point clouds;
the point cloud filtering module, used for applying point cloud filtering to each frame of ground plane point cloud to obtain multiple frames of ground plane sparse point clouds;
the front-end odometry module, used for performing front-end odometry with the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud;
and the back-end probability map construction module, used for extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds according to the pose of each frame of ground plane sparse point cloud, constructing a probability map from the keyframe ground plane sparse point clouds, and positioning the robot according to the probability map.
In the improved slam and multi-sensor fused robot positioning system of the invention, the specific functions of each module are as described in the corresponding steps of the improved slam and multi-sensor fused robot positioning method of the invention, and are not repeated here.
Although laser slam itself is relatively simple, current open-source algorithms do not fuse the three sensors gps, imu and lidar at the same time. The improved slam and multi-sensor fused robot positioning method and system of the invention adopt an improved slam positioning method, can fuse the three sensors gps, imu and lidar simultaneously, and, through ground detection and segmentation, point cloud filtering, front-end odometry and back-end probability map construction, can effectively constrain the elevation error and improve positioning accuracy.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An improved slam and multi-sensor fused robot positioning method, characterized in that it comprises the following steps,
S1, collecting spatial data with a laser radar, an inertial sensor and a positioning device integrated on the robot, to obtain multiple frames of robot positioning data packets;
S2, segmenting a ground point cloud from each frame of robot positioning data packet based on a point-cloud plane detection method, and sequentially filtering and plane-fitting each frame of ground point cloud to obtain multiple frames of ground plane point clouds;
S3, applying point cloud filtering to each frame of ground plane point cloud to obtain multiple frames of ground plane sparse point clouds;
S4, performing front-end odometry with the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud;
and S5, extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds according to the pose of each frame of ground plane sparse point cloud, constructing a probability map from the keyframe ground plane sparse point clouds, and positioning the robot according to the probability map.
2. The improved slam and multi-sensor fused robot positioning method of claim 1, wherein: specifically, the step S1 is,
continuously scanning the space by using a laser radar integrated on the robot to obtain a plurality of frames of space point clouds;
correspondingly acquiring pose data and positioning data of the robot in real time by using an inertial sensor and a positioning device integrated on the robot;
and fusing the multi-frame space point cloud of the robot with the real-time pose data and the positioning data to obtain a multi-frame robot positioning data packet.
3. The improved slam and multi-sensor fused robot positioning method of claim 2, wherein: specifically, the step S2 is,
respectively segmenting ground point clouds from the corresponding space point clouds according to the height parameters of each frame of space point cloud;
filtering out of each frame of ground point cloud, along the normal direction, the points that lie more than a preset distance from a preset reference plane, to obtain multiple frames of reference plane point clouds;
and performing plane fitting on each frame of reference plane point cloud to obtain multiple frames of ground plane point clouds.
4. The improved slam and multi-sensor fused robot positioning method of claim 1, wherein: specifically, the step S3 is,
respectively carrying out down-sampling filtering on each frame of ground plane point cloud to obtain a plurality of frames of ground plane down-sampling filtering point cloud;
and respectively carrying out filtering outlier processing on each frame of the ground plane downsampling filtering point cloud to obtain a plurality of frames of ground plane sparse point clouds.
5. The improved slam and multi-sensor fused robot positioning method of claim 4, wherein: a VoxelGrid filter, an ApproximateVoxelGrid filter, or no filter (NONE) is used to down-sample each frame of ground plane point cloud respectively.
6. The improved slam and multi-sensor fused robot positioning method of claim 4, wherein: the specific steps of respectively filtering the ground plane down-sampling filtering point clouds to remove outliers are as follows,
respectively counting the adjacent points of each sampling point in each frame of the ground plane down-sampling filtering point cloud to obtain an adjacent point group of each sampling point in each frame of the ground plane down-sampling filtering point cloud;
respectively carrying out Gaussian distribution calculation on the distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to each adjacent point in an adjacent point group, and correspondingly obtaining the average distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to the adjacent point group;
and respectively filtering the adjacent points of which the distance from each sampling point in each frame of ground plane down-sampling filtering point cloud to the adjacent point in the adjacent point group exceeds the average distance to obtain a plurality of frames of ground plane sparse point clouds.
7. The improved slam and multi-sensor fused robot positioning method of claim 4, wherein: the specific steps of respectively filtering the ground plane down-sampling filtering point clouds to remove outliers are as follows,
constructing a filtering sliding window with a preset size, and configuring a filtering threshold value of a preset number for the filtering sliding window; and respectively sliding the filtering sliding window in each frame of ground plane down-sampling filtering point cloud, and if the number of sampling points falling into the filtering sliding window is lower than the filtering threshold value, filtering the sampling points falling into the filtering sliding window from the corresponding ground plane down-sampling filtering point cloud to obtain the multi-frame ground plane sparse point cloud.
8. The improved slam and multi-sensor fused robot positioning method of claim 1, wherein: specifically, the step S4 is,
and performing inter-frame matching between every two adjacent frames of the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud.
9. The improved slam and multi-sensor fused robot positioning method of claim 3, wherein: specifically, the step S5 is,
extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds at preset distance intervals according to the pose of each frame of ground plane sparse point cloud;
constructing a probability map frame from all the keyframe ground plane sparse point clouds;
adding a first set of edges to the probability map frame using the pose data and positioning data of the robot corresponding to the keyframe ground plane sparse point clouds, to obtain an initial probability map;
adding a second set of edges to the initial probability map using the ground detection parameters corresponding to the keyframe ground plane sparse point clouds, to obtain an intermediate probability map;
performing loop-closure detection on the intermediate probability map with a preset edge, and adding the preset edge to the intermediate probability map if the detection succeeds, to obtain the final probability map;
and positioning the robot according to the final probability map.
10. An improved slam and multi-sensor fused robot positioning system, characterized in that it comprises the following modules,
the positioning data packet acquisition module, used for collecting spatial data with a laser radar, an inertial sensor and a positioning device integrated on the robot to obtain multiple frames of robot positioning data packets;
the ground plane detection module, used for segmenting a ground point cloud from each frame of robot positioning data packet based on a point-cloud plane detection method, and sequentially filtering and plane-fitting each frame of ground point cloud to obtain multiple frames of ground plane point clouds;
the point cloud filtering module, used for applying point cloud filtering to each frame of ground plane point cloud to obtain multiple frames of ground plane sparse point clouds;
the front-end odometry module, used for performing front-end odometry with the multiple frames of ground plane sparse point clouds to obtain the pose of each frame of ground plane sparse point cloud;
and the back-end probability map construction module, used for extracting keyframe ground plane sparse point clouds from the multiple frames of ground plane sparse point clouds according to the pose of each frame of ground plane sparse point cloud, constructing a probability map from the keyframe ground plane sparse point clouds, and positioning the robot according to the probability map.
CN202111434276.7A 2021-11-29 2021-11-29 Improved robot positioning method and system with slam and multiple sensors fused Pending CN114234976A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111434276.7A CN114234976A (en) 2021-11-29 2021-11-29 Improved robot positioning method and system with slam and multiple sensors fused

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111434276.7A CN114234976A (en) 2021-11-29 2021-11-29 Improved robot positioning method and system with slam and multiple sensors fused

Publications (1)

Publication Number Publication Date
CN114234976A true CN114234976A (en) 2022-03-25

Family

ID=80751852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111434276.7A Pending CN114234976A (en) 2021-11-29 2021-11-29 Improved robot positioning method and system with slam and multiple sensors fused

Country Status (1)

Country Link
CN (1) CN114234976A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021128297A1 (en) * 2019-12-27 2021-07-01 深圳市大疆创新科技有限公司 Method, system and device for constructing three-dimensional point cloud map
CN113066105A (en) * 2021-04-02 2021-07-02 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021128297A1 (en) * 2019-12-27 2021-07-01 深圳市大疆创新科技有限公司 Method, system and device for constructing three-dimensional point cloud map
CN113066105A (en) * 2021-04-02 2021-07-02 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SEINLVAN: "hdl_graph_slam", Retrieved from the Internet <URL: https://blog.csdn.net/Seinlvan/article/details/119115908> *
Xiang Yunchuan et al.: "Technical Requirements for the Application of Geochemical Exploration Data", Geological Publishing House, 31 August 2010, page 31 *
Zhang Depei: "Research on Key Technologies of Three-dimensional Information Perception for Pipeline UAVs", China Masters' Theses Full-text Database, Engineering Science and Technology II, no. 8, 15 August 2020 (2020-08-15), pages 031-118 *
Li Yingbing et al.: "Surveying and Mapping Program Design, Volume II", Wuhan University Press, 31 July 2020, page 225 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination