CN110675307B - Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM - Google Patents

Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM

Info

Publication number
CN110675307B
CN110675307B
Authority
CN
China
Prior art keywords
point cloud
map
image
algorithm
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910764913.3A
Other languages
Chinese (zh)
Other versions
CN110675307A (en)
Inventor
胡莉英
闫泽昊
孙玲玲
李郑慧
刘新宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910764913.3A priority Critical patent/CN110675307B/en
Publication of CN110675307A publication Critical patent/CN110675307A/en
Application granted granted Critical
Publication of CN110675307B publication Critical patent/CN110675307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 3/067
    • G01C 11/04 Photogrammetry or videogrammetry; interpretation of pictures
    • G01C 21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/32 Structuring or formatting of map data
    • G06T 7/11 Image analysis; region-based segmentation
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/10028 Range image; depth image; 3D point clouds
    • Y02T 10/40 Engine management systems

Abstract

The invention discloses a VSLAM-based method for converting a 3D sparse point cloud into a 2D grid map. Visual SLAM processes the image information acquired by a camera to generate a 3D sparse point cloud. Color-based visual segmentation divides the image into a plane region and a height region. The plane region is used for mapping map points: the Bresenham line-drawing algorithm is applied to all received key frames and map points, and a counter is added to optimize the mapping units. The height region is used for obstacle detection, employing a slope threshold and the Canny edge detection algorithm. Finally, autonomous navigation is performed on the converted 2D grid map, which can be used for navigation and obstacle avoidance, using the Dijkstra global planning algorithm and the DWA local path planning algorithm.

Description

Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM
Technical Field
The invention belongs to the fields of computer vision and mobile robot navigation, and relates to a method for converting a 3D sparse point cloud generated by visual SLAM into a 2D grid map.
Background
Simultaneous Localization and Mapping (SLAM) is an important research field in autonomous robot navigation. A robot carrying specific sensors moves without prior information about the environment, constructing a map of the environment while estimating its own motion. SLAM technology is prominently applied in fields such as indoor mobile robots, autonomous driving, and Augmented Reality (AR). Indoor mobile robots inject new momentum into industries such as logistics transportation, warehouse inventory, port transportation, indoor rescue, and supermarket service robots.
The mature simultaneous localization and mapping schemes currently on the market are Lidar-based laser SLAM, used for example in well-known household sweeping robots and in the autonomous vehicles under development by companies such as Baidu. However, multi-line lasers are expensive, and single-line lidar has a limited detection range and lacks semantic information. Most of the market, meanwhile, needs cost-effective and stable products.
Disclosure of Invention
Aiming at the problems that an indoor mobile robot needs a two-dimensional map for positioning and navigation and that using a laser radar is costly, the invention provides a method for converting a 3D sparse point cloud generated by visual SLAM into a 2D grid navigation map.
The disclosed VSLAM-based method for converting a 3D sparse point cloud into a 2D grid map comprises the following steps:
Step 1, sparse point cloud generation
A sparse 3D point cloud is generated by a visual SLAM algorithm. A binocular camera is installed on the robot to acquire indoor environment information. The camera is calibrated with the kalibr calibration method so that images can be undistorted. Harris corners are then extracted from the image, and adjacent frames are tracked with an optical flow method. Purely visual estimation of the pose and 3D point inverse depth of all frames in a sliding window is carried out through SfM, and the initialization parameters are solved. After initialization succeeds, back-end sliding-window optimization solves the PVQ (position, velocity, and orientation) and bias of all frames in the window. Closed-loop detection is performed; after a successful detection the system relocalizes, and global graph optimization is applied to the whole camera trajectory. Finally, the generated 3D point cloud is saved.
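As a minimal illustration of the front-end step just described (Harris corners tracked by optical flow between adjacent frames), the following Python sketch uses OpenCV; the file names and parameter values are assumptions for illustration, not values taken from the patent.

```python
import cv2

# Two adjacent frames; the file names are placeholders.
prev_img = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_k1.png", cv2.IMREAD_GRAYSCALE)

# Extract corners using the Harris response (useHarrisDetector=True).
corners = cv2.goodFeaturesToTrack(
    prev_img, maxCorners=150, qualityLevel=0.01, minDistance=30,
    useHarrisDetector=True, k=0.04)

# Track the corners into the next frame with pyramidal Lucas-Kanade optical flow.
tracked, status, _err = cv2.calcOpticalFlowPyrLK(
    prev_img, next_img, corners, None, winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked pairs.
good_prev = corners[status.flatten() == 1]
good_next = tracked[status.flatten() == 1]
print(f"tracked {len(good_next)} of {len(corners)} corners")
```

The surviving correspondences are what the SfM initialization and the sliding-window back end consume.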
Step 2, point cloud data optimization
First, the 3D point cloud saved in step 1 is loaded. Color-based visual segmentation is then performed on the point cloud, dividing it into a plane region and a height region: the plane region is used to generate the 2D grid map, and the height region is used to establish obstacles.
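The patent does not detail the color segmentation itself; the sketch below is a simplified stand-in that assumes each point carries an RGB color and that points close to a known floor color belong to the plane region. The reference color floor_rgb and tolerance color_tol are hypothetical.

```python
import numpy as np

def segment_point_cloud(points_xyzrgb, floor_rgb=(128, 128, 128), color_tol=40.0):
    """Split an (N, 6) array [x, y, z, r, g, b] into a plane region and a
    height region by color similarity to a reference floor color.
    floor_rgb and color_tol are hypothetical values for illustration."""
    colors = points_xyzrgb[:, 3:6].astype(np.float64)
    dist = np.linalg.norm(colors - np.asarray(floor_rgb, dtype=np.float64), axis=1)
    plane_mask = dist < color_tol
    plane_region = points_xyzrgb[plane_mask]       # used to build the 2D grid map
    height_region = points_xyzrgb[~plane_mask]     # used for obstacle detection
    return plane_region, height_region
```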
Step 3, obtaining the image key frame and map points
The start-up file of the SLAM program is modified so that every key frame is added to the map. Two nodes are created through a ROS package: Mynteyepub detects when loop closure executes and publishes the 3D pose of each key frame together with all map points visible in that key frame, for Mynteyesub to subscribe to.
Step 4, map mapping
After all key frames are received, the Y coordinate is dropped, and the camera poses and all visible map points are projected onto the XZ plane and stored in a dictionary structure. One key frame is processed at a time: for all map points visible in the key frame, rays are projected from the camera position to each visible point using the Bresenham line-drawing algorithm. An access counter is incremented for every point on the ray, and an occupancy counter is incremented for the endpoint, which corresponds to the mapped point's location. The access and occupancy counters are stored in a two-dimensional matrix proportional to the XZ mapping plane, each row-column entry of the matrix representing one cell location. After all key frames are processed, the unoccupied probability of each cell in the grid map is calculated as:
p_free(i,j) = 1 - occupied(i,j) / visible(i,j)
where occupied(i,j) and visible(i,j) denote the number of times a cell has been occupied and accessed, and p_free(i,j) denotes the probability that the cell is unoccupied. By setting two thresholds, cells are divided into three cases: (1) if p_free(i,j) is above a free threshold free_threshold, the cell is considered free; (2) if p_free(i,j) is below an occupancy threshold, the cell is considered occupied; (3) otherwise, the cell is treated as unknown.
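A minimal Python sketch of this ray-casting and counting scheme follows, consistent with the description above; the grid size, the bresenham helper, and the two threshold values are illustrative assumptions.

```python
import numpy as np

def bresenham(x0, z0, x1, z1):
    """Integer cells on the line from (x0, z0) to (x1, z1), endpoint included."""
    cells, dx, dz = [], abs(x1 - x0), -abs(z1 - z0)
    sx, sz = (1 if x0 < x1 else -1), (1 if z0 < z1 else -1)
    err = dx + dz
    while True:
        cells.append((x0, z0))
        if x0 == x1 and z0 == z1:
            return cells
        e2 = 2 * err
        if e2 >= dz:
            err += dz; x0 += sx
        if e2 <= dx:
            err += dx; z0 += sz

H = W = 200                                    # grid size (assumed)
visible = np.zeros((H, W))
occupied = np.zeros((H, W))

def integrate_keyframe(cam_cell, point_cells):
    """Ray-cast from the camera cell to every map point cell of one key frame."""
    for pt in point_cells:
        ray = bresenham(*cam_cell, *pt)
        for (i, j) in ray:
            visible[i, j] += 1                 # access counter along the ray
        occupied[ray[-1]] += 1                 # occupancy counter at the endpoint

def classify(free_threshold=0.55, occupied_threshold=0.50):
    """p_free = 1 - occupied/visible; returns 1 = free, 0 = occupied, -1 = unknown."""
    grid = -np.ones((H, W), dtype=int)         # everything starts unknown
    seen = visible > 0
    p_free = np.where(seen, 1.0 - occupied / np.maximum(visible, 1), 0.0)
    grid[seen & (p_free > free_threshold)] = 1
    grid[seen & (p_free < occupied_threshold)] = 0
    return grid
```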
step 5, optimizing the mapping unit
In the 3D-to-2D map mapping of step 4, a global counter counts all received key frames. In the actual mapping process, however, several map points may be collinear in the 2D map; the Bresenham line-drawing algorithm then increments the access counter for every point on the line, lowering the occupancy ratio of all points on it, so occupied cells may be replaced by free cells and misjudged. To address this, a local counter is combined with the global counter: in places where the number of local feature points exceeds a set threshold, the local counter is used for the received key frames, and its counts are then summed with the key-frame counts of the global counter through an AND operation. This optimizes the counting and improves the mapping quality.
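The patent states this combination only loosely, so the sketch below is one plausible reading: in feature-dense key frames, a per-keyframe boolean mask (the local counter) limits the access increment to once per key frame per cell before it is summed into the global counter. The density threshold is hypothetical.

```python
import numpy as np

FEATURE_DENSITY_THRESHOLD = 10   # assumed value, not from the patent

def integrate_rays(global_visible, rays, feature_count):
    """rays: lists of (i, j) cells from Bresenham for one key frame.

    In feature-dense key frames, collinear map points would otherwise bump
    the access counter many times and wash out occupied cells; the boolean
    per-keyframe mask caps the count at one per key frame per cell before
    it is summed into the global counter."""
    if feature_count > FEATURE_DENSITY_THRESHOLD:
        local = np.zeros(global_visible.shape, dtype=bool)
        for ray in rays:
            for (i, j) in ray:
                local[i, j] = True            # local counter: once per key frame
        global_visible += local               # summed into the global counter
    else:
        for ray in rays:
            for (i, j) in ray:
                global_visible[i, j] += 1     # plain global counting
```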
Step 6, obstacle detection
Obstacle detection and mapping are achieved using a slope threshold. To distinguish obstacles better, two different thresholds are used: a 15° threshold for the plane region after visual segmentation and an 80° threshold for the height region. To eliminate false obstacles, two constraints are added to the detection algorithm: (1) only points within a certain height above the mapping plane are considered; (2) the slope of the plane in which the estimated points lie is thresholded, and a verticality constraint is applied to each point: on the vertical line containing the point, the number of pixels must exceed a set threshold.
The slope threshold may wrongly eliminate real obstacles; such incorrectly eliminated obstacles mostly lie at the boundary between free cells and unknown cells. A Canny edge detection algorithm is therefore used to find the boundary between free and unknown cells: a fixed threshold is set, and all pixels corresponding to edges detected in the probability map are set to occupied. This yields a 2D grid map usable for navigation and obstacle avoidance. Through these two thresholds, obstacles are identified more reliably and the accuracy of the navigation map improves.
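A simplified sketch of the Canny-based boundary marking follows. It renders the grid as a gray-level image (free bright, unknown mid-gray, occupied dark), so the strong edges found by Canny include the free/unknown boundary; the cell encoding and the Canny thresholds are assumptions carried over from the mapping sketch above.

```python
import cv2
import numpy as np

def mark_boundary_obstacles(grid, canny_low=50, canny_high=150):
    """Mark cells on strong edges of the probability map as occupied.
    grid: int array with 1 = free, 0 = occupied, -1 = unknown (assumed
    convention); the fixed Canny thresholds are illustrative."""
    img = np.full(grid.shape, 128, dtype=np.uint8)  # unknown -> mid-gray
    img[grid == 1] = 255                            # free -> bright
    img[grid == 0] = 0                              # occupied -> dark
    edges = cv2.Canny(img, canny_low, canny_high)   # fixed thresholds
    out = grid.copy()
    out[edges > 0] = 0                              # edge pixels become occupied
    return out
```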
Step 7, autonomous navigation: the above steps yield a 2D grid map that can be used for navigation and obstacle avoidance. The invention performs navigation on the basis of the completed visual SLAM map, adopting the Dijkstra global planning algorithm and the DWA local path planning algorithm to accomplish indoor autonomous movement of the robot. (1) Global path planning finds the fastest path to the target point on the map, based on the environment map recorded in advance by the robot combined with the robot's current pose and the position of the task's target point. (2) Local path planning adjusts the original global plan in real time according to the environment while the robot drives toward the goal.
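For concreteness, a minimal Dijkstra sketch over the occupancy grid is given below. It is a generic 4-connected grid implementation under the cell encoding assumed earlier (1 = free), not the planner of the ROS navigation stack.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path on the grid (1 = free); returns the cell
    path from start to goal, or None if the goal is unreachable."""
    H, W = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            break
        if d > dist.get((i, j), float("inf")):
            continue                       # stale heap entry
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < H and 0 <= nj < W and grid[ni][nj] == 1:
                nd = d + 1.0
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(heap, (nd, (ni, nj)))
    if goal not in dist:
        return None
    path, node = [goal], goal              # walk predecessors back to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```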
Compared with the prior art, the invention has the following advantages:
1. Popular indoor positioning methods include Wi-Fi, Bluetooth, RFID, UWB, infrared, and ultrasonic technologies; however, these suffer from problems such as low positioning accuracy, susceptibility to obstacles, and large jitter during movement. The invention adopts visual SLAM to achieve accurate indoor positioning and overcomes such influences of the medium.
2. A visual-SLAM positioning scheme is provided, solving the problems of laser SLAM positioning such as the high manufacturing cost of Lidar and the lack of semantic information in the map.
3. Converting a dense 3D point cloud into a 2D grid map already exists on the market, but generating a dense point cloud places high demands on the platform and usually requires CUDA to accelerate image processing. This scheme converts a sparse point cloud instead, avoiding the high energy consumption and high latency of dense point cloud generation and improving operating efficiency.
4. The method adopts a color-based visual segmentation algorithm and a combined global-counter/local-counter mode to achieve more accurate and faster mapping of map points. Obstacle detection is realized through threshold algorithms, and a semantic map is finally obtained.
Drawings
FIG. 1 is a system overall frame;
FIG. 2 is a schematic diagram of sliding window optimization;
FIG. 3 is a flow chart of 3D sparse point cloud to 2D grid map conversion;
FIG. 4 is a flow chart of the Bresenham line-drawing algorithm.
Detailed Description
The invention is further described below with reference to the drawings and the detailed description.
Table 1 shows the system platform parameters. The hardware environment is a Jetson TX2 running Linux. The experiments comprise a dataset test and a field test; the field test uses a MYNTEYE camera for data acquisition, and performance under different illumination conditions is evaluated in different scenes, namely an experimental corridor and a rooftop balcony.
TABLE 1
Parameter                  Implementation condition
System hardware platform   Jetson TX2
Visual sensor              MYNTEYE S1030
Operating environment      Ubuntu 16.04
Programming languages      C++, Python
Test environment           Experiment building (30 × 20 m²)
As shown in FIG. 1, the method for converting a 3D sparse point cloud generated by visual SLAM into a 2D grid map comprises three parts: 3D point cloud generation, 2D grid map conversion, and path planning with autonomous navigation. The core steps of these three parts are described in detail below.
1. Generating a 3D point cloud image:
A) For better robustness, a SLAM algorithm fusing vision and inertial navigation is adopted. A MYNTEYE camera with a built-in six-axis IMU is mounted on the robot, and hardware synchronization between camera and IMU is realized. The camera and IMU are calibrated offline with the joint calibration tool kalibr and their timestamps synchronized; the MYNTEYE camera is then used to collect environment information.
B) Conventional visual odometry methods on the market include the feature-point method, the direct method, and the optical-flow method; this method adopts the optical-flow method to estimate the motion of pixels between images at different times. The back end uses a sliding-window method to solve for position, velocity, rotation, bias, and so on; FIG. 2 is a schematic diagram of the sliding-window optimization.
C) Finally, the SLAM algorithm converts the poses acquired by the camera into point cloud information, a depth image is generated from the point cloud using projection and orthographic principles, and the point cloud data is stored through PCL for later map conversion.
2. Conversion of 3D point cloud to 2D grid:
a) The 3D point cloud to 2D grid map conversion process is shown in fig. 3. The method for converting the 3D point cloud image into the 2D grid image is quite many, and the octomap provided by the ROS is used for completing the generation and conversion of the grid image, and hopes that the point cloud data are as much as possible, and for the conversion of a sparse map, the association degree between map points is small, mismatching is easy to cause, and the map conversion accuracy is low, so that the method is more suitable for the conversion of quasi-dense and dense maps. Another more common approach is to use the ROS packet of pointgroup_to_laser to convert the point cloud data to a simulated Lidar scan, and then use the resulting data as input to the Gmapping algorithm to generate a 2D raster image from the Lidar scan. However, this approach may introduce additional uncertainty in the map, as the mapping algorithm always assumes that Lidar data is noisy, while the visually generated point cloud contains only reliable points. According to the invention, key frame gestures and all map points of a sparse point cloud image are created and released through ROS, point cloud data are mapped to idle cells in the grid image through a visual segmentation algorithm and a Bresenham algorithm, and the 2D grid image which can be used for navigation and obstacle avoidance is finally generated through optimization processing of slope threshold, gaussian smoothing and other algorithms.
B) In mapping the point cloud poses and map points, the Bresenham line-drawing algorithm projects rays from the camera position to all visible points. FIG. 4 is a flow chart of the Bresenham algorithm, which draws the straight line determined by two points by computing the raster points closest to the line segment on an n-dimensional grid.
C) For the generated 2D grid map, the real road-condition map of the experimental corridor is acquired, the two maps are aligned through a homography matrix, and accuracy is then assessed with an accuracy measurement and an integrity score.
Integrity = number of known cells in the map / total number of cells in the map
Accuracy = number of correctly known cells in the map / total number of known cells in the map
The accuracy measurement shows the similarity between the generated map and the actual road-condition map; the integrity score represents how much of the map the algorithm marks as known, regardless of whether it is correct.
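Both scores reduce to simple counting once the two maps are aligned. A minimal sketch, assuming the cell encoding used earlier (1 = free, 0 = occupied, -1 = unknown) and numpy arrays of equal shape:

```python
import numpy as np

def map_scores(generated, ground_truth, unknown=-1):
    """Integrity and accuracy as defined above; both maps must already be
    aligned (e.g. via a homography). The encoding is an assumption."""
    known = generated != unknown
    integrity = known.sum() / generated.size            # known / all cells
    correct = known & (generated == ground_truth)
    accuracy = correct.sum() / max(known.sum(), 1)      # correct / known cells
    return integrity, accuracy
```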
3. Path planning:
the path planning is widely applied to the fields of autonomous collision-free movement of robots, vehicle resource allocation of logistics management and the like. The path planning algorithms commonly used in the market are: dijkstra algorithm, a search algorithm, simulated annealing algorithm, ant colony algorithm, genetic algorithm, particle swarm algorithm, floyd algorithm, fallback algorithm, and the like.
According to the degree to which the environment information is known, path planning divides into global path planning based on prior information and local path planning based on sensor information. Global planning adopts the Dijkstra algorithm, which computes the shortest path from one node to the other nodes following the breadth-first search idea: expansion proceeds outward layer by layer around the start point until the goal point is reached. Local path planning adopts the DWA algorithm: several groups of velocities are sampled in the velocity space (v, w), the motion trajectories of these velocities over a certain time are simulated, all trajectories are then scored by an evaluation function, and the optimal velocity is selected and sent back to the robot, achieving locally optimal path planning.
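A compact sketch of the DWA sampling loop described above follows; the velocity ranges, evaluation weights, sampling resolution, and the obstacle test against the grid are illustrative assumptions rather than the patent's parameters.

```python
import math

def dwa_step(pose, goal, grid, v_range=(0.0, 0.5), w_range=(-1.0, 1.0),
             res=0.05, dt=0.1, horizon=1.5, nv=6, nw=11,
             w_heading=1.0, w_speed=0.2):
    """Sample (v, w) pairs, forward-simulate short arcs, score them, and
    return the best velocity command.
    pose = (x, y, theta) in meters/radians; grid[i][j] == 0 means occupied;
    res is meters per cell. All weights and ranges are illustrative."""
    best, best_score = (0.0, 0.0), -math.inf
    for a in range(nv):
        v = v_range[0] + (v_range[1] - v_range[0]) * a / (nv - 1)
        for b in range(nw):
            w = w_range[0] + (w_range[1] - w_range[0]) * b / (nw - 1)
            x, y, th = pose
            collided = False
            for _ in range(int(horizon / dt)):      # simulate the trajectory
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                th += w * dt
                i, j = int(y / res), int(x / res)
                if not (0 <= i < len(grid) and 0 <= j < len(grid[0])) \
                        or grid[i][j] == 0:
                    collided = True                 # hits obstacle or leaves map
                    break
            if collided:
                continue
            # Evaluation function: prefer endpoints near the goal, higher speed.
            score = -w_heading * math.hypot(goal[0] - x, goal[1] - y) + w_speed * v
            if score > best_score:
                best_score, best = score, (v, w)
    return best    # (v, w) command sent back to the robot
```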
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (1)

1. A VSLAM-based method for converting a 3D sparse point cloud into a 2D grid map, characterized by comprising the following steps:
step 1, sparse point cloud generation
Generating a sparse 3D point cloud image through a visual SLAM algorithm; installing a binocular camera on the robot to acquire indoor environment information; calibrating the image by a kalibr calibration method to realize the de-distortion operation of the image; then extracting Harris angular points of the image, and tracking adjacent frames by using an optical flow method; pure vision estimation of pose and 3D point inverse depth of all frames in a sliding window is carried out through SfM, and initialization parameters are solved; after the initialization is successful, the back-end sliding window optimization is carried out to solve PVQ and Bias of all frames in the sliding window; performing closed loop detection, repositioning after the detection is successful, and performing global map optimization on the whole camera track; finally, the generated 3D point cloud image is stored;
step 2, point cloud data optimization
Firstly, loading the 3D point cloud image stored in the step 1, then performing color-based visual segmentation on the point cloud, dividing the point cloud into a plane area and a height area, wherein the plane area is used for generating a 2D grid image, and the height area is used for establishing an obstacle;
step 3, obtaining the image key frame and map points
Modifying a start-up file of the SLAM program, and adding each key frame into a map; creating two nodes through the ROS package, where Mynteyepub detects when loop closure executes and publishes the 3D pose of each key frame and all visible map points in each key frame for Mynteyesub to subscribe to;
step 4, map mapping
After all key frames are received, removing the Y coordinate, projecting the camera poses and all visible map points onto an XZ plane, and storing them in a dictionary structure; processing one key frame at a time, and for all map points visible in each key frame, projecting rays from the camera position to all visible points using the Bresenham line-drawing algorithm; incrementing an access counter for each point on the ray and an occupancy counter for the endpoint corresponding to the mapped point's location; the access and occupancy counters are stored in a two-dimensional matrix in the same proportion as the XZ mapping plane, each row-column entry of the matrix representing one cell location; after all key frames are processed, the unoccupied probability of each cell in the grid map is calculated as:
p_free(i,j) = 1 - occupied(i,j) / visible(i,j)
where occupied(i,j) and visible(i,j) denote the number of times a cell has been occupied and accessed, and p_free(i,j) denotes the probability that the cell is unoccupied; by setting two thresholds, cells are divided into three cases: (1) if p_free(i,j) is above a free threshold free_threshold, the cell is considered free; (2) if p_free(i,j) is below an occupancy threshold, the cell is considered occupied; (3) otherwise, the cell is treated as unknown;
step 5, optimizing the mapping unit
In the 3D-to-2D mapping process of step 4, a global counter counts all received key frames; in places where the number of local feature points exceeds a set threshold, a local counter is used for the received key frames, and its counts are then summed with all key-frame counts of the global counter through an AND operation, realizing counting optimization;
step 6, obstacle detection
Adopting a slope threshold to realize obstacle detection and mapping; to distinguish obstacles better, two different thresholds are used: a 15° threshold for the plane region after visual segmentation and an 80° threshold for the height region; to eliminate false obstacles, two constraints are added to the detection algorithm: (1) only points within a certain height above the mapping plane are considered; (2) the slope of the plane in which the estimated points lie is thresholded, and a verticality constraint is applied to each point: on the vertical line containing the point, the number of pixels must exceed a set threshold;
Searching for the boundary between free cells and unknown cells with a Canny edge detection algorithm, setting a fixed threshold, and setting all pixels corresponding to the edges detected in the probability map as occupied, thereby obtaining a 2D grid map for navigation and obstacle avoidance.
CN201910764913.3A 2019-08-19 2019-08-19 Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM Active CN110675307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910764913.3A CN110675307B (en) 2019-08-19 2019-08-19 Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910764913.3A CN110675307B (en) 2019-08-19 2019-08-19 Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM

Publications (2)

Publication Number Publication Date
CN110675307A CN110675307A (en) 2020-01-10
CN110675307B true CN110675307B (en) 2023-06-06

Family

ID=69075411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910764913.3A Active CN110675307B (en) 2019-08-19 2019-08-19 Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM

Country Status (1)

Country Link
CN (1) CN110675307B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3722991A1 (en) * 2019-04-10 2020-10-14 Axis AB Method, system, and device for detecting an object in a distorted image
CN111415417B (en) * 2020-04-14 2023-09-05 大连理工江苏研究院有限公司 Mobile robot topology experience map construction method integrating sparse point cloud
CN111664843A (en) * 2020-05-22 2020-09-15 杭州电子科技大学 SLAM-based intelligent storage checking method
CN112013845B (en) * 2020-08-10 2022-04-22 北京轩宇空间科技有限公司 Fast map updating method, device and storage medium adapting to unknown dynamic space
CN112025729B (en) * 2020-08-31 2022-02-15 杭州电子科技大学 Multifunctional intelligent medical service robot system based on ROS
CN112114966A (en) * 2020-09-15 2020-12-22 杭州未名信科科技有限公司 Light beam adjustment calculation method of visual SLAM
CN112162560A (en) * 2020-10-10 2021-01-01 金陵科技学院 Regression error anti-interference navigation control system based on nonlinear dictionary
CN112415522A (en) * 2020-10-27 2021-02-26 衡阳市智谷科技发展有限公司 Semantic SLAM method for real-time and perception of dynamic object information
CN112509056B (en) * 2020-11-30 2022-12-20 中国人民解放军32181部队 Dynamic battlefield environment real-time path planning system and method
CN112650242A (en) * 2020-12-22 2021-04-13 天津理工大学 Mobile robot path planning method based on hybrid algorithm
CN112880687A (en) * 2021-01-21 2021-06-01 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN113405547B (en) * 2021-05-21 2022-03-22 杭州电子科技大学 Unmanned aerial vehicle navigation method based on semantic VSLAM
CN113963335B (en) * 2021-12-21 2022-03-22 山东融瓴科技集团有限公司 Road surface obstacle detection method based on image and point cloud data
CN114474061B (en) * 2022-02-17 2023-08-04 新疆大学 Cloud service-based multi-sensor fusion positioning navigation system and method for robot
CN114280583B (en) * 2022-03-02 2022-06-17 武汉理工大学 Laser radar positioning accuracy verification method and system without GPS signal
CN114690769A (en) * 2022-03-07 2022-07-01 美的集团(上海)有限公司 Path planning method, electronic device, storage medium and computer program product
CN116840820B (en) * 2023-08-29 2023-11-24 上海仙工智能科技有限公司 Method and system for detecting 2D laser positioning loss and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102647351B1 (en) * 2017-01-26 2024-03-13 삼성전자주식회사 Modeling method and modeling apparatus using 3d point cloud

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933708A (en) * 2015-06-07 2015-09-23 浙江大学 Barrier detection method in vegetation environment based on multispectral and 3D feature fusion
CN109689173A (en) * 2016-04-26 2019-04-26 奇跃公司 It is tracked using the electromagnetism of augmented reality system
CN106530380A (en) * 2016-09-20 2017-03-22 长安大学 Ground point cloud segmentation method based on three-dimensional laser radar
CN107063258A (en) * 2017-03-07 2017-08-18 重庆邮电大学 A kind of mobile robot indoor navigation method based on semantic information
CN107240104A (en) * 2017-04-28 2017-10-10 深圳市速腾聚创科技有限公司 Point cloud data segmentation method and terminal
CN108509918A (en) * 2018-04-03 2018-09-07 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN109556598A (en) * 2018-11-23 2019-04-02 西安交通大学 It is a kind of that figure and navigation locating method are independently built based on array of ultrasonic sensors
CN109974707A (en) * 2019-03-19 2019-07-05 重庆邮电大学 A kind of indoor mobile robot vision navigation method based on improvement cloud matching algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deng Xin. Natural Human-Computer Interaction System and Application Based on Autonomous Navigation. China Masters' Theses Full-text Database, Information Science and Technology, 2019, No. 06, I140-147. *

Also Published As

Publication number Publication date
CN110675307A (en) 2020-01-10

Similar Documents

Publication Publication Date Title
CN110675307B (en) Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM
CN108983781B (en) Environment detection method in unmanned vehicle target search system
Pomerleau et al. Long-term 3D map maintenance in dynamic environments
EP3283843B1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
US9990736B2 (en) Robust anytime tracking combining 3D shape, color, and motion with annealed dynamic histograms
US8605998B2 (en) Real-time 3D point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping
EP2948927B1 (en) A method of detecting structural parts of a scene
Huang Review on LiDAR-based SLAM techniques
CN110874100A (en) System and method for autonomous navigation using visual sparse maps
US11703334B2 (en) Mobile robots to generate reference maps for localization
Erbs et al. Moving vehicle detection by optimal segmentation of the dynamic stixel world
CN112171675B (en) Obstacle avoidance method and device for mobile robot, robot and storage medium
CN114387576A (en) Lane line identification method, system, medium, device and information processing terminal
Deng et al. Improved closed-loop detection and Octomap algorithm based on RGB-D SLAM
Guo et al. Robust road boundary estimation for intelligent vehicles in challenging scenarios based on a semantic graph
Hu et al. A novel lidar inertial odometry with moving object detection for dynamic scenes
Yan et al. RH-Map: Online Map Construction Framework of Dynamic Object Removal Based on 3D Region-wise Hash Map Structure
He et al. Feature extraction from 2D laser range data for indoor navigation of aerial robot
CN113554705A (en) Robust positioning method for laser radar in changing scene
Pang et al. FLAME: Feature-likelihood based mapping and localization for autonomous vehicles
Vatavu et al. Real-time environment representation based on occupancy grid temporal analysis using a dense stereo-vision system
Bichsel et al. Discrete-continuous clustering for obstacle detection using stereo vision
Burger et al. Map-Aware SLAM with Sparse Map Features
Azim 3D perception of outdoor and dynamic environment using laser scanner
Han et al. Novel cartographer using an OAK-D smart camera for indoor robots location and navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant