CN113108773A - Grid map construction method integrating laser and visual sensor - Google Patents

Grid map construction method integrating laser and visual sensor

Info

Publication number
CN113108773A
CN113108773A (application CN202110433100.3A)
Authority
CN
China
Prior art keywords
map
grid
laser
algorithm
grid map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110433100.3A
Other languages
Chinese (zh)
Inventor
张天勇
王野鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202110433100.3A
Publication of CN113108773A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the technical field of simultaneous localization and mapping (SLAM) for robots. It addresses the problems that a single laser sensor cannot effectively map hollowed-out obstacles when building a map of the surrounding environment, and that a visual sensor cannot work in texture-less areas. A method combining laser and vision is provided so that the robot can effectively map environments containing hollowed-out or texture-less obstacles during map creation. The grid map obtained by Gmapping-SLAM is taken as the laser grid map; the three-dimensional point cloud extracted by the ORB-SLAM2 algorithm is converted from 3D to 2D to obtain a visual grid map; finally, the two grid maps are fused by a Bayesian estimation method. Combining laser and vision in this way effectively improves the accuracy with which the robot maps hollowed-out environments and texture-less areas and yields a grid map that better reflects the physical environment.

Description

Grid map construction method integrating laser and visual sensor
Technical Field
The invention relates to the technical field of simultaneous localization and mapping (SLAM) for robots, and in particular to a grid map construction method that fuses a laser sensor and a visual sensor.
Background
Among the many research topics of mobile robotics, simultaneous localization and mapping is one of the important parts and a hotspot of the field. Using a laser sensor for map creation and localization is common in current mobile robot research. However, when a local area is cluttered with sundries or contains hollowed-out obstacles, the laser sensor loses map information during map creation. A method is therefore provided that fuses the two-dimensional map data of a laser sensor with the visual information of an RGB camera. By fusing 2D laser information with 3D depth information, it solves the problems that a laser sensor alone has difficulty mapping environments with hollowed-out obstacles and that the tracking points of a visual sensor are easily lost in texture-less areas. A more accurate map is thus constructed, the localization of the robot is improved, the stability and accuracy of indoor autonomous navigation are further improved, and a foundation is laid for subsequent map creation in more complex indoor and outdoor environments.
Disclosure of Invention
The invention provides a grid map construction method that fuses a laser sensor and a visual sensor. It addresses the problems that a single laser sensor cannot effectively map hollowed-out obstacles when building a map of the surrounding environment and that a visual sensor cannot work in texture-less areas. A method combining laser and vision is provided so that the robot can effectively map environments containing hollowed-out or texture-less obstacles during map creation. The grid map obtained by Gmapping-SLAM is taken as the laser grid map; the three-dimensional point cloud extracted by the ORB-SLAM2 algorithm is converted from 3D to 2D to obtain the visual grid map; finally, the two grid maps are fused by a Bayesian estimation method. Combining laser and vision in this way effectively improves the accuracy with which the robot maps hollowed-out environments and texture-less areas and yields a grid map that better reflects the physical environment.
The invention is realized as follows.
1. A grid map construction method fusing a laser sensor and a visual sensor, characterized by comprising the following steps:
S1, building a map with the laser radar using the Gmapping-SLAM algorithm;
S2, building a map with the visual sensor using the ORB-SLAM algorithm;
S3, representing the grid map and fusing the sensor data with the Bayesian method;
S4, simulating and analyzing the experimental results.
2. Step S1 is specifically as follows:
The Gmapping algorithm is based on a particle filter SLAM framework. Building on the RBPF particle filter algorithm, Gmapping first localizes the robot itself and then builds the environment map. It uses a resampling particle-processing method that fully accounts for the particle-dissipation problem and for the property that particle weights are updated and converge gradually, thereby reducing the uncertainty of the robot's self-localization. Robot odometry information is thus obtained and the quality of the laser beam lattice is greatly improved, so the resulting map is more complete and accurate. Based on the laser radar data, the next round of particle filter sampling is carried out from the currently estimated robot pose.
3. Step S2 is specifically as follows:
The specific flow of ORB-SLAM is as follows. First, ORB features of the current frame are extracted and matched to obtain a rotation matrix R, a translation vector t and a 3D point cloud. The camera pose is then tracked using a motion-model tracking mode and a reference-keyframe tracking mode; if tracking fails, global relocalization is used until the initial pose estimation is completed. The current frame is then matched against local map points, and its pose and the map points are optimized again according to the matching result, yielding the optimized pose of the current frame and a decision on whether it is a keyframe. The BoW vector of the feature points is computed and, after processing, the keyframe is inserted into the map. Newly added map points that do not meet the requirements are removed, and new map points are created by triangulation from the relation between the current keyframe, taken as the last frame in the queue, and its co-visible keyframes. Local bundle adjustment is then performed after a neighbourhood search, and finally redundant keyframes are removed.
4. Step S3 is specifically as follows:
The result of fusing the local laser grid map with the local visual grid map is used to update and build the grid map used for navigation. Updating the sensor data means combining the latest probability with the sensor model $p(r_t \mid C_{i,j})$ to re-estimate the state of each stored cell. For a grid map cell $C_{i,j}$, let its measurement information be $r^t = (r_t, r_{t-n}, \ldots, r_0)$; combining this with the recursive Bayesian rule gives the updated estimate $P_o$. When the grid map is initialized, the priors of all cells $C_{i,j}$ are set equal, i.e.

$$P(C_{i,j} = \mathrm{occ}) = P(C_{i,j} = \mathrm{emp}) = 0.5$$

When the recursion formula is applied to the occupancy grid representation, the fused update formula can be simplified to:

$$P(C_{i,j} = \mathrm{occ} \mid r^t) = \frac{p(r_t \mid C_{i,j} = \mathrm{occ})\, P(C_{i,j} = \mathrm{occ} \mid r^{t-1})}{p(r_t \mid C_{i,j} = \mathrm{occ})\, P(C_{i,j} = \mathrm{occ} \mid r^{t-1}) + p(r_t \mid C_{i,j} = \mathrm{emp})\, P(C_{i,j} = \mathrm{emp} \mid r^{t-1})}$$

where $p(r_t \mid C_{i,j} = \mathrm{occ})$ is the conditional probability that the grid cell measured by the sensor is occupied, $P(C_{i,j} = \mathrm{occ} \mid r^{t-1})$ is the prior probability that the cell is occupied in the previous grid map, and $P_o$ is the updated estimate given the sensor measurement distance R, i.e. the conditional probability of the inverse sensor model that the grid cell is occupied.
In the mapping method, the Gmapping algorithm generates a two-dimensional laser local map from the laser data, and the ORB algorithm generates a three-dimensional local point cloud map from the RGB-D data. Using the octomap_server function in ROS (based on OctoMap), the three-dimensional point cloud map is transformed by projection into a projected two-dimensional grid map. According to the Bayesian rule, the grid map part generated by the ORB algorithm is added in turn to update the Gmapping map part, and a coordinate mapping between corresponding cells of the two maps is formed by comparing the origin coordinates in the grid map data. For a cell in the occupied state, the states of its neighbouring cells are considered: if the number of occupied neighbouring cells is larger than a certain threshold, the cell is judged to be occupied; otherwise it is judged to be idle. The laser and visual cells at the same position in the map are then fused.
The fusion is based on the Bayesian method. Let O denote the obstacle event, i.e. an occupied cell, and $\bar{O}$ denote a cell in the idle state; let E denote the event that an obstacle is present. From the conditional probabilities $P(E \mid O)$ and $P(E \mid \bar{O})$, global observation information is obtained, so the map can be updated using the following probability formulas:

$$P(E \mid O) = \frac{P(O \mid E)\, P(E)}{P(O \mid E)\, P(E) + P(O \mid \bar{E})\, P(\bar{E})}$$

$$P(E \mid \bar{O}) = \frac{P(\bar{O} \mid E)\, P(E)}{P(\bar{O} \mid E)\, P(E) + P(\bar{O} \mid \bar{E})\, P(\bar{E})}$$

where $P(E)$ denotes the prior probability, and $P(O \mid E)$ and $P(\bar{O} \mid E)$ represent the observation model.
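As an illustrative numerical check (the probability values below are assumed for illustration and are not specified by the patent), take a prior $P(E) = 0.5$ and an observation model with $P(O \mid E) = 0.9$ and $P(O \mid \bar{E}) = 0.2$. A cell reported as occupied then updates to

$$P(E \mid O) = \frac{0.9 \times 0.5}{0.9 \times 0.5 + 0.2 \times 0.5} = \frac{0.45}{0.55} \approx 0.82,$$

so a single agreeing measurement already raises the occupancy belief well above the prior, while a measurement reporting the cell as free lowers it through $P(E \mid \bar{O})$ in the same way.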
The invention has the following beneficial effects:
the invention provides a raster map construction method fusing laser and a visual sensor, wherein a Gmapping algorithm separates the self-positioning process and the environment mapping process of a robot, the uncertainty of the self-positioning of the robot is reduced, the acquisition of odometer information is realized, the quality of a laser dot matrix is greatly improved, an ORB algorithm has stronger interference resistance and low power consumption, the object detection speed based on characteristics can be accelerated, the accuracy of the map construction of the robot on a hollowed-out environment and a non-texture area can be effectively improved after the data of a local laser raster map and a local visual raster map are fused by utilizing a Bayesian method, and the raster map which can better reflect physical environment information is obtained.
Drawings
FIG. 1 is a flow chart of the Gmapping algorithm;
FIG. 2 is a visual SLAM framework;
FIG. 3 is a process framework for fusing laser data with visual data;
FIG. 4 is an experimental simulation environment;
FIG. 5 is a grid map obtained by the Gmapping algorithm;
FIG. 6 is a grid map obtained by the ORB algorithm;
FIG. 7 is a map after fusion of laser data and visual data;
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the accompanying drawings.
S1, the Gmapping algorithm is based on a particle filter SLAM framework and is a commonly used open-source laser SLAM algorithm. Building on the RBPF particle filter algorithm, Gmapping first localizes the robot itself and then builds the environment map. The method applies a resampling particle-processing technique that fully accounts for the particle-dissipation problem and for the property that particle weights are updated and converge gradually, and it reduces the uncertainty of the robot's self-localization. Robot odometry information is thus obtained and the quality of the laser beam lattice is greatly improved, so the resulting map is more complete and accurate. The Gmapping algorithm flow is shown in figure 1. Based on the laser radar data, the next round of particle filter sampling is carried out from the currently estimated robot pose.
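As an illustration of the reweighting and resampling step just described, the following minimal Python sketch (not part of the patent; the scan-likelihood function, variable names and resampling threshold are assumptions made for illustration) reweights pose particles with a laser measurement likelihood and resamples only when the effective particle number drops, which is the mechanism that limits particle dissipation:

    import numpy as np

    def rbpf_update(particles, weights, scan_likelihood, resample_ratio=0.5):
        # particles: (N, 3) array of [x, y, theta] pose hypotheses
        # weights:   (N,) importance weights from the previous step
        # scan_likelihood: function pose -> p(z_t | pose, map), the laser model
        # Reweight each particle by how well the current scan matches its map.
        weights = weights * np.array([scan_likelihood(p) for p in particles])
        weights /= np.sum(weights)
        # Effective sample size: resample only when the particle set has
        # degenerated, which limits the particle-dissipation problem.
        n_eff = 1.0 / np.sum(weights ** 2)
        if n_eff < resample_ratio * len(particles):
            idx = np.random.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights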
S2, the VSLAM framework consists of five modules: visual information acquisition, visual odometry, back-end optimization, loop closure detection and map construction; the framework is shown in figure 2. The ORB algorithm uses the binary descriptor BRIEF, which effectively speeds up the image feature extraction stage, improves the rapidity of the ORB algorithm and meets the real-time requirement of visual SLAM. The specific flow of ORB-SLAM is as follows. First, ORB features of the current frame are extracted and matched to obtain a rotation matrix R, a translation vector t and a 3D point cloud. The camera pose is then tracked using a motion-model tracking mode and a reference-keyframe tracking mode; if tracking fails, global relocalization is used until the initial pose estimation is completed. The current frame is then matched against local map points, and its pose and the map points are optimized again according to the matching result, yielding the optimized pose of the current frame and a decision on whether it is a keyframe. The BoW vector of the feature points is computed and, after processing, the keyframe is inserted into the map. Newly added map points that do not meet the requirements are removed, and new map points are created by triangulation from the relation between the current keyframe, taken as the last frame in the queue, and its co-visible keyframes. Local bundle adjustment is then performed after a neighbourhood search, and finally redundant keyframes are removed.
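The front-end step of extracting and matching ORB features of the current frame to obtain R and t can be illustrated with OpenCV as below; this is an illustrative sketch rather than the ORB-SLAM2 implementation, and the camera intrinsic matrix K and the two input frames are assumed to be available:

    import cv2
    import numpy as np

    def orb_two_view_pose(img1, img2, K):
        # Oriented FAST keypoints + rotated BRIEF binary descriptors.
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        # Brute-force Hamming matching of the binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # Essential matrix with RANSAC, then recover rotation R and translation t.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

In ORB-SLAM2 itself the matched points are further triangulated into the 3D point cloud and refined by local bundle adjustment; the sketch only shows the two-view pose recovery.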
S3, the Bayesian method is a statistical fusion algorithm for estimating an unknown state vector X of dimension n from conditional, or posterior, probabilities, where the observed or measured state vector is denoted Z. Z contains probabilistic information about X, described by the probability density function P(Z | X), called the likelihood function or sensor model. It is an observation-based objective function of the sensor, determines the extent to which the posterior probability can change, and is evaluated by offline experiments or by using available information about the problem. If information about the state X is available before any observation is made, the likelihood function can be adjusted to provide more accurate results; this prior information about X, called the prior probability, is not based on observed data and is therefore subjective. The Bayesian method provides a way to compute the posterior probability distribution: assuming the state at time K is $X_K$ and K sets of measurements $Z^K = \{Z_1, \ldots, Z_K\}$ are known, the distribution is as follows:

$$P(X_K \mid Z^K) = \frac{P(Z_K \mid X_K)\, P(X_K \mid Z^{K-1})}{P(Z_K \mid Z^{K-1})}$$

In the above formula, $P(Z_K \mid X_K)$ is the likelihood function based on the sensor model; $P(X_K \mid Z^{K-1})$ is the prior distribution function based on the given transition system model; and $P(Z_K \mid Z^{K-1})$ normalizes the probability density function. To simplify the model, the laser beam is approximated as a ray of determined length that marks the grid cells. A cell of the grid G is $G_{i,j}$, with $1 \le i = x_{grid} \le x_{max}$ and $1 \le j = y_{grid} \le y_{max}$. In the ray projection model, r is the farthest measurement range of the sensor, $\varepsilon$ is the measurement range error, $\mu$ is the minimum measurement value, and $\delta$ denotes the distance between the sensor and the grid cell. $P(G_{i,j} = \mathrm{emp})$ denotes the probability that grid cell $G_{i,j}$ is empty, $P(G_{i,j} = \mathrm{occ})$ the probability that it is occupied, and $P_{\mathrm{emp}}$ and $P_{\mathrm{occ}}$ are fixed probabilities:

$$P(G_{i,j} = \mathrm{emp}) = P_{\mathrm{emp}}, \quad \delta \in [\mu, r - \varepsilon]$$

$$P(G_{i,j} = \mathrm{occ}) = P_{\mathrm{occ}}, \quad \delta \in [r - \varepsilon, r + \varepsilon]$$
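A minimal sketch of this ray-projection inverse sensor model is given below (illustrative only; the grid resolution and the fixed values chosen for $P_{\mathrm{emp}}$ and $P_{\mathrm{occ}}$ are assumptions, not values fixed by the patent). Cells traversed by the beam up to $r - \varepsilon$ are marked with the empty probability and cells in the band $[r - \varepsilon, r + \varepsilon]$ with the occupied probability:

    import numpy as np

    def inverse_sensor_ray(grid, origin, angle, r, resolution=0.05,
                           mu=0.1, eps=0.05, p_emp=0.3, p_occ=0.8):
        # grid: 2D array of cell occupancy probabilities (row = y, col = x)
        # origin: (x0, y0) sensor position in metres; angle: beam direction in rad
        # r: measured range; mu: minimum valid range; eps: range error band
        x0, y0 = origin
        for d in np.arange(mu, r + eps, resolution):
            i = int((y0 + d * np.sin(angle)) / resolution)
            j = int((x0 + d * np.cos(angle)) / resolution)
            if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
                break
            if d < r - eps:
                grid[i, j] = p_emp   # cell traversed by the beam: probably empty
            else:
                grid[i, j] = p_occ   # cell around the measured range: probably occupied
        return grid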
for local laser gratingUpdating the result of the fusion of the grid map and the local visual grid map data to establish a grid map for navigation, wherein the updating of the sensor data refers to the combination of the latest probability and the sensor model
Figure BDA0003032134670000058
To re-estimate the state of the storage unit according to the grid map unit Ci,jState information setting r oft=(rt,rt-n,…,r0) Combining with recursive Bayesian rule to obtain the above-mentioned data updated estimation value PoIn the initialization of the grid map, let Ci,jAre equal a priori, i.e.
Figure BDA0003032134670000059
When a recurrence formula is applied to the representation of the occupancy grid, the fused update formula can be simplified to the following:
Figure BDA00030321346700000510
in the formula:
Figure BDA00030321346700000511
the conditional probability that the grid cell measured by the sensor is in an occupied state is referred to;
Figure BDA00030321346700000512
is the prior probability of being in an occupied state cell in the prior grid map; p0Which is an updated estimate of the sensor measurement distance R or the conditional probability of the inverse sensor model occupying the grid cell, the fusion flow framework is shown in figure 3,
In the mapping method, the Gmapping algorithm generates a two-dimensional laser local map from the laser data, and the ORB algorithm generates a three-dimensional local point cloud map from the RGB-D data. Using the octomap_server function in ROS (based on OctoMap), the three-dimensional point cloud map is transformed by projection into a projected two-dimensional grid map. According to the Bayesian rule, the grid map part generated by the ORB algorithm is added in turn to update the Gmapping map part, and a coordinate mapping between corresponding cells of the two maps is formed by comparing the origin coordinates in the grid map data. For a cell in the occupied state, the states of its neighbouring cells are considered: if the number of occupied neighbouring cells is larger than a certain threshold, the cell is judged to be occupied; otherwise it is judged to be idle. The laser and visual cells at the same position in the map are then fused.
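The fusion pipeline described above (project the ORB point cloud onto a 2D grid, align the two grids through their origins, apply the neighbourhood check, then fuse cell by cell) can be sketched as follows. This is an illustrative reconstruction rather than the patent's code; the resolution, height band, neighbourhood threshold and probability values are assumptions:

    import numpy as np

    def project_cloud_to_grid(points, resolution, shape, origin=(0.0, 0.0),
                              z_min=0.05, z_max=1.5):
        # Project a 3D point cloud (N, 3) onto a 2D grid (0.5 = unknown).
        grid = np.full(shape, 0.5)
        for x, y, z in points:
            if z_min <= z <= z_max:                  # keep obstacle-height points
                i = int((y - origin[1]) / resolution)
                j = int((x - origin[0]) / resolution)
                if 0 <= i < shape[0] and 0 <= j < shape[1]:
                    grid[i, j] = 0.9                 # mark the cell as occupied
        return grid

    def fuse_grids(laser_grid, visual_grid, occ_thresh=0.65, neighbour_count=3):
        # Fuse laser and visual grids cell by cell with a binary Bayes update,
        # accepting a visual "occupied" cell only if enough neighbours agree.
        fused = np.clip(np.asarray(laser_grid, dtype=float), 0.01, 0.99)
        H, W = fused.shape
        for i in range(H):
            for j in range(W):
                p_meas = float(np.clip(visual_grid[i, j], 0.01, 0.99))
                if p_meas > occ_thresh:
                    win = visual_grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                    if np.count_nonzero(win > occ_thresh) < neighbour_count:
                        p_meas = 0.5                 # too few occupied neighbours: treat as unknown
                num = p_meas * fused[i, j]
                fused[i, j] = num / (num + (1.0 - p_meas) * (1.0 - fused[i, j]))
        return fused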
The fusion is based on the Bayesian method. Let O denote the obstacle event, i.e. an occupied cell, and $\bar{O}$ denote a cell in the idle state; let E denote the event that an obstacle is present. From the conditional probabilities $P(E \mid O)$ and $P(E \mid \bar{O})$, global observation information is obtained, so the map can be updated using the following probability formulas:

$$P(E \mid O) = \frac{P(O \mid E)\, P(E)}{P(O \mid E)\, P(E) + P(O \mid \bar{E})\, P(\bar{E})}$$

$$P(E \mid \bar{O}) = \frac{P(\bar{O} \mid E)\, P(E)}{P(\bar{O} \mid E)\, P(E) + P(\bar{O} \mid \bar{E})\, P(\bar{E})}$$

where $P(E)$ denotes the prior probability, and $P(O \mid E)$ and $P(\bar{O} \mid E)$ represent the observation model.
S4, the simulation experiment is carried out on the Gazebo platform. The selected robot model is XBot-U; the platform is equipped with a motion control system, a two-dimensional laser radar for point cloud ranging, ultrasonic ranging, infrared ranging, a lifting platform controller, a high-definition camera (RGB, with depth optional) and a robot vision servo pan-tilt. The experimental simulation environment is shown in figure 4; it measures 10 m by 8 m and contains texture-less obstacles and hollowed-out obstacles. Experiments in this environment yield the Gmapping grid map (figure 5) and the ORB grid map obtained by projecting the ORB point cloud map (figure 6). The Gmapping grid map accurately reflects most of the simulation environment apart from the hollowed-out obstacles, while the ORB grid map roughly reflects the textured areas of the environment. The two grid maps are then fused using the fusion method described above (figure 7). From the analysis of the fusion result, the fused grid map adds the hollowed-out obstacles missing from the Gmapping grid map and the texture-less areas missing from the ORB grid map, so it reflects the environment more faithfully, which is of important value for robot navigation, obstacle avoidance and other applications in complex indoor and outdoor environments.
The above description is only one embodiment of the present invention and is not intended to limit the scope of the invention. All equivalent structural or process modifications made using the contents of this specification and the drawings, or applied directly or indirectly in other related technical fields, are likewise included in the scope of protection of the present invention.

Claims (4)

1. A grid map construction method fusing a laser sensor and a visual sensor, characterized by comprising the following steps:
S1, building a map with the laser radar using the Gmapping-SLAM algorithm;
S2, building a map with the visual sensor using the ORB-SLAM algorithm;
S3, representing the grid map and fusing the sensor data with the Bayesian method;
S4, simulating and analyzing the experimental results.
2. The grid map construction method fusing a laser sensor and a visual sensor according to claim 1, wherein step S1 is specifically as follows:
The Gmapping algorithm is based on a particle filter SLAM framework and is a commonly used open-source laser SLAM algorithm. Building on the RBPF particle filter algorithm, Gmapping first localizes the robot itself and then builds the environment map. It uses a resampling particle-processing method that fully accounts for the particle-dissipation problem and for the property that particle weights are updated and converge gradually, which reduces the uncertainty of the robot's self-localization. Robot odometry information is thus obtained and the quality of the laser beam lattice is greatly improved, so the resulting map is more complete and accurate.
3. The grid map construction method fusing a laser sensor and a visual sensor according to claim 1, wherein step S2 is specifically as follows:
Compared with traditional SIFT, ORB (Oriented FAST and Rotated BRIEF) has stronger feature point matching capability, a smaller computational load and stronger anti-interference capability, and can meet real-time requirements, for example low-power image tracking and panorama stitching, or accelerating feature-based object detection. To make up for the FAST detector's lack of orientation, the ORB algorithm adopts the binary descriptor BRIEF, which effectively speeds up the image feature extraction stage, improves the rapidity of the ORB algorithm and meets the real-time requirement of visual SLAM. The specific flow of ORB-SLAM is as follows: first, ORB features of the current frame are extracted and matched to obtain a rotation matrix R, a translation vector t and a 3D point cloud. The camera pose is then tracked using a motion-model tracking mode and a reference-keyframe tracking mode; if tracking fails, global relocalization is used until the initial pose estimation is completed. The current frame is then matched against local map points, and its pose and the map points are optimized again according to the matching result, yielding the optimized pose of the current frame and a decision on whether it is a keyframe. The BoW vector of the feature points is computed and, after processing, the keyframe is inserted into the map. Newly added map points that do not meet the requirements are removed, and new map points are created by triangulation from the relation between the current keyframe, taken as the last frame in the queue, and its co-visible keyframes. Local bundle adjustment is then performed after a neighbourhood search, and finally redundant keyframes are removed.
4. The grid map construction method fusing a laser sensor and a visual sensor according to claim 1, wherein step S3 is specifically as follows:
The result of fusing the local laser grid map with the local visual grid map is used to update and build the grid map used for navigation. Updating the sensor data means combining the latest probability with the sensor model $p(r_t \mid C_{i,j})$ to re-estimate the state of each stored cell. For a grid map cell $C_{i,j}$, let its measurement information be $r^t = (r_t, r_{t-n}, \ldots, r_0)$; combining this with the recursive Bayesian rule gives the updated estimate $P_o$. When the grid map is initialized, the priors of all cells $C_{i,j}$ are set equal, i.e.

$$P(C_{i,j} = \mathrm{occ}) = P(C_{i,j} = \mathrm{emp}) = 0.5$$

When the recursion formula is applied to the occupancy grid representation, the fused update formula can be simplified to:

$$P(C_{i,j} = \mathrm{occ} \mid r^t) = \frac{p(r_t \mid C_{i,j} = \mathrm{occ})\, P(C_{i,j} = \mathrm{occ} \mid r^{t-1})}{p(r_t \mid C_{i,j} = \mathrm{occ})\, P(C_{i,j} = \mathrm{occ} \mid r^{t-1}) + p(r_t \mid C_{i,j} = \mathrm{emp})\, P(C_{i,j} = \mathrm{emp} \mid r^{t-1})}$$

where $p(r_t \mid C_{i,j} = \mathrm{occ})$ is the conditional probability that the grid cell measured by the sensor is occupied; $P(C_{i,j} = \mathrm{occ} \mid r^{t-1})$ is the prior probability that the cell is occupied in the previous grid map; and $P_o$ is the updated estimate given the sensor measurement distance R, i.e. the conditional probability of the inverse sensor model that the grid cell is occupied. The Gmapping algorithm generates a two-dimensional laser local map from the laser data, and the ORB algorithm generates a three-dimensional local point cloud map from the RGB-D data. Using the octomap_server function in ROS (based on OctoMap), the three-dimensional point cloud map is transformed by projection into a projected two-dimensional grid map. According to the Bayesian rule, the grid map part generated by the ORB algorithm is added in turn to update the Gmapping map part, and a coordinate mapping between corresponding cells of the two maps is formed by comparing the origin coordinates in the grid map data. For a cell in the occupied state, the states of its neighbouring cells are considered: if the number of occupied neighbouring cells is larger than a certain threshold, the cell is judged to be occupied; otherwise it is judged to be idle. The laser and visual cells at the same position in the map are then fused.
CN202110433100.3A 2021-04-22 2021-04-22 Grid map construction method integrating laser and visual sensor Pending CN113108773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110433100.3A CN113108773A (en) 2021-04-22 2021-04-22 Grid map construction method integrating laser and visual sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110433100.3A CN113108773A (en) 2021-04-22 2021-04-22 Grid map construction method integrating laser and visual sensor

Publications (1)

Publication Number Publication Date
CN113108773A true CN113108773A (en) 2021-07-13

Family

ID=76719320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110433100.3A Pending CN113108773A (en) 2021-04-22 2021-04-22 Grid map construction method integrating laser and visual sensor

Country Status (1)

Country Link
CN (1) CN113108773A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111089585A (en) * 2019-12-30 2020-05-01 哈尔滨理工大学 Mapping and positioning method based on sensor information fusion
CN111476286A (en) * 2020-04-02 2020-07-31 哈尔滨工程大学 Map construction method for mobile robot
CN112486178A (en) * 2020-12-03 2021-03-12 哈尔滨理工大学 Dynamic path planning method based on directed D (delta) algorithm
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曾键 (Zeng Jian): "A grid map construction method fusing laser and visual sensors" (一种融合激光与视觉传感器的栅格地图构建方法), 《工业控制计算机》 (Industrial Control Computer) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113777615A (en) * 2021-07-19 2021-12-10 派特纳(上海)机器人科技有限公司 Positioning method and system of indoor robot and cleaning robot
CN113777615B (en) * 2021-07-19 2024-03-29 派特纳(上海)机器人科技有限公司 Positioning method and system of indoor robot and cleaning robot
CN113763548A (en) * 2021-08-17 2021-12-07 同济大学 Poor texture tunnel modeling method and system based on vision-laser radar coupling
CN113763548B (en) * 2021-08-17 2024-02-27 同济大学 Vision-laser radar coupling-based lean texture tunnel modeling method and system
CN113804182B (en) * 2021-09-16 2023-09-29 重庆邮电大学 Grid map creation method based on information fusion
CN113804182A (en) * 2021-09-16 2021-12-17 重庆邮电大学 Grid map creating method based on information fusion
CN113807795A (en) * 2021-10-19 2021-12-17 上海擎朗智能科技有限公司 Method for identifying congestion in robot distribution scene, robot and distribution system
CN114167866A (en) * 2021-12-02 2022-03-11 桂林电子科技大学 Intelligent logistics robot and control method
CN114167866B (en) * 2021-12-02 2024-04-12 桂林电子科技大学 Intelligent logistics robot and control method
CN114202567A (en) * 2021-12-03 2022-03-18 江苏集萃智能制造技术研究所有限公司 Point cloud processing obstacle avoidance method based on vision
CN115222905A (en) * 2022-07-05 2022-10-21 苏州大学 Air-ground multi-robot map fusion method based on visual features
CN115222905B (en) * 2022-07-05 2024-03-22 苏州大学 Air-ground multi-robot map fusion method based on visual features
CN115546348A (en) * 2022-11-24 2022-12-30 上海擎朗智能科技有限公司 Robot mapping method and device, robot and storage medium
CN115546348B (en) * 2022-11-24 2023-03-24 上海擎朗智能科技有限公司 Robot mapping method and device, robot and storage medium
CN116879870A (en) * 2023-06-08 2023-10-13 哈尔滨理工大学 Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar

Similar Documents

Publication Publication Date Title
CN113108773A (en) Grid map construction method integrating laser and visual sensor
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN110097553B (en) Semantic mapping system based on instant positioning mapping and three-dimensional semantic segmentation
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
CN111429574B (en) Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
WO2021233029A1 (en) Simultaneous localization and mapping method, device, system and storage medium
CN105865449B (en) Hybrid positioning method of mobile robot based on laser and vision
CN110645974B (en) Mobile robot indoor map construction method fusing multiple sensors
CN105843223B (en) A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
CN112525202A (en) SLAM positioning and navigation method and system based on multi-sensor fusion
CN110675307A (en) Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM
Mu et al. Research on SLAM algorithm of mobile robot based on the fusion of 2D LiDAR and depth camera
CN111476286A (en) Map construction method for mobile robot
CN112444246B (en) Laser fusion positioning method in high-precision digital twin scene
CN113483747A (en) Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot
CN108564600B (en) Moving object posture tracking method and device
CN103901891A (en) Dynamic particle tree SLAM algorithm based on hierarchical structure
CN114549738A (en) Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium
CN111982124B (en) Deep learning-based three-dimensional laser radar navigation method and device in glass scene
Dhawale et al. Fast monte-carlo localization on aerial vehicles using approximate continuous belief representations
US20220168900A1 (en) Visual positioning method and system based on gaussian process, and storage medium
CN112132950B (en) Three-dimensional point cloud scene updating method based on crowdsourcing image
Buck et al. Capturing uncertainty in monocular depth estimation: Towards fuzzy voxel maps
Liu et al. A localizability estimation method for mobile robots based on 3d point cloud feature
CN114047766B (en) Mobile robot data acquisition system and method for long-term application of indoor and outdoor scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210713