CN113804182A - Grid map creating method based on information fusion - Google Patents

Grid map creating method based on information fusion

Info

Publication number
CN113804182A
CN113804182A
Authority
CN
China
Prior art keywords
laser
data
point cloud
grid
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111087246.3A
Other languages
Chinese (zh)
Other versions
CN113804182B (en)
Inventor
李艳生
匡衡阳
万勇
张毅
杨美美
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202111087246.3A priority Critical patent/CN113804182B/en
Publication of CN113804182A publication Critical patent/CN113804182A/en
Application granted granted Critical
Publication of CN113804182B publication Critical patent/CN113804182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to a grid map creation method based on information fusion, belonging to the field of indoor mobile robot map creation, and comprising the following steps: S1: acquiring laser and image data with a mobile robot platform; S2: obtaining object class label information and a depth image using tiny-yolov4; S3: converting the pixel coordinates corresponding to the object class labels into the laser coordinate system using a coordinate conversion relation; S4: fitting the laser point cloud data to the coordinate points mapped in step S3; S5: classifying and identifying the point cloud data according to the fitting result of step S4, and filtering out dynamic laser point cloud data; S6: creating a two-dimensional grid map from the filtered point cloud data. The method improves the robustness of the map, marks semantic labels in the map, and improves both the robot's perception of the environment and its localization capability in complex environments.

Description

Grid map creating method based on information fusion
Technical Field
The invention belongs to the field of map building, semantic understanding and obstacle removing of indoor mobile robots, and relates to a grid map creating method based on information fusion.
Background
With the development of science and technology, mobile robots have advanced rapidly and are increasingly widely used in commerce and industry. For the key problem of mobile robot localization and mapping, SLAM is currently the most promising solution. SLAM, i.e., simultaneous localization and mapping, solves the problem of a robot locating itself and building a map in an unknown environment. SLAM technology is already widely deployed, for example in indoor scenarios such as floor-sweeping robots and logistics-warehouse AGV robots, and in outdoor scenarios such as unmanned aerial vehicle navigation, unmanned vehicles, and unmanned ships.
SLAM technology has been widely applied in the service and industrial sectors, but conventional SLAM has revealed some shortcomings. Laser SLAM, based on a laser sensor, benefits from the high ranging accuracy of lidar, so maps constructed from its data are also highly accurate; compared with purely visual approaches, laser consumes fewer computing resources in the mapping stage and places lower demands on hardware, which is why laser SLAM is the current mainstream solution. However, a grid map constructed by lidar carries limited information and relatively weak environment-perception capability; moreover, such a map may contain dynamic obstacles, which reduces the fidelity and robustness of the map, and an occupancy grid map built from laser profile information alone gives poor localization performance in large scenes. In recent years, with the rapid development of deep learning, SLAM has begun to incorporate related techniques such as target recognition.
Disclosure of Invention
In view of this, the invention aims to provide a grid map creation method based on information fusion that incorporates deep learning: first, it removes dynamic radar data noise points and improves the robustness of the map; second, it marks semantic label information, enriching the environment information carried by the occupancy grid and improving the robot's perception of the environment; third, it improves the robot's localization capability in large environments using the semantic grid map.
In order to achieve the purpose, the invention provides the following technical scheme:
a grid map creating method based on information fusion comprises the following steps:
s1: acquiring laser and image data by using a mobile robot platform;
s2: utilizing tiny-yolov4 to obtain object class label information and a depth image;
s3: converting pixel coordinates corresponding to the class labels of the objects into a laser coordinate system by utilizing a coordinate conversion relation;
s4: fitting the laser point cloud data according to the coordinate points mapped in step S3;
s5: classifying and identifying the point cloud data according to the fitting result of step S4, and filtering out dynamic laser point cloud data;
s6: creating a two-dimensional grid map from the filtered point cloud data.
Further, the step S1 specifically includes the following steps:
s11: building a mobile robot platform that acquires laser point cloud data with a lidar, image data with a camera, mileage information with an odometer, and motion position information with a gyroscope, and that performs overall control with a computing processor;
s12: and acquiring original data through the mobile robot platform.
Further, step S2 specifically includes the following steps:
s21: aligning the outputs of the RGB image and the depth image captured by the camera;
s22: feeding the RGB image topic into the tiny-yolov4 network for detection, and marking the category labels of environmental objects in image coordinates.
Further, step S3 specifically includes the following steps:
s31: calculating a central pixel point of the detection frame, and acquiring coordinates (u, v) of a corresponding depth image according to the aligned depth image;
s32: converting the depth pixel corresponding to the coordinate into a camera coordinate system according to the formula (1), and then converting the depth pixel into a laser coordinate system according to the camera coordinate system;
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{1}$$
where f is the camera focal length, (u_0, v_0) is the principal point, and Z_c is the projection of the target on the Z-axis of the camera coordinate system.
Further, in step S4 the discrete point cloud data is fitted so that laser scan points on the same object are grouped into the same class. When a projected point cannot be matched directly to a laser point, the Euclidean distance is computed to fit and mark the point cloud data; if the projected point comes from a dynamic label, error amplification is applied: the laser data inside the circle centered on the projected point with radius threshold θ, i.e., the AB segment, is considered unreliable and is marked.
Further, in step S5, according to the fitting result of step S4, the marked radar point cloud data is classified by its dynamics into static, semi-static, and dynamic; the dynamic laser point cloud data is filtered out, and the static and semi-static data are retained as the mapping data for the subsequent steps.
Further, in step S6 a map is built with an improved Gmapping algorithm from the radar data passed in from step S5 and the odometer data passed in from step S1; the Gmapping algorithm is based on the RBPF particle filter, improves the proposal distribution in the robot state-estimation stage, and sets a resampling threshold so that resampling is performed when the rate of change of the particle weights exceeds the threshold;
tiny-yolov4 is used for identification, the corresponding semantic point cloud data is obtained through coordinate conversion and fitting, the original laser data is divided into three classes, and the static and semi-static laser data are passed into the weight update and map update of the Gmapping algorithm;
the created grid map is a binary map, where P(s=0) denotes the probability of the free state and P(s=1) the probability of the occupied state; an unknown grid is represented by a uniform fixed probability value, and the ratio of the two state probabilities defines the odds odd(s) of the grid state:
$$\mathrm{odd}(s) = \frac{P(s=1)}{P(s=0)} \tag{2}$$
when a sensor measurement z ∈ {0, 1} is observed, the odds odd(s) of the corresponding grid state is updated; the updated expression is:
$$\mathrm{odd}(s \mid z) = \frac{P(s=1 \mid z)}{P(s=0 \mid z)} \tag{3}$$
according to the Bayes conditional probability formula:
$$P(s=1 \mid z) = \frac{P(z \mid s=1)\,P(s=1)}{P(z)} \tag{4}$$
$$P(s=0 \mid z) = \frac{P(z \mid s=0)\,P(s=0)}{P(z)} \tag{5}$$
substituting formulas (4) and (5) into formula (3) and taking the logarithm of both sides:
$$\log \mathrm{odd}(s \mid z) = \log \frac{P(z \mid s=1)}{P(z \mid s=0)} + \log \mathrm{odd}(s) \tag{6}$$
in formula (6), the term $\log \frac{P(z \mid s=1)}{P(z \mid s=0)}$ is the measurement-value model; it is a fixed value that depends on the sensor. When the sensor observes the grid as occupied it takes the value $\mathrm{lo}_{occ} = \log \frac{P(z=1 \mid s=1)}{P(z=1 \mid s=0)}$, and when the sensor observes the grid as free it takes the value $\mathrm{lo}_{free} = \log \frac{P(z=0 \mid s=1)}{P(z=0 \mid s=0)}$;
the state of grid s is denoted $S = \log \mathrm{odd}(s)$, with $S^{+}$ and $S^{-}$ denoting the state of grid s after and before the measurement, respectively; formula (6) reduces to:
$$S^{+} = S^{-} + \log \frac{P(z \mid s=1)}{P(z \mid s=0)} \tag{7}$$
$$S_{init} = \log \frac{P(s=1)}{P(s=0)} = \log \frac{0.5}{0.5} = 0 \tag{8}$$
in the initial state of each grid, the default free and occupied probabilities are equal;
according to formula (7), when the occupation probability of each grid is computed, $S^{+}$ is refreshed only if the corresponding radar data is static or semi-static; otherwise $S^{+}$ is set to $S_{init}$, and semi-static data is given a smaller weight than static data.
The invention has the following beneficial effects: the method combines deep learning with the creation of a conventional occupancy grid map. First, target recognition is combined with the conventional mapping algorithm to classify the laser data, which improves the occupancy-update method of the grid map, removes dynamic obstacles, and improves the robustness of the map. Second, the category labels are associated with the laser data so that the laser data carry semantic information, which enriches the environment information of the occupancy grid and improves the robot's perception of the environment.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 shows the camera data: FIG. 1(a) is an RGB image; FIG. 1(b) is a depth image;
FIG. 2 is a laser point cloud fitting diagram;
FIG. 3 shows laser point cloud images: FIG. 3(a) is the semantic point cloud data; FIG. 3(b) is the static and semi-static point cloud data;
FIG. 4 is a flow chart of single-frame data processing for map creation;
FIG. 5 shows example maps: FIG. 5(a) is a map from the Gmapping algorithm; FIG. 5(b) is a map from the improved Gmapping algorithm.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced, and they do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and their descriptions may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
Referring to fig. 1 to 5, a grid map creating method based on information fusion according to the present invention includes the following steps:
step one, a mobile robot platform is built, wherein YDLIDARX4 laser radar is used for acquiring laser point cloud data. TX2 is used as an arithmetic processor, an odometer adopts a wheel type encoder and a gyroscope to obtain accurate motion position information through EKF. Wherein the odometry information will be used in the mapping algorithm of the subsequent step six.
Step two: acquire an RGB image and a depth image with an Intel RealSense D435i camera, as shown in FIG. 1(a) and FIG. 1(b). The RGB image is obtained from the rgb/image_raw topic and the depth image from the depth/image_raw topic, and the two outputs are aligned. The RGB image topic is fed into the receiving topic of tiny-yolov4, object categories are identified, category boxes are drawn, and the category labels are attached.
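A minimal sketch of this detection step, running tiny-yolov4 offline on a single RGB frame with OpenCV's DNN module, might look as follows; the file paths, input size, and thresholds are assumptions rather than values given in the patent:

```python
import cv2

# Load tiny-yolov4 from darknet config/weight files (paths are hypothetical)
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
# tiny-yolov4 is commonly run at 416x416 with pixel values scaled to [0, 1]
model.setInputParams(size=(416, 416), scale=1.0 / 255, swapRB=True)

rgb = cv2.imread("rgb_frame.png")  # stand-in for the rgb/image_raw topic
class_ids, confidences, boxes = model.detect(rgb, confThreshold=0.5, nmsThreshold=0.4)

for cls, conf, (x, y, w, h) in zip(class_ids, confidences, boxes):
    # Center pixel of each detection box; used for the depth lookup in step three
    u, v = x + w // 2, y + h // 2
    print(f"class {int(cls)}  conf {float(conf):.2f}  center ({u}, {v})")
```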
Step three: from the category detection boxes obtained in step two, compute the center pixel of each detection box, obtain the coordinates (u, v) of the corresponding depth image from the aligned depth image, convert the image coordinate system to the camera coordinate system according to formula (1), and then convert from the camera coordinate system to the laser coordinate system.
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{1}$$
where f is the camera focal length, (u_0, v_0) is the principal point, and Z_c is the projection of the target on the Z-axis of the camera coordinate system.
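Under this pinhole model, the back-projection of step three can be sketched as below; the intrinsics (f, u_0, v_0) and the camera-to-laser extrinsic (R, t) are illustrative placeholders, not calibrated values:

```python
import numpy as np

f, u0, v0 = 460.0, 320.0, 240.0   # hypothetical intrinsics (pixels)
R = np.eye(3)                      # hypothetical camera-to-laser rotation
t = np.array([0.05, 0.0, -0.10])   # hypothetical translation (meters)

def pixel_to_laser(u, v, depth_m):
    """Back-project pixel (u, v) with depth Zc into the camera frame via
    formula (1), then transform the point into the laser coordinate system."""
    zc = depth_m
    xc = (u - u0) * zc / f
    yc = (v - v0) * zc / f
    p_cam = np.array([xc, yc, zc])
    return R @ p_cam + t

# Example: a detection-box center (u, v) with an aligned depth of 2.4 m
print(pixel_to_laser(350, 230, 2.4))
```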
Step four: from the result of the conversion in step three, the method proposes a new point cloud fitting approach, shown in FIG. 2. When the point F projected in step three cannot be matched directly to a laser point, the Euclidean distance is computed to fit and mark the point cloud data. If F is a projection point obtained from a dynamic label, error amplification is applied: the laser data inside the circle centered on F with radius threshold θ, i.e., the AB segment, is considered unreliable and is marked.
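This fitting and marking rule admits a compact sketch, assuming 2-D points expressed in the laser frame; the helper name mark_scan and the numeric value of the radius threshold θ are hypothetical:

```python
import numpy as np

THETA = 0.30  # radius threshold theta in meters; the value is an assumption

def mark_scan(scan_xy, proj_xy, label, is_dynamic):
    """Attach a semantic label to the laser points nearest the projected point
    F by Euclidean distance; for a dynamic label, error amplification marks
    every point inside the circle of radius THETA around F (the AB segment)
    as unreliable."""
    d = np.linalg.norm(scan_xy - proj_xy, axis=1)   # Euclidean distances to F
    labels = np.full(len(scan_xy), "", dtype=object)
    reliable = np.ones(len(scan_xy), dtype=bool)
    near = d <= THETA
    labels[near] = label                             # group same-object points
    if is_dynamic:
        reliable[near] = False                       # mark the AB segment
    return labels, reliable

# Example: a 4-point scan and a projection from a "person" (dynamic) label
scan = np.array([[1.0, 0.0], [1.1, 0.1], [3.0, 2.0], [0.2, -2.0]])
labels, reliable = mark_scan(scan, np.array([1.05, 0.05]), "person", is_dynamic=True)
print(labels, reliable)
```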
Step five: according to the fitting result of step four, the marked radar point cloud data is classified by its dynamics into static, semi-static, and dynamic. The dynamic point cloud data is filtered out, as shown in FIG. 3, and the static and semi-static data are retained as the mapping data for the subsequent steps.
Step six: build the map with the improved Gmapping algorithm from the radar data passed in from step five and the odometer data from step one. The algorithm is based on the RBPF particle filter and makes two improvements: first, the proposal distribution in the robot state-estimation stage is improved, which reduces the number of particles required; second, selective resampling is adopted, which mitigates particle degeneracy by setting a resampling threshold and resampling only when the rate of change of the particle weights exceeds it (a sketch follows below). As shown in FIG. 4, on this basis the invention uses tiny-yolov4 for identification, obtains the corresponding semantic point cloud data through coordinate conversion and fitting, divides the original laser data into three classes, and passes the static and semi-static laser data into the weight update and map update of the Gmapping algorithm. The point cloud data falls into three classes: static, semi-static, and dynamic. Dynamic point cloud data arises from dynamic obstacles such as walking people or moving pets, as shown in FIG. 3(a); such obstacles move and should not serve as a basis for map updates, so only the static and semi-static laser point cloud data are used for map updating, yielding a highly robust map, as shown in FIG. 3(b).
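The selective-resampling improvement can be sketched with the effective-sample-size criterion that standard RBPF filters use; treating the patent's "rate of change of the particle weights" test as equivalent to an N_eff threshold is an assumption:

```python
import numpy as np

def selective_resample(particles, weights, threshold_ratio=0.5):
    """Resample only when the effective sample size N_eff = 1 / sum(w^2)
    falls below a threshold (here half the particle count), which limits
    particle degeneracy without resampling on every update."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                       # normalize the importance weights
    neff = 1.0 / np.sum(w ** 2)
    if neff < threshold_ratio * len(w):
        idx = np.random.choice(len(w), size=len(w), p=w)  # multinomial draw
        particles = [particles[i] for i in idx]
        w = np.full(len(w), 1.0 / len(w))                 # reset weights
    return particles, w
```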
Further, the created grid map is a binary map: P(s=0) denotes the probability of the free state and P(s=1) the probability of the occupied state. An unknown grid is typically represented by a uniform fixed probability value, so the ratio of the two state probabilities defines the odds odd(s) of the grid state:
$$\mathrm{odd}(s) = \frac{P(s=1)}{P(s=0)} \tag{2}$$
When a sensor measurement z ∈ {0, 1} is observed, the odds odd(s) of the corresponding grid state must be updated; the updated expression is:
$$\mathrm{odd}(s \mid z) = \frac{P(s=1 \mid z)}{P(s=0 \mid z)} \tag{3}$$
According to the Bayes conditional probability formula:
$$P(s=1 \mid z) = \frac{P(z \mid s=1)\,P(s=1)}{P(z)} \tag{4}$$
$$P(s=0 \mid z) = \frac{P(z \mid s=0)\,P(s=0)}{P(z)} \tag{5}$$
Substituting formulas (4) and (5) into formula (3) and taking the logarithm of both sides:
$$\log \mathrm{odd}(s \mid z) = \log \frac{P(z \mid s=1)}{P(z \mid s=0)} + \log \mathrm{odd}(s) \tag{6}$$
In formula (6), the term $\log \frac{P(z \mid s=1)}{P(z \mid s=0)}$ is the measurement-value model; it is a fixed value that depends on the sensor. When the sensor observes the grid as occupied it takes the value $\mathrm{lo}_{occ} = \log \frac{P(z=1 \mid s=1)}{P(z=1 \mid s=0)}$, and when the sensor observes the grid as free it takes the value $\mathrm{lo}_{free} = \log \frac{P(z=0 \mid s=1)}{P(z=0 \mid s=0)}$.
The state of grid s is denoted $S = \log \mathrm{odd}(s)$, with $S^{+}$ and $S^{-}$ denoting the state of grid s after and before the measurement, respectively. Formula (6) can then be simplified as:
$$S^{+} = S^{-} + \log \frac{P(z \mid s=1)}{P(z \mid s=0)} \tag{7}$$
In addition:
$$S_{init} = \log \frac{P(s=1)}{P(s=0)} = \log \frac{0.5}{0.5} = 0 \tag{8}$$
so in the initial state of each grid the default free and occupied probabilities are equal. As shown in FIG. 4, the original radar data is divided into three states; according to formula (7), when the occupation probability of each grid is computed, $S^{+}$ is refreshed only if the corresponding radar data is static or semi-static; otherwise $S^{+}$ is set to $S_{init}$, and semi-static data is given a smaller weight than static data. Dynamic radar data is thus largely kept out of the map-refresh process, which removes dynamic noise and improves the robustness of the map, as shown in FIG. 5(a) and FIG. 5(b). This completes the processing of one frame of data for the grid map with semantic information.
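The per-grid refresh of formulas (7) and (8) can be sketched as follows; the measurement-model constants and the semi-static weight are illustrative values, since the patent does not state numbers:

```python
import numpy as np

LO_OCC, LO_FREE = 0.9, -0.7   # measurement-model constants; values illustrative
S_INIT = 0.0                   # log odd(0.5 / 0.5) = 0, per formula (8)
W_SEMI = 0.5                   # semi-static weight; the value is an assumption

def update_cell(s, z_occupied, point_class):
    """Log-odds update of formula (7): S+ = S- + lo_meas. Static and
    semi-static returns refresh the cell (semi-static with a smaller weight);
    dynamic returns reset the cell to S_init so they never enter the map."""
    if point_class == "dynamic":
        return S_INIT
    lo = LO_OCC if z_occupied else LO_FREE
    if point_class == "semi-static":
        lo *= W_SEMI
    return s + lo

def occupancy_prob(s):
    """Recover P(s=1) from the log-odds state of a grid cell."""
    return 1.0 / (1.0 + np.exp(-s))

# Example: one static hit followed by one semi-static hit on the same cell
s = update_cell(S_INIT, True, "static")
s = update_cell(s, True, "semi-static")
print(occupancy_prob(s))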
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (7)

1. A grid map creating method based on information fusion is characterized in that: the method comprises the following steps:
s1: acquiring laser and image data by using a mobile robot platform;
s2: utilizing tiny-yolov4 to obtain object class label information and a depth image;
s3: converting pixel coordinates corresponding to the class labels of the objects into a laser coordinate system by utilizing a coordinate conversion relation;
s4: fitting the laser point cloud data according to the mapped coordinate points of the step S3;
s5: according to the fitting result of the step S4, point cloud data classification and identification are carried out, and dynamic laser point cloud data are filtered;
s6: and creating a two-dimensional grid map by using the filtered point cloud data.
2. The grid map creation method based on information fusion according to claim 1, characterized in that: the step S1 specifically includes the following steps:
s11: building a mobile robot platform that acquires laser point cloud data with a lidar, image data with a camera, mileage information with an odometer, and motion position information with a gyroscope, and that performs overall control with a computing processor;
s12: and acquiring original data through the mobile robot platform.
3. The grid map creation method based on information fusion according to claim 1, characterized in that: step S2 specifically includes the following steps:
s21: aligning the outputs of the RGB image and the depth image captured by the camera;
s22: feeding the RGB image topic into the tiny-yolov4 network for detection, and marking the category labels of environmental objects in image coordinates.
4. The grid map creation method based on information fusion according to claim 1, characterized in that: step S3 specifically includes the following steps:
s31: calculating a central pixel point of the detection frame, and acquiring coordinates (u, v) of a corresponding depth image according to the aligned depth image;
s32: converting the depth pixel corresponding to the coordinate into a camera coordinate system according to the formula (1), and then converting the depth pixel into a laser coordinate system according to the camera coordinate system;
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{1}$$
where f is the camera focal length, (u_0, v_0) is the principal point, and Z_c is the projection of the target on the Z-axis of the camera coordinate system.
5. The grid map creation method based on information fusion according to claim 1, characterized in that: in step S4, the discrete point cloud data is fitted so that the laser scan points on the same object are grouped into the same class; when a projected point cannot be matched to a laser point, the Euclidean distance is computed to fit and mark the point cloud data; if the projected point is obtained from a dynamic label, error amplification is applied, and the laser data inside the circle centered on the projected point with radius threshold θ, i.e., the AB segment, is considered unreliable and is marked.
6. The grid map creation method based on information fusion according to claim 1, characterized in that: in step S5, according to the fitting result of step S4, the marked radar point cloud data is classified by its dynamics into static, semi-static, and dynamic; the dynamic laser point cloud data is filtered out, and the static and semi-static data are retained as the mapping data for the subsequent steps.
7. The grid map creation method based on information fusion according to claim 1, characterized in that: in step S6, a map is built with an improved Gmapping algorithm from the radar data passed in from step S5 and the odometer data passed in from step S1; the Gmapping algorithm is based on the RBPF particle filter, improves the proposal distribution in the robot state-estimation stage, and sets a resampling threshold so that resampling is performed when the rate of change of the particle weights exceeds the threshold;
tiny-yolov4 is used for identification, the corresponding semantic point cloud data is obtained through coordinate conversion and fitting, the original laser data is divided into three classes, and the static and semi-static laser data are passed into the weight update and map update of the Gmapping algorithm;
the created grid map is a binary map, where P(s=0) denotes the probability of the free state and P(s=1) the probability of the occupied state; an unknown grid is represented by a uniform fixed probability value, and the ratio of the two state probabilities defines the odds odd(s) of the grid state:
$$\mathrm{odd}(s) = \frac{P(s=1)}{P(s=0)} \tag{2}$$
when a sensor measurement z ∈ {0, 1} is observed, the odds odd(s) of the corresponding grid state is updated; the updated expression is:
$$\mathrm{odd}(s \mid z) = \frac{P(s=1 \mid z)}{P(s=0 \mid z)} \tag{3}$$
according to the Bayes conditional probability formula:
$$P(s=1 \mid z) = \frac{P(z \mid s=1)\,P(s=1)}{P(z)} \tag{4}$$
$$P(s=0 \mid z) = \frac{P(z \mid s=0)\,P(s=0)}{P(z)} \tag{5}$$
substituting formulas (4) and (5) into formula (3) and taking the logarithm of both sides:
$$\log \mathrm{odd}(s \mid z) = \log \frac{P(z \mid s=1)}{P(z \mid s=0)} + \log \mathrm{odd}(s) \tag{6}$$
in formula (6), the term $\log \frac{P(z \mid s=1)}{P(z \mid s=0)}$ is the measurement-value model; it is a fixed value that depends on the sensor; when the sensor observes the grid as occupied it takes the value $\mathrm{lo}_{occ} = \log \frac{P(z=1 \mid s=1)}{P(z=1 \mid s=0)}$, and when the sensor observes the grid as free it takes the value $\mathrm{lo}_{free} = \log \frac{P(z=0 \mid s=1)}{P(z=0 \mid s=0)}$;
the state of grid s is denoted $S = \log \mathrm{odd}(s)$, with $S^{+}$ and $S^{-}$ denoting the state of grid s after and before the measurement, respectively; formula (6) reduces to:
$$S^{+} = S^{-} + \log \frac{P(z \mid s=1)}{P(z \mid s=0)} \tag{7}$$
$$S_{init} = \log \frac{P(s=1)}{P(s=0)} = \log \frac{0.5}{0.5} = 0 \tag{8}$$
in the initial state of each grid, the default free and occupied probabilities are equal;
according to formula (7), when the occupation probability of each grid is computed, $S^{+}$ is refreshed only if the corresponding radar data is static or semi-static; otherwise $S^{+}$ is set to $S_{init}$, and semi-static data is given a smaller weight than static data.
CN202111087246.3A 2021-09-16 2021-09-16 Grid map creation method based on information fusion Active CN113804182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111087246.3A CN113804182B (en) 2021-09-16 2021-09-16 Grid map creation method based on information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111087246.3A CN113804182B (en) 2021-09-16 2021-09-16 Grid map creation method based on information fusion

Publications (2)

Publication Number Publication Date
CN113804182A true CN113804182A (en) 2021-12-17
CN113804182B CN113804182B (en) 2023-09-29

Family

ID=78941326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111087246.3A Active CN113804182B (en) 2021-09-16 2021-09-16 Grid map creation method based on information fusion

Country Status (1)

Country Link
CN (1) CN113804182B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253273A (en) * 2021-12-23 2022-03-29 南京世泽科技有限公司 Obstacle avoidance method based on multi-line laser radar
CN114397638A (en) * 2022-01-22 2022-04-26 深圳市神州云海智能科技有限公司 Method and system for filtering dynamic data in laser radar data

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105865449A (en) * 2016-04-01 2016-08-17 深圳杉川科技有限公司 Laser and vision-based hybrid location method for mobile robot
CN108885106A (en) * 2016-01-08 2018-11-23 Intelligent Technologies International, Inc. Vehicular component control using maps
CN109641538A (en) * 2016-07-21 2019-04-16 Intelligent Technologies International, Inc. System and method for creating and updating maps using vehicles
CN111024100A (en) * 2019-12-20 2020-04-17 深圳市优必选科技股份有限公司 Navigation map updating method and device, readable storage medium and robot
CN111089585A (en) * 2019-12-30 2020-05-01 哈尔滨理工大学 Mapping and positioning method based on sensor information fusion
US20200183011A1 (en) * 2018-12-11 2020-06-11 Shanghai Tusen Weilai Artificial Intelligence Technology Co., Ltd. Method for creating occupancy grid map and processing apparatus
CN111486855A (en) * 2020-04-28 2020-08-04 武汉科技大学 Indoor two-dimensional semantic grid map construction method with object navigation points
CN111928862A (en) * 2020-08-10 2020-11-13 廊坊和易生活网络科技股份有限公司 Method for constructing semantic map on line by fusing laser radar and visual sensor
CN112731371A (en) * 2020-12-18 2021-04-30 重庆邮电大学 Laser radar and vision fused integrated target tracking system and method
CN113108773A (en) * 2021-04-22 2021-07-13 哈尔滨理工大学 Grid map construction method integrating laser and visual sensor

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108885106A (en) * 2016-01-08 2018-11-23 Intelligent Technologies International, Inc. Vehicular component control using maps
US20210199437A1 (en) * 2016-01-08 2021-07-01 Intelligent Technologies International, Inc. Vehicular component control using maps
CN105865449A (en) * 2016-04-01 2016-08-17 深圳杉川科技有限公司 Laser and vision-based hybrid location method for mobile robot
CN109641538A (en) * 2016-07-21 2019-04-16 Intelligent Technologies International, Inc. System and method for creating and updating maps using vehicles
US20190271550A1 (en) * 2016-07-21 2019-09-05 Intelligent Technologies International, Inc. System and Method for Creating, Updating, and Using Maps Generated by Probe Vehicles
US20200183011A1 (en) * 2018-12-11 2020-06-11 Shanghai Tusen Weilai Artificial Intelligence Technology Co., Ltd. Method for creating occupancy grid map and processing apparatus
CN111024100A (en) * 2019-12-20 2020-04-17 深圳市优必选科技股份有限公司 Navigation map updating method and device, readable storage medium and robot
CN111089585A (en) * 2019-12-30 2020-05-01 哈尔滨理工大学 Mapping and positioning method based on sensor information fusion
CN111486855A (en) * 2020-04-28 2020-08-04 武汉科技大学 Indoor two-dimensional semantic grid map construction method with object navigation points
CN111928862A (en) * 2020-08-10 2020-11-13 廊坊和易生活网络科技股份有限公司 Method for constructing semantic map on line by fusing laser radar and visual sensor
CN112731371A (en) * 2020-12-18 2021-04-30 重庆邮电大学 Laser radar and vision fused integrated target tracking system and method
CN113108773A (en) * 2021-04-22 2021-07-13 哈尔滨理工大学 Grid map construction method integrating laser and visual sensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zeng Jian et al., "A grid map construction method fusing laser and vision sensors", Industrial Control Computer, vol. 33, no. 9
Li Xinghe, "Research on terrain estimation and attribute cognition based on laser scanning and vision fusion", China Master's Theses Full-text Database

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253273A (en) * 2021-12-23 2022-03-29 南京世泽科技有限公司 Obstacle avoidance method based on multi-line laser radar
CN114253273B (en) * 2021-12-23 2024-04-12 南京世泽科技有限公司 Obstacle avoidance method based on multi-line laser radar
CN114397638A (en) * 2022-01-22 2022-04-26 深圳市神州云海智能科技有限公司 Method and system for filtering dynamic data in laser radar data

Also Published As

Publication number Publication date
CN113804182B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
US11798169B2 (en) Sensor data segmentation
CN111210518B (en) Topological map generation method based on visual fusion landmark
CN110007675B (en) Vehicle automatic driving decision-making system based on driving situation map and training set preparation method based on unmanned aerial vehicle
Cui et al. 3D semantic map construction using improved ORB-SLAM2 for mobile robot in edge computing environment
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
Jeong et al. Multimodal sensor-based semantic 3D mapping for a large-scale environment
CN113804182A (en) Grid map creating method based on information fusion
US20230236280A1 (en) Method and system for positioning indoor autonomous mobile robot
Tang et al. Onboard detection-tracking-localization
Gwak et al. A review of intelligent self-driving vehicle software research
de Paula Veronese et al. Evaluating the limits of a LiDAR for an autonomous driving localization
Ren et al. Lightweight semantic-aided localization with spinning LiDAR sensor
Chen et al. RSPMP: Real-time semantic perception and motion planning for autonomous navigation of unmanned ground vehicle in off-road environments
Xu et al. Dynamic vehicle pose estimation and tracking based on motion feedback for LiDARs
CN110780325A (en) Method and device for positioning moving object and electronic equipment
CN113034504A (en) Plane feature fusion method in SLAM mapping process
CN112907625A (en) Target following method and system applied to four-footed bionic robot
EP4148599A1 (en) Systems and methods for providing and using confidence estimations for semantic labeling
CN115482282A (en) Dynamic SLAM method with multi-target tracking capability in automatic driving scene
CN115661341A (en) Real-time dynamic semantic mapping method and system based on multi-sensor fusion
CN112747752B (en) Vehicle positioning method, device, equipment and storage medium based on laser odometer
Abdelhafid et al. Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars.
CN113554705A (en) Robust positioning method for laser radar in changing scene
CN114384486A (en) Data processing method and device
CN112419461A (en) Collaborative unmanned system joint semantic mapping method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant