CN111207753A - Method for simultaneously positioning and establishing picture under multi-glass partition environment


Info

Publication number
CN111207753A
Authority
CN
China
Prior art keywords
data
pose
grid map
mapping
new frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010090142.7A
Other languages
Chinese (zh)
Inventor
杨信田
任子武
刘强龙
姜瑞卿
杨凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN202010090142.7A priority Critical patent/CN111207753A/en
Publication of CN111207753A publication Critical patent/CN111207753A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/206 — Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiment of the invention discloses a method for simultaneous localization and mapping in a multi-glass-partition environment. The embodiment fuses several kinds of position data into base data, uses the base data for scan matching and map building, and performs loop-closure optimization on the newly built map. Glass is thus identified effectively, the resulting map is accurate and complete, and autonomous-navigation functions of the mobile robot, such as subsequent path planning, are supported.

Description

Method for simultaneously positioning and establishing picture under multi-glass partition environment
Technical Field
The invention relates to the technical field of mobile robots, and in particular to a method for simultaneous localization and mapping in a multi-glass-partition environment.
Background
Simultaneous localization and mapping (SLAM) means that a mobile robot, starting from an unknown position in an unknown environment, localizes itself during motion from pose estimates and observations of its surroundings, while at the same time building an incremental map based on that self-localization, ultimately enabling autonomous localization and navigation of the mobile robot.
At present, existing SLAM methods perceive the environment with optical sensors only, such as lidar or cameras, and therefore have difficulty accurately detecting nearby transparent obstacles such as glass or clear plastic. Many buildings use glass partitions to divide space (glass doors, curtain walls, and the like). Because existing SLAM methods cannot detect transparent obstacles, the maps they build are incomplete, which hinders autonomous-navigation functions of the mobile robot such as subsequent path planning.
Therefore, in order to solve the above technical problems, it is necessary to provide a method for simultaneously positioning and mapping in a multi-glass partition environment.
Disclosure of Invention
In view of this, and aiming at the problem that prior-art SLAM methods cannot identify transparent obstacles and therefore build defective maps, the embodiment of the invention fuses several kinds of position data into base data, uses the base data for scan matching and map building, and performs loop-closure optimization on the newly built map. Transparent obstacles are thus identified effectively, the resulting map is accurate and complete, and autonomous-navigation functions of the mobile robot, such as subsequent path planning, are supported.
In order to achieve the above object, an embodiment of the present invention provides the following technical solution. A method for simultaneous localization and mapping in a multi-glass-partition environment comprises the steps of:
S1: acquiring position data with a plurality of position-information sensors, and processing the position data to obtain base data for building a map;
S2: constructing a probability grid map from a preset number of frames of base data; matching the base data of a new frame against the probability grid map, with a pose resolver providing an initial pose value, to obtain the optimal pose of the new frame's base data relative to the probability grid map; inputting the optimal pose into a motion comparator, judging the size of the pose change, and deciding from the result whether to update the probability grid map;
S3: performing loop detection between the probability grid map processed in step S2 and the new frame's base data from step S2, with the pose resolver providing an initial pose value, to judge whether any portion of the point-cloud match between the probability grid map and the new frame's base data exceeds a preset matching degree, and optimizing the poses according to the result.
As a further improvement of the present invention, the plurality of position information sensors include an ultrasonic ranging sensor, a lidar sensor, and a pose sensor.
As a further improvement of the present invention, the optimal pose obtained in step S2 and the data obtained by the pose sensor are used together as inputs to the pose resolver.
As a further improvement of the present invention, the processing of the position data includes data preprocessing and fusion processing of data obtained after the data preprocessing.
As a further improvement of the present invention, the data obtained by the ultrasonic ranging sensor are preprocessed by converting their format.
As a further improvement of the invention, the data obtained by the laser radar sensor is preprocessed to carry out motion distortion correction and filtering denoising on the data.
As a further improvement of the invention, the data obtained by the pose sensor is preprocessed to carry out data correction and pose fusion on the data.
As a further improvement of the present invention, step S2 includes ignoring the basic data of the new frame if the pose change size is within the preset range; and if the pose change size exceeds a preset range, inserting the basic data of the new frame into the probability grid map to update the probability grid map.
As a further improvement of the present invention, if, when the base data of the new frame is inserted, the amount of fused point-cloud data is equal to or greater than a set threshold, a new probability grid map is constructed.
As a further improvement of the present invention, step S3 includes generating a loop constraint and obtaining the pose of the base data of the new frame with respect to the probability grid map if there is a portion where the point cloud matching between the probability grid map and the base data of the new frame exceeds a preset matching degree.
The invention has the following advantages:
the embodiment of the invention fuses several kinds of position data into base data, uses the base data for scan matching and map building, and performs loop-closure optimization on the newly built map, thereby effectively identifying transparent obstacles, making the built map accurate and complete, and supporting autonomous-navigation functions of the mobile robot such as subsequent path planning. Furthermore, the simultaneous localization and mapping method provided by the embodiment of the invention needs no wheel-odometer data, can be applied to various kinds of indoor mobile robots (wheeled, tracked, or legged), and is highly general.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method for simultaneous positioning and mapping in a multiple glass partition environment according to an embodiment of the present invention;
FIG. 2 is another schematic representation of the embodiment shown in FIG. 1;
FIG. 3 is a schematic diagram of a data transformation model of ultrasonic ranging data;
FIG. 4 is a perspective view of a corridor of an experimental environment;
FIG. 5 is a close-up view of the corridor of the experimental environment of FIG. 4;
FIG. 6 is a schematic map diagram of the experimental environment shown in FIG. 4 established by the prior SLAM method;
fig. 7 is a schematic map diagram of the experimental environment shown in fig. 4 established by using the SLAM method according to the embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the drawings in the embodiment of the present invention, and it is obvious that the described embodiment is only a part of the embodiment of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of the simultaneous localization and mapping method in a multi-glass-partition environment according to an embodiment of the present invention. Referring to figs. 1 and 2, the method generally comprises three steps; each step is described in detail below.
Step S1: collect position data with several kinds of position-information sensors, and process the position data to obtain base data for mapping. As shown in fig. 2, this step may be referred to simply as data processing. The position-information sensors include ultrasonic ranging sensors, a laser ranging sensor, and a pose sensor. The ultrasonic ranging sensors are distributed over the side faces of the mobile robot (for example, a hexagonal-prism body), so that the distance to an obstacle ahead within measuring range can be obtained. The laser ranging sensor obtains 360° distance information (i.e., a profile) of the environment around the robot's position. The pose sensor is fixed at the center of the mobile robot and outputs, at high frequency, the robot's three-axis acceleration, angular velocity, magnetic field strength, and barometric altitude. Together, these sensors provide all-around environment and position information for the mobile robot and avoid the problem that a single sensor type, with its incomplete information, cannot supply data capable of resolving transparent obstacles.
The processing of the position data comprises data preprocessing and fusion of the preprocessed data. The preprocessing applied to each sensor's data is as follows: the data obtained by the ultrasonic ranging sensors are format-converted; the data obtained by the laser sensor undergo motion-distortion correction, filtering, and denoising; the data obtained by the pose sensor undergo data correction and attitude fusion.
The data obtained by the ultrasonic ranging sensors (ultrasonic ranging data for short) are converted: according to the installation position of each ultrasonic ranging sensor, each measured distance value is turned into discrete distance values with associated angles, in the same format as the laser data, for subsequent point-cloud fusion. As shown in fig. 3, the data-conversion model is obtained from a calibration of the ultrasonic probe used; the specific calculation is given by formula 1:

[formula 1 — the calibration-derived conversion from d_u to d_θ^u; reproduced in the source only as an image]

where d_θ^u is the distance value corresponding to angle θ after conversion of the ultrasonic ranging data; d_u is the raw value of the ultrasonic ranging data; θ is the angle of each point relative to the probe centerline at the time of conversion.
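Since the conversion behind formula 1 survives only as an image in the source, the sketch below substitutes an assumed flat-obstacle beam model (d_θ^u = d_u / cos θ) purely for illustration; the patent's actual model comes from calibrating the probe, and the function name and parameters here are hypothetical:

```python
import math

def sonar_to_points(d_u, beam_half_angle_deg=15.0, n_points=7):
    """Expand one ultrasonic reading d_u (metres) into laser-like
    (angle, range) pairs spread across the beam cone.

    Assumes a flat obstacle perpendicular to the probe centreline, so
    the ray at angle theta meets it at d_u / cos(theta). Illustrative
    only: the patent derives its conversion from probe calibration.
    """
    points = []
    for i in range(n_points):
        # spread angles evenly across [-half, +half] of the beam cone
        theta_deg = -beam_half_angle_deg + i * (2 * beam_half_angle_deg / (n_points - 1))
        theta = math.radians(theta_deg)
        points.append((theta, d_u / math.cos(theta)))
    return points
```

Each resulting pair has the (angle, range) shape of a laser-scan point, which is what lets the later fusion step operate per angle.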
The data obtained by the laser sensor (usually one frame of scanned point-cloud data, laser ranging data for short) undergo motion-distortion correction and filtering (necessary for a low-frame-rate, single-line scanning lidar). The filtering comprises boundary filtering and voxel filtering: boundary filtering mainly removes point-cloud data beyond the measuring range together with noise points; voxel filtering downsamples the data to obtain point-cloud samples of the frame at different resolutions.
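The two filters just described can be sketched as follows; the range limits and voxel size are assumed example values, not figures from the patent:

```python
import numpy as np

def boundary_filter(points, r_min=0.05, r_max=12.0):
    """Drop points outside the sensor's valid measuring range.
    points: (N, 2) array of x, y coordinates in the sensor frame."""
    r = np.linalg.norm(points, axis=1)
    return points[(r >= r_min) & (r <= r_max)]

def voxel_filter(points, voxel=0.05):
    """Downsample by keeping one representative point (the mean) per
    voxel-sized 2-D cell; running this with several voxel sizes gives
    the multi-resolution samples mentioned in the text."""
    keys = np.floor(points / voxel).astype(np.int64)
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in cells.values()])
```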
The calculation for fusing the ultrasonic ranging data and the laser ranging data into one point cloud is given by formula 2:

d_θ = \min(d_θ^u, d_θ^l)  (formula 2)

where d_θ^u is the distance value corresponding to angle θ after conversion of the ultrasonic ranging data; d_θ^l is the distance value corresponding to angle θ after correction and filtering of the laser ranging data; d_θ is the distance value corresponding to angle θ in the fused point cloud.
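A minimal sketch of this per-angle fusion, assuming the minimum-distance rule of formula 2; the function name and the dict-based interface are illustrative, not from the patent:

```python
def fuse_ranges(ultra, laser):
    """Per-angle fusion of ultrasonic and laser range values.

    ultra, laser: dicts mapping angle (rad) -> distance (m); an angle
    present in only one sensor keeps that sensor's value.
    Taking the minimum keeps the nearest obstacle at each angle, so a
    glass pane seen only by the ultrasonic sensor survives fusion even
    though the laser beam passed straight through it.
    """
    fused = {}
    for theta in set(ultra) | set(laser):
        candidates = [d[theta] for d in (ultra, laser) if theta in d]
        fused[theta] = min(candidates)
    return fused
```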
The data obtained by the pose sensor (pose data for short) are preprocessed, mainly by data correction and attitude fusion, and are used for the motion-distortion correction of the laser ranging data and for the pose calculation of the subsequent pose resolver.
S2: construct a probability grid map (submap for short) from a preset number of frames of base data; match the base data of a new frame against the probability grid map, with the pose resolver providing an initial pose value, to obtain the optimal pose of the new frame's base data relative to the probability grid map; input the optimal pose into the motion comparator, judge the size of the pose change, and decide from the result whether to update the probability grid map. The preset frame number ranges from 10 to 50 frames; the specific number is tuned according to the mapping area, the sensor performance, and similar parameters. As shown in fig. 2, this step may be referred to simply as scan mapping.
Preferably, the optimal pose obtained in the step and data obtained by the pose sensor are simultaneously used as input quantities of the pose resolver, and a more appropriate pose initial value can be obtained through closed-loop feedback.
If the pose change is within a preset range, the base data of the new frame is ignored; if the pose change exceeds the preset range, the base data of the new frame is inserted into the probability grid map to update it. The pose change comprises a displacement change and a rotation-angle change; the preset range is a displacement change of at most 0.2 m and a rotation change of at most 1 rad, and the specific values can be fine-tuned according to the required mapping precision. If, when the base data of the new frame is inserted, the amount of fused point-cloud data is equal to or greater than a set threshold, a new probability grid map is started; the threshold is consistent with the preset frame number, 10 to 50 frames.
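The motion-comparator decision above can be sketched as follows, using the 0.2 m / 1 rad thresholds from the text (the function name and pose tuple layout are illustrative):

```python
import math

def should_insert(prev_pose, new_pose, d_max=0.2, a_max=1.0):
    """Motion-comparator check: insert the new frame into the current
    submap only if the pose changed enough.

    Poses are (x, y, theta) tuples; per the text, a frame whose
    displacement is <= 0.2 m and rotation <= 1 rad is ignored.
    """
    dx = new_pose[0] - prev_pose[0]
    dy = new_pose[1] - prev_pose[1]
    # wrap the angle difference into (-pi, pi]
    da = (new_pose[2] - prev_pose[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dx, dy) > d_max or abs(da) > a_max
```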
The scan-matching step uses a nonlinear least-squares formulation; the calculation is given by formula 3:

p^* = \arg\min_{p} \sum_{k=1}^{K} \left(1 - M_{smooth}(T_p d_k)\right)^2  (formula 3)

where p = (x, y, θ) is the pose of the frame of fused point cloud relative to the current submap; D_K = \{d_k\}, k = 1, …, K, is the fused point cloud; T_p is the coordinate transformation that expresses each point of the cloud in the submap coordinate system, given by formula 4:

T_p d_k = \begin{pmatrix} \cos θ & -\sin θ \\ \sin θ & \cos θ \end{pmatrix} d_k + \begin{pmatrix} x \\ y \end{pmatrix}  (formula 4)
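The rigid transform of formula 4 in a few lines of code (illustrative; the function name is not from the patent):

```python
import numpy as np

def transform_points(pose, points):
    """Apply formula 4: rotate the scan points by theta and translate
    by (x, y), expressing them in the submap coordinate frame.

    pose: (x, y, theta); points: (N, 2) array in the scan frame."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([x, y])
```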
M_{smooth} is the grid-probability smoothing function; bicubic interpolation over neighbouring cells is used to improve precision, as in formula 5:

M_{smooth}(x, y) = \sum_{m=0}^{3} \sum_{n=0}^{3} W(x - x_m)\, W(y - y_n)\, M(x_m, y_n)  (formula 5)

where W(x) is the bicubic-interpolation basis function and M(x, y) is the probability value of the grid cell at coordinate (x, y) in the current submap.
In this step the submap is represented as an occupancy grid map, and the probability value of a cell is updated as shown in formula 6, with S the log-odds function:

S(p) = \log \frac{p}{1 - p}, \qquad M(x, y) = \mathrm{clamp}\{ S^{-1}[ S(M_{old}(x, y)) + S(p) ] \}  (formula 6)

where M_{old}(x, y) and M(x, y) are respectively the probability values of the grid cell at coordinate (x, y) in the current submap before and after the update; in S(p), p is generally p_{occu} (occupied), p_{free} (free), or p_{init} (initial value); clamp{·} limits M(x, y) to the range [p_{min}, p_{max}].
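A sketch of the cell update of formula 6, taking S as the log-odds function (consistent with the additive form above); the clamp bounds and observation probability used here are assumed example values, not the patent's:

```python
import math

P_MIN, P_MAX = 0.12, 0.97  # example clamp bounds [p_min, p_max]

def S(p):
    """Log-odds; adding S values composes independent observations."""
    return math.log(p / (1.0 - p))

def S_inv(l):
    """Inverse log-odds (logistic function)."""
    return 1.0 / (1.0 + math.exp(-l))

def update_cell(m_old, p_obs):
    """One occupancy-grid cell update per formula 6:
    M = clamp(S^-1(S(M_old) + S(p_obs)))."""
    m = S_inv(S(m_old) + S(p_obs))
    return min(max(m, P_MIN), P_MAX)
```

Starting from the neutral value 0.5 (log-odds 0), a single observation with probability p leaves the cell exactly at p; repeated hits push the cell toward P_MAX, repeated misses toward P_MIN.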
S3: and performing loop detection on the probability grid map processed in the step S2 and the basic data of the new frame in the step S2, providing a pose initial value by a pose resolver to judge whether a part with point cloud matching exceeding a preset matching degree exists between the probability grid map and the basic data of the new frame, and optimizing the pose according to a judgment result. As shown in fig. 2, this step may be referred to simply as loop-back optimization. In one embodiment, the predetermined matching degree is 0.8 k.
Step S3 includes: if there is a portion where the point-cloud match between the probability grid map and the new frame's base data exceeds the preset matching degree, generating a loop constraint and obtaining the pose of the new frame's base data relative to the probability grid map. Each loop constraint relates a fused point cloud and its corresponding submap through a relative-pose constraint (i.e., a pose error value); the poses of all completed submaps, and of the fused point clouds that generate loop constraints, are then optimized by minimizing the total pose error, so that all pose constraints become maximally consistent.
Loop detection is similar to scan matching; the calculation is given by formula 7:

p^* = \arg\max_{p \in W} \sum_{k=1}^{K} M(T_p d_k)  (formula 7)

where W is the search space; to improve search efficiency, the method adopts a depth-first branch-and-bound search.
The pose-optimization process can be formulated as a nonlinear least-squares problem, as in formula 8:

\arg\min_{\Xi^m, \Xi^n} \frac{1}{2} \sum_{i,j} \rho\left( E^2(\xi_i^m, \xi_j^n; \Sigma_{i,j}, p_{i,j}) \right)  (formula 8)

where Ξ^m = \{\xi_i^m\} is the set of poses, in the global coordinate system, of the m fused point clouds that generate loop constraints; Ξ^n = \{\xi_j^n\} is the set of poses, in the global coordinate system, of the n submaps corresponding to the loop constraints; p_{i,j} is the pose of fused point cloud i, which generates a loop constraint, within its corresponding submap j, and Σ_{i,j} is its covariance matrix; ρ is a loss function that suppresses the influence of erroneous loop constraints. The error function E is defined as shown in formula 9:

E^2(\xi_i^m, \xi_j^n; \Sigma_{i,j}, p_{i,j}) = e^{\mathsf T} \Sigma_{i,j}^{-1} e, \qquad e = p_{i,j} - T_{\xi_j^n}\, \xi_i^m  (formula 9)

where T_{\xi_j^n} is the pose-transformation matrix from the global coordinate system to the coordinate system of submap j.
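One loop-constraint residual of formula 9 can be sketched for 2-D poses as follows; this is a reconstruction under the assumption that poses are (x, y, θ) triples, and the function names are illustrative:

```python
import math

def relative_pose(xi_submap, xi_scan):
    """Pose of the scan expressed in the submap frame: applies the
    transform of formula 9 that takes global poses into the
    coordinate system of submap j."""
    xs, ys, ts = xi_submap
    xi, yi, ti = xi_scan
    c, s = math.cos(ts), math.sin(ts)
    # rotate the global displacement into the submap frame
    dx, dy = xi - xs, yi - ys
    return (c * dx + s * dy, -s * dx + c * dy, ti - ts)

def residual(p_ij, xi_scan, xi_submap):
    """e = p_ij - T(submap_j)^-1 * scan_i, where p_ij is the measured
    relative pose from scan matching; the angular component is
    wrapped into (-pi, pi]."""
    rx, ry, rt = relative_pose(xi_submap, xi_scan)
    ex, ey = p_ij[0] - rx, p_ij[1] - ry
    et = (p_ij[2] - rt + math.pi) % (2 * math.pi) - math.pi
    return (ex, ey, et)
```

When the globally optimized poses agree exactly with the measured constraint p_{i,j}, the residual is zero; the optimizer of formula 8 drives the weighted sum of squared residuals toward that state.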
The embodiment of the invention fuses several kinds of position data into base data, uses the base data for scan matching and map building, and performs loop-closure optimization on the newly built map, thereby effectively identifying transparent obstacles, making the built map accurate and complete, and supporting autonomous-navigation functions of the mobile robot such as subsequent path planning.
The simultaneous localization and mapping method needs no wheel-odometer data, can be applied to various kinds of indoor mobile robots (wheeled, tracked, or legged), and is highly general. Furthermore, the method provided by the embodiment of the invention places low demands on hardware (sensors and processor), which helps reduce the overall cost of the mobile robot.
An experiment with the simultaneous localization and mapping method of the embodiment in an actual indoor multi-glass environment is as follows. As shown in figs. 4 and 5, the corridor of the experimental environment has, on both sides, discontinuous glass doors and partition walls (transparent glass in the lower half) as well as solid walls. Fig. 6 is a map built with an existing SLAM method; from it one can see that only the solid walls and parts of the glass mounting columns are identified along the corridor, while parts of the indoor environment are scanned straight through the glass, and the situation at the floor-to-ceiling windows at the ends of the corridor is similar. Fig. 7 is a map built with the simultaneous localization and mapping method of the embodiment of the present invention; from it one can see that the environment on both sides and at the ends of the corridor is described truthfully, the transparent glass no longer degrades the mapping result, and the map can be used directly for autonomous-navigation functions such as path planning.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (10)

1. A method for simultaneously positioning and mapping in a multi-glass partition environment is characterized by comprising the following steps:
s1: acquiring position data by adopting a plurality of position information sensors, and processing the position data to obtain basic data for establishing a map;
s2: constructing a probability grid map by using basic data with a preset frame number; matching the basic data of the new frame with the probability grid map, and providing a pose initial value by a pose resolver to obtain the optimal pose of the basic data of the new frame relative to the probability grid map; inputting the optimal pose into a motion comparator, judging the change size of the pose at this time and determining whether to update the probability grid map according to the judgment result;
s3: and performing loop detection on the probability grid map processed in the step S2 and the basic data of the new frame in the step S2, providing a pose initial value by a pose resolver to judge whether a part with point cloud matching exceeding a preset matching degree exists between the probability grid map and the basic data of the new frame, and optimizing the pose according to a judgment result.
2. The method for simultaneous localization and mapping in a multiple glass isolation environment of claim 1, wherein said plurality of position information sensors comprises ultrasonic ranging sensors, lidar sensors, and pose sensors.
3. The method for simultaneous localization and mapping in a multiple glass isolation environment according to claim 2, wherein the optimal pose obtained in step S2 and the data obtained by the pose sensor are used as input quantities of the pose resolver at the same time.
4. The method for simultaneous localization and mapping in a multiple glass isolation environment as claimed in claim 2, wherein said processing of location data comprises data preprocessing and fusion processing of data obtained after said data preprocessing.
5. The method for simultaneous localization and mapping in a multiple glass isolation environment of claim 4, wherein said data obtained by said ultrasonic ranging sensors is preprocessed to format convert said data.
6. The method for simultaneous localization and mapping in a multiple glass partition environment of claim 4, wherein said lidar sensor data is preprocessed by motion distortion correction and filtering for de-noising.
7. The method for simultaneous localization and mapping in a multiple glass isolation environment of claim 4, wherein the data obtained by said pose sensors is preprocessed to data correction and pose fusion of said data.
8. The method for simultaneous localization and mapping in a multiple glass partition environment according to claim 1, wherein step S2 includes ignoring the basic data of the new frame if the pose change size is within a preset range; and if the pose change size exceeds a preset range, inserting the basic data of the new frame into the probability grid map to update the probability grid map.
9. The method for simultaneous localization and mapping in a multiple glass partition environment according to claim 8, wherein a new probability grid map is constructed if the number of data of the fusion point cloud is equal to or greater than a predetermined number threshold when the basic data of the new frame is inserted.
10. The method of claim 1, wherein step S3 includes generating a loop constraint and obtaining the pose of the base data of the new frame with respect to the probability grid map if there is a portion between the probability grid map and the base data of the new frame where the point cloud match exceeds a preset match.
CN202010090142.7A 2020-02-13 2020-02-13 Method for simultaneously positioning and establishing picture under multi-glass partition environment Pending CN111207753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010090142.7A CN111207753A (en) 2020-02-13 2020-02-13 Method for simultaneously positioning and establishing picture under multi-glass partition environment

Publications (1)

Publication Number Publication Date
CN111207753A true CN111207753A (en) 2020-05-29

Family

ID=70786850


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2015116049A (en) * 2015-04-27 2016-11-20 Открытое акционерное общество "Специальное конструкторское бюро приборостроения и автоматики" MULTIFUNCTIONAL NAVIGATION SYSTEM FOR MOBILE GROUND OBJECTS
CN108247647A (en) * 2018-01-24 2018-07-06 速感科技(北京)有限公司 A kind of clean robot
CN109839118A (en) * 2017-11-24 2019-06-04 北京京东尚科信息技术有限公司 Paths planning method, system, robot and computer readable storage medium
CN110361010A (en) * 2019-08-13 2019-10-22 中山大学 It is a kind of based on occupy grating map and combine imu method for positioning mobile robot

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WOLFGANG HESS et al.: "Real-time loop closure in 2D LIDAR SLAM", 2016 IEEE International Conference on Robotics and Automation (ICRA) *
杨记周 (Yang Jizhou): "Algorithm optimization for localization and path planning of an indoor mobile robot", China Masters' Theses Full-text Database, Information Science and Technology Series *
梦凝小筑 (blog): "Cartographer source code analysis 1.1 — overall algorithm structure", HTTPS://BLOG.CSDN.NET/WEIXIN_36976685/ARTICLE/DETAILS/89204034 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111795687A (en) * 2020-06-29 2020-10-20 深圳市优必选科技股份有限公司 Robot map updating method and device, readable storage medium and robot
CN111982124A (en) * 2020-08-27 2020-11-24 华中科技大学 Deep learning-based three-dimensional laser radar navigation method and device in glass scene
CN111982124B (en) * 2020-08-27 2022-11-01 华中科技大学 Deep learning-based three-dimensional laser radar navigation method and device in glass scene
CN112414408A (en) * 2020-11-05 2021-02-26 苏州三六零机器人科技有限公司 Sweeper navigation method and device, sweeper and storage medium
CN112414408B (en) * 2020-11-05 2023-03-14 苏州三六零机器人科技有限公司 Sweeper navigation method and device, sweeper and storage medium
CN113203409A (en) * 2021-07-05 2021-08-03 北京航空航天大学 Method for constructing navigation map of mobile robot in complex indoor environment

Similar Documents

Publication Publication Date Title
CN110596683B (en) Multi-group laser radar external parameter calibration system and method thereof
CN111207753A (en) Method for simultaneously positioning and establishing picture under multi-glass partition environment
CN113269837B (en) Positioning navigation method suitable for complex three-dimensional environment
CN109993780B (en) Three-dimensional high-precision map generation method and device
CN110832279B (en) Alignment of data captured by autonomous vehicles to generate high definition maps
CN107967473B (en) Robot autonomous positioning and navigation based on image-text recognition and semantics
Mei et al. Calibration between a central catadioptric camera and a laser range finder for robotic applications
CN110570449B (en) Positioning and mapping method based on millimeter wave radar and visual SLAM
CN114018248B (en) Odometry and mapping method integrating a wheel encoder and lidar
CN112380312B (en) Laser map updating method based on grid detection, terminal and computer equipment
CN112799096B (en) Map construction method based on low-cost vehicle-mounted two-dimensional laser radar
CN110243380A (en) A map-matching method based on multi-sensor data and angle feature recognition
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN110749895B (en) Laser radar point cloud data-based positioning method
CN115290097B (en) BIM-based real-time accurate map construction method, terminal and storage medium
KR20200034869A (en) Real-Time Modeling System and Method for Geo-Spatial Information Using 3D Scanner of Excavator
CN109146990B (en) Building outline calculation method
CN114119886A (en) High-precision map point cloud reconstruction method and device, vehicle, equipment and storage medium
CN115439621A (en) Three-dimensional map reconstruction and target detection method for coal mine underground inspection robot
CN112486172A (en) Road edge detection method and robot
CN117367412A (en) Tightly-coupled laser inertial navigation odometer integrating bundle set adjustment and map building method
CN116659500A (en) Mobile robot positioning method and system based on laser radar scanning information
CN116380039A (en) Mobile robot navigation system based on solid-state laser radar and point cloud map
CN113790726B (en) Robot indoor positioning method integrating camera, wheel speed meter and single UWB information
CN113554705B (en) Laser radar robust positioning method under changing scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200529