CN103278170B - Mobile robot cascade map creation method based on salient scene point detection - Google Patents
Mobile robot cascade map creation method based on salient scene point detection
- Publication number
- CN103278170B CN103278170B CN201310183577.6A CN201310183577A CN103278170B CN 103278170 B CN103278170 B CN 103278170B CN 201310183577 A CN201310183577 A CN 201310183577A CN 103278170 B CN103278170 B CN 103278170B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
Abstract
The invention relates to the technical field of mobile robot navigation, and discloses a mobile robot cascade map creation method based on salient scene point detection. The method comprises the following steps: 1) detecting, online, the natural scene landmarks corresponding to salient scenes from the image data collected by the mobile robot's sensors, and generating the topological nodes of the global map; 2) updating the mobile robot pose and the local grid submap; 3) creating the global topological map structure with the salient scene points as topological nodes and, on the basis of robot trajectory loop-closure detection, introducing the weighted scan matching method and the relaxation method to optimize the topological structure and ensure the global consistency of the topological map. The invention is suitable for autonomous path planning and navigation by all kinds of mobile robots in large-scale indoor environments containing multiple rooms, corridors and other areas.
Description
Technical Field
The invention relates to the technical field of mobile robot navigation, and in particular to a mobile robot cascade map creation method based on salient scene point detection.
Background Art
A deep understanding and exploitation of its workspace environment is an important prerequisite for an intelligent robot to exhibit autonomous behavior. Only after a mobile robot has acquired a map of the environment can it perform path planning, trajectory tracking, global localization and the like, and thus reach its destination reliably. The autonomous map creation methods commonly used by mobile robots currently fall into the following three categories:
1. The robot explores and navigates the environment autonomously, scanning environmental information with a laser sensor and building an occupancy grid map with the help of odometry and other sensors (see G. Grisetti, "Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters", IEEE Transactions on Robotics, 2007, 23(1): 34-46). Such maps are easy to maintain and to use for localization, but the accuracy of their metric information depends on odometry precision and on how the uncertainty of the range sensors is handled, and the amount of data to store and maintain is large.
2. The robot navigates autonomously according to certain rules, performs topological localization over time with sensors such as sonar and odometry, and connects the traversed paths into a topological structure, forming a topological map (see B. Kuipers, "Local Metrical and Global Topological Maps in the Hybrid Spatial Semantic Hierarchy", IEEE International Conference on Robotics and Automation, 2004, 4845-4851). Topological maps focus on describing places and their connectivity in space and time; their advantage is a compact and concise representation, but they cannot provide a precise description of the environment's geometric information.
3. The robot moves under operator control, obtains depth images with a pan-tilt laser sensor or a stereo camera, and reconstructs a 3D depth point cloud map of the environment by point cloud registration (see P. Henry, "RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments", The International Journal of Robotics Research, 2012). A 3D point cloud map can describe obstacle height information, but its construction is computationally very expensive, and in large-scale environments the heavy computational burden makes real-time operation difficult.
Although the traditional grid-based simultaneous localization and mapping approach can solve the joint probabilistic estimation problem of online grid map creation under uncertain coupling between the robot's own pose and the environmental features, as the scale, complexity and unknownness of the environment grow, its environment description becomes too simplistic and its computational efficiency too low, which restricts the robot's ability to understand the environment at a deeper level. Starting from the way human beings understand natural scenes, it is therefore of great significance to build an efficient environment description model, without resorting to artificial landmarks, by fusing the robot's onboard vision, range-finding and other multi-sensor information, and to solve the simultaneous robot localization and online map creation problem on that basis.
When mixed information of a large-scale, complex environment has to be described, such as landmarks, places, grids and sensor poses, a hybrid environment model should be built, for example a topological/grid hybrid map or a grid/feature hybrid map (see Z. Lin, "Recognition-based Indoor Topological Navigation Using Robust Invariant Features", IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, 3975-3980). Simultaneous localization and mapping (SLAM) methods under this kind of environment description generally follow a hierarchical strategy, also known as the submap approach. Such methods typically rely on some map segmentation principle, use range sensors to build a number of mutually independent submaps, and then stitch and fuse the submaps using the information shared between them (see J. Tardós, "Robust mapping and localization in indoor environments using sonar data", The International Journal of Robotics Research, 2002, 21(4): 311-330). The key difficulties of hierarchical SLAM, which determine the accuracy and reliability of the resulting map, include simultaneous localization and mapping within the grid submaps, automatic submap segmentation, the topological association between submaps, and global consistency at the global level.
Simultaneous localization and mapping within grid submaps generally uses one of the following methods: the Extended Kalman Filter (EKF), the Rao-Blackwellized Particle Filter (RBPF), or the Sparse Extended Information Filter (SEIF). A well-known drawback of the EKF is that when the estimate deviates considerably from the true value, the first-order linearization of the nonlinear model introduces large linearization errors. The RBPF is effective for mapping complex environments, but since every particle carries a complete map of the environment, the computational complexity of building a plain grid map with an RBPF in a large-scale environment is very high, making it hard to apply. The SEIF also has drawbacks in practice: converting the information vector and information matrix back into the estimated mean and covariance of the state vector is computationally expensive each time, which likewise makes it difficult to apply in large-scale environments.
Automatic submap segmentation is one of the difficulties of cascade map creation. Most existing submap methods rely on recognizing artificial landmarks to let the robot segment submaps automatically. Other methods use scene feature libraries trained manually in advance, but cannot achieve autonomous acquisition of key landmarks by the robot (see D. Schleicher, "Real-time hierarchical stereo Visual SLAM in large-scale environments", Robotics and Autonomous Systems, 2010, 58: 991-1002). Still other methods use geometric change information of the environment to identify natural scene landmarks and segment submaps automatically, for example at the crossings between Voronoi cells; however, these methods produce large numbers of landmarks and thus lose the compactness advantage of a topological structure.
Natural scene detection based on visual saliency offers a solution to automatic submap segmentation and has become an important means for mobile robots to understand unknown environments. It bypasses the image object detection problem by extracting meaningful scene descriptions as the landmarks of a topological map, but it also raises a new problem: how to select meaningful scenes while balancing storage requirements against landmark matching. Inspired by the human visual attention mechanism, researchers have proposed a variety of visual perception models for detecting regions of interest in images. Their underlying principle is that when an organism observes its surroundings, it selectively focuses its attention on one or several parts of the scene, driven by behavioral goals or by local cues such as protruding objects; using such scenes as landmarks can effectively distinguish and represent different environments. In recent years a Bayesian Surprise model of singular changes in scene visual features has appeared, which outperforms visual saliency and Shannon entropy (see A. Ranganathan, "Bayesian Surprise and Landmark Detection", IEEE International Conference on Robotics and Automation, 2009). The Bayesian Surprise model has been applied with initial success to video anomaly monitoring, natural landmark detection and related fields, demonstrating its ability to lock onto significant scene changes. To date, however, no publication has discussed the cascade environment map model and its creation from the perspective of Bayesian Surprise saliency detection.
In addition, how to establish the topological association between submaps and ensure the global consistency of the topological nodes is another difficulty of cascade map creation. In a grid/topological cascade map, ignoring the correlation between the landmarks and the global level in order to reduce the computation and storage complexity of the cascade map introduces inconsistency into the global map, so that the created topological structure fails to close into a loop.
A patent search shows that Xiong Rong et al. filed Chinese invention patent No. 200610053690.2, "Method for simultaneous localization and map construction of a mobile robot in an unknown environment". It discloses a method in which the data obtained by range sensors are used to build a local line-segment feature map and a grid map, and the current robot pose estimate is used to transform the coordinates of the local maps, thereby updating the global line-segment feature map and the global grid map. The method is based on the extended Kalman filter and is not robust enough for more complex environments. Moreover, it creates two maps simultaneously, a geometric line-segment feature map and an occupancy grid map, instead of adopting a hierarchical cascade map strategy. In a large-scale environment, robot localization errors lead to obvious stitching errors in both maps; furthermore, building a single complete grid map of a large-scale environment entails enormous computation and storage, which is difficult to satisfy in practice.
Wen Feng et al. filed Chinese invention patent No. 201110376468.7, "A method for simultaneous localization and map construction of a mobile robot". Based on dead-reckoning odometry and landmark observations, the method uses a "strong tracking filter" to improve the accuracy of robot localization and map creation. However, the map created by that method is a feature map rather than a grid or topological map, and cannot be used directly for robot path planning and navigation. The method is also restricted to small indoor environments and artificially placed landmark objects, which further limits its range of application.
Liang Huawei et al. filed Chinese invention patent No. 200710019784.2, "A mobile robot map creation system and map creation method". It discloses a system and method for mobile robot map creation based on a wireless sensor network, which uses the data of the formed network to create an environment map containing multivariate information. This is a map creation method that relies on external sensors; because the measurement accuracy of wireless sensor nodes is low, the robot can only be localized coarsely, and the resulting environment map cannot meet the requirements of precise navigation. Moreover, the method requires wireless sensor network nodes to be spread over the monitored area, so it is not applicable to indoor environments where this condition is not met.
Summary of the Invention
Technical problem: The invention addresses the two main problems of cascade map creation, namely how to select meaningful scenes while balancing storage requirements against landmark matching, so as to achieve automatic submap segmentation; and how to establish the topological association between submaps while ensuring the global consistency of the topological nodes. To this end, the invention proposes a mobile robot cascade map creation method based on salient scene points.
Technical solution: A mobile robot cascade map creation method based on salient scene point detection, the method comprising:
performing online salient scene point detection on the image data collected by the mobile robot's sensors, and generating global topological nodes;
updating the mobile robot pose and the local grid submap: estimating the local pose of the mobile robot in the current cycle from the laser sensor data, the dead-reckoning sensor, the pose estimate of the previous cycle and the previously built local grid submap, and updating the local coordinate frame of the grid submap;
creating the global topological map structure with the salient scene points as topological nodes and, on the basis of robot trajectory loop-closure detection, introducing the weighted scan matching method and the relaxation method to optimize the topological structure and correct the consistency of the global topological map.
The salient scene point detection method comprises scene image feature extraction and scene saliency computation, with the following steps:
1) guiding SURF feature sampling toward salient regions through a visual attention mechanism, discarding the large amount of information that does not help characterize the place;
2) clustering the set of SURF feature vectors of the image with the K-means algorithm, and constructing the scene appearance description with a spatial bag-of-words model that incorporates spatial relations, so that the scene image is described as a visual word histogram fused with spatial relations;
3) building a Multivariate Polya place model on this feature description from the visual word histogram, and judging whether the robot's current place is a salient scene point by computing an expected Bayesian Surprise threshold.
The local grid submap contains a precise description of mixed features such as the obstacle occupancy grid and scene place features; it uses a coordinate system whose origin is the mobile robot's position in the current cycle and whose X axis points in the robot's current heading direction.
Simultaneous robot localization and mapping within the local grid submap uses the Rao-Blackwellized particle filter algorithm; after initial localization is completed, the first global topological node is created and the grid map built so far is saved as the submap associated with this first node.
The global topological map is a graph whose nodes are the salient scene points and whose edges connect adjacent topological nodes; each node is associated with a local occupancy grid submap, and each edge carries the transformation between the coordinate frames of adjacent submaps.
The global topological map is created as follows:
1) The robot collects visual sensor data and performs scene saliency detection. When a new salient scene point is detected, a new topological node is created in the global topological map; the robot's final pose in the previous node becomes the coordinate origin of the new submap, i.e. the base coordinates of the new node, and the features of the current scene point are added to the historical data;
2) the coupling summation formula is used iteratively to estimate and update the robot's global position variance online, and the base-coordinate transformation between adjacent nodes is attached to the edge connecting them, enabling relative position propagation between nodes;
3) every newly created topological node is matched against the historical data of the existing topological nodes to judge whether the mobile robot has returned to a previously visited place; a similarity-distance matching method on the SURF feature model of the scene images computes the similarity probability between the current scene place and the historical scene places;
4) once the trajectory of the robot's visited places is detected to have closed, the weighted scan matching method is applied to the created topological nodes in their order of connection to compute the relative displacement transformation between the base coordinates of the submaps of adjacent nodes.
The global consistency correction method treats the inter-node relations obtained from the robot's sensor observations as positional constraints between the nodes and uses the relaxation method to solve for the topological organization that best satisfies these constraints, as follows:
1) for a node i, an estimated position and variance of the node are computed from the position of each of its neighbor nodes j;
2) a new coordinate of node i is then obtained as the weighted average of the estimates contributed by all its neighbors;
3) the above steps are repeated until the sum of the position errors of all nodes in the whole map between two successive iterations falls below a given threshold, or until a maximum total number of iterations is exceeded, at which point the relaxation algorithm terminates.
Preferably, the sensors comprise a monocular camera and a laser rangefinder. The data obtained by the laser rangefinder are the distances and angles, relative to the mobile robot, of the points on environmental obstacles scanned by the laser in a plane at a ranging height of 35 cm; one laser beam is obtained per 1° of resolution over the range 0° to 180°, for a total of 181 laser beams.
By adopting the above technical solution, the invention provides the following beneficial effects:
1. For the large-scale, complex environments in which mobile robots operate, a two-layer cascade map structure consisting of local grid submaps and a global topological map is established. Compared with conventional map models, it effectively combines two advantages: the precise description of environmental obstacles given by the grid submaps, and the compact representation of the relative positions between submaps given by the topological map. With both the obstacles and the robot's own pose unknown, the two-layer cascade map is created online, and the problem of the mutual coupling and mutual influence between robot pose uncertainty and map uncertainty during map creation is solved.
2. For natural scenes without artificial landmarks, the robot autonomously detects salient scene points in the environment to serve as the boundaries between submaps. By introducing a human visual attention model and adopting a natural landmark detection method based on the Bayesian Surprise of salient scenes, low missed-detection and false-detection rates are achieved in large-scale, complex indoor environments, solving the key problem of automatic submap segmentation in environment cascade map creation.
3. During the creation of the global topological map, the relative coordinate relations between nodes are computed dynamically, and on the basis of robot trajectory loop-closure detection, the weighted scan matching method and the relaxation method are introduced to optimize the topological structure of the global map and ensure its global consistency. The cascade map finally created by the robot occupies little storage space and few computational resources, so maps of large-scale, complex, unknown environments can be created online, and the resulting cascade map is suitable for path planning and navigation by all kinds of robots.
Brief Description of the Drawings
Fig. 1 is a flowchart of the initial localization and the creation of the first global topological node in an embodiment of the invention;
Fig. 2 is a schematic diagram of the structure of the hybrid cascade environment map model of an embodiment of the invention, comprising the local grid submaps and the global topological map;
Fig. 3 is a schematic diagram of the coordinate transformation relation between two global topological nodes in an embodiment of the invention;
Fig. 4 is a flowchart of the salient scene point detection method of an embodiment of the invention, comprising image acquisition, SURF feature extraction, visual-attention-guided sBoW feature construction, and salient scene judgment based on appearance Surprise computation;
Figs. 5a, 5b and 5c show environmental visual attention maps and SURF feature extraction for typical scenes of the salient scene point detection method of an embodiment of the invention in a real environment comprising (a) a corridor, (b) a laboratory and (c) an office; in each subfigure, the left panel shows the intensity of visual fixation, and the circles in the right panel are the scene SURF features obtained under the guidance of visual attention;
Fig. 6 is a schematic diagram of the local grid submap creation process and the coordinate transformation relations between the corresponding global topological nodes in an embodiment of the invention;
Fig. 7 is a flowchart of the creation of the overall environment cascade map in an embodiment of the invention;
Fig. 8 shows the two-layer grid/topological cascade map created and stitched together by the robot in an example environment of the invention. When a revisit to a landmark is detected in the global topology, the global consistency of the topological map is optimized and the global topological nodes in the figure are connected by line segments; the curve in the figure is the trajectory of the robot localization results, showing the robot's motion path during map creation;
Fig. 9 shows several of the local grid submaps created in the example of Fig. 8.
Detailed Description of Embodiments
The invention is further illustrated below with reference to specific embodiments. It should be understood that these embodiments are intended only to illustrate the invention and not to limit its scope; after reading the invention, modifications by those skilled in the art to its various equivalent forms all fall within the scope defined by the claims appended to this application.
The steps of the method of the embodiment of the invention are:
detecting, online, the natural scene landmarks corresponding to salient scenes from the image data collected by the mobile robot's onboard monocular camera, and generating the topological nodes of the global map;
estimating the local pose of the mobile robot in the current cycle from the ranging laser sensor data, the dead-reckoning odometry sensor, the pose estimate of the previous cycle and the previously built local grid submap, and updating the local coordinate frame of the grid submap;
creating the global topological map structure with the salient scene points as topological nodes and, on the basis of robot trajectory loop-closure detection, introducing the weighted scan matching method and the relaxation method to optimize the topological structure and ensure the global consistency of the topological map.
The specific implementation is as follows:
Fig. 1 shows the initial localization and the creation of the first global topological node in this embodiment. The mobile robot first starts from an arbitrary position in the unknown environment, heading in an arbitrary direction (S1), initially travels some distance within the current room, and collects laser sensor data (S2); during this motion it performs initial localization with the simultaneous localization and mapping method inside a local grid submap (S3). The submap created at this point will serve as the submap of the first node of the global topological map, and the robot's range of motion is confined to the room it started in. While the particle set has not yet converged to within a certain range, initial localization is judged incomplete (S4) and the loop repeats until it is complete; once initial localization is complete, the robot's current pose in the local coordinate system is obtained, and the robot then keeps moving, entering and leaving rooms, corridors and other areas, while carrying out cascade map creation based on salient scene point detection (S5).
In the implementation example, the data obtained by the robot's onboard ranging laser sensor are the distances and angles, relative to the mobile robot, of the points on environmental obstacles scanned by the laser in a plane at a ranging height of 35 cm; one laser beam is obtained per 1° of resolution over the range 0° to 180°, for a total of 181 laser beams. The robot's onboard visual sensor is a PTZ CCD camera capturing RGB images at a resolution of 320*240.
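For illustration, a minimal sketch of converting one such 181-beam scan into Cartesian obstacle points in the world frame is shown below; the function name and the [x, y, theta] pose convention are assumptions, not taken from the patent.

```python
import numpy as np

def scan_to_points(ranges, pose):
    """ranges: array of 181 range readings (one per degree over 0..180 deg);
    pose: robot pose [x, y, theta] in the world frame."""
    angles = np.deg2rad(np.arange(181)) + pose[2]  # beam directions in the world frame
    xs = pose[0] + ranges * np.cos(angles)
    ys = pose[1] + ranges * np.sin(angles)
    return np.stack([xs, ys], axis=1)              # (181, 2) obstacle points
```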
Preferably, the cascade environment map structure of this embodiment is suited to describing large indoor floor environments such as hospitals, offices and laboratories, containing corridors, multiple rooms and other spatial areas; it has the two-layer structure of local grid submaps plus a global topological map shown in Fig. 2, where:
1) The local grid submap contains a precise description of mixed features such as the obstacle occupancy grid and scene place features, and is suitable for precise localization of the mobile robot with its onboard range sensors within the local submap frame. The local occupancy grid submap uses a coordinate system whose origin is the mobile robot's position in the current cycle and whose X axis points in the robot's current heading direction;
The submap M contains the robot's current pose $x_r$, the probability grid P, and the position coordinates of the n place landmarks. All these variables are established relative to the robot's local base coordinate frame B, which may be taken as the robot's coordinates at the moment the submap is created. For example, the coordinate transformation between adjacent submaps may be written $T_{1,2} = (d_{12}, \theta_{12}, \phi_{12})$, denoting the relative coordinate transformation between node 2 and node 1. Each submap is created by its own process, using only the odometry and other sensor readings of that process, and the submap coordinates are likewise relative to the robot's pose at the moment submap creation began. The submaps are therefore maintained relatively independently of one another and are not yet globally corrected. The key places are written FP = {fp_1, ..., fp_M}, and the natural scene features are $F = \{s_1, \dots, s_m\}$, where each $s_j$ is a SURF feature descriptor of the scene.
2) The global topological map is a graph whose nodes are the salient scene points and whose edges connect adjacent topological nodes. Each node is associated with a local occupancy grid submap, and each edge carries the transformation between the coordinate frames of adjacent submaps.
Each topological node i is associated with a submap $M_i$, and its global coordinates are written $[x_i, y_i, \theta_i]$. When the node is created, this value is initialized to the current odometry reading. The position variance of node i, written $v_i$, is initialized to the variance of the previous node plus 2% of the distance traveled since that node. As the robot advances, whenever a node is created or revisited, it is connected to the previous node by an edge. The relative metric position relation between topological nodes i and j is written $(d_{ij}, \theta_{ij}, \phi_{ij})$, as shown in Fig. 3; it has variance $v_{ij}$, taken as $0.02\,d_{ij}$.
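The node and edge bookkeeping just described (node variance initialized to the previous node's variance plus 2% of the distance traveled, edge variance $0.02\,d_{ij}$) can be sketched as follows; the class and method names are hypothetical:

```python
import math
from dataclasses import dataclass, field

@dataclass
class TopoNode:
    node_id: int
    pose: tuple            # global coordinates [x, y, theta], from odometry at creation
    variance: float        # scalar position variance v_i
    edges: dict = field(default_factory=dict)  # neighbour id -> (d_ij, theta_ij, phi_ij, v_ij)

class TopoMap:
    def __init__(self):
        self.nodes = {}
        self.last_id = None

    def add_node(self, node_id, odom_pose, dist_from_prev):
        """Create a node at the current odometry pose and link it to the previous node."""
        prev = self.nodes.get(self.last_id)
        # v_i = v_{i-1} + 2% of the distance travelled since the previous node
        var = (prev.variance if prev else 0.0) + 0.02 * dist_from_prev
        node = TopoNode(node_id, odom_pose, var)
        self.nodes[node_id] = node
        if prev is not None:
            dx, dy = odom_pose[0] - prev.pose[0], odom_pose[1] - prev.pose[1]
            d = math.hypot(dx, dy)
            theta = math.atan2(dy, dx) - prev.pose[2]  # bearing in the previous node's frame
            phi = odom_pose[2] - prev.pose[2]          # relative orientation
            prev.edges[node_id] = (d, theta, phi, 0.02 * d)  # edge variance v_ij = 0.02 d_ij
        self.last_id = node_id
        return node
```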
Fig. 4 is a flowchart of the salient scene point detection method of this embodiment. Using the monocular vision sensor mounted on the mobile robot, salient scene points of the environment are detected while the robot moves; this comprises scene image feature extraction and scene saliency computation, as follows:
1) A visual attention mechanism that mimics human vision is introduced to guide SURF feature sampling toward salient regions, discarding the large amount of information that does not help characterize the place and improving image processing efficiency;
From the input scene image, a 5-level monochrome Gaussian pyramid $(s_0, \dots, s_4)$, a 5-level CIELAB color-space Gaussian pyramid and a 4-level difference-of-Gaussians pyramid $(DoG_1, \dots, DoG_4)$ are built by successive sampling. Each pyramid level is then decomposed into three channels, intensity (I), color (C) and orientation (O), from which feature maps are extracted; after processing with the normalization operator $\mathcal{N}(\cdot)$, the maps are accumulated with equal weights to obtain the visual attention map $S = \tfrac{1}{3}\big(\mathcal{N}(\bar I) + \mathcal{N}(\bar C) + \mathcal{N}(\bar O)\big)$.
Here the on-center and off-center operators model the centripetal and centrifugal responses of the visual cells of the human retina. SURF features are sampled only in the pixel regions whose saliency probability exceeds a threshold, which restricts feature extraction to the salient regions and avoids introducing interference points that carry no information about the key features of the scene, usually coming from backgrounds such as walls and floors.
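A much-simplified sketch of such an attention map is given below, using only the intensity channel with difference-of-Gaussians center-surround responses as a stand-in for the on-center/off-center operators (the full method also uses the CIELAB color and orientation channels); the scale choices and the saliency threshold are assumptions:

```python
import numpy as np
from scipy import ndimage

def normalize(m):
    """Simplified N(.) operator: rescale a feature map to [0, 1]."""
    m = m - m.min()
    rng = m.max()
    return m / rng if rng > 0 else m

def attention_map(gray):
    """gray: 2-D float array in [0, 1]; returns a saliency map of the same size."""
    levels = [ndimage.gaussian_filter(gray, sigma=2 ** s) for s in range(5)]
    sal = np.zeros_like(gray)
    for c in range(2):                 # center (fine) scales
        for s in range(c + 2, 5):      # surround (coarse) scales
            sal += normalize(np.abs(levels[c] - levels[s]))  # center-surround response
    return normalize(sal)

def salient_mask(gray, threshold=0.5):
    """Pixels whose saliency exceeds the threshold; SURF keypoints outside it are discarded."""
    return attention_map(gray) > threshold
```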
2) The set of SURF feature vectors of the image is clustered with the K-means algorithm, and a spatial bag-of-words (sBoW) model that incorporates spatial relations is used to construct the scene appearance description, describing the scene image as a visual word histogram fused with spatial relations;
The K-means algorithm clusters the image's SURF feature vectors and yields k cluster centers that serve as the backbone visual words. The feature quantization step uses nearest-neighbor assignment to approximate each local image patch by a backbone visual word; as shown in Eq. (3), the SURF feature vector $s_i$ is quantized to the visual word $w_i$: $w_i = \operatorname{arg\,min}_{w_j \in V} \lVert s_i - w_j \rVert$ (3).
Because the number of cluster centers is large, a KD-tree can be built over them to further accelerate quantization, speeding up the nearest-neighbor search significantly. After feature quantization, the occurrence frequency of each visual word in every image is counted and the visual word histogram is constructed, reducing the complexity of the image description.
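A minimal sketch of this vocabulary construction and KD-tree quantization, assuming 64-dimensional SURF descriptors and a hypothetical vocabulary size k:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, k=200, seed=0):
    """Cluster the pooled SURF descriptors; the k centroids are the visual words."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(descriptors)
    return km.cluster_centers_

def word_histogram(descriptors, vocabulary):
    """Quantize each descriptor to its nearest visual word (Eq. (3)) via a KD-tree,
    then count word occurrences to form the visual word histogram."""
    tree = cKDTree(vocabulary)
    _, word_ids = tree.query(descriptors)  # nearest-neighbour assignment
    return np.bincount(word_ids, minlength=len(vocabulary)).astype(float)

# Usage: vocab = build_vocabulary(all_training_descriptors)   # (N, 64) array
#        hist = word_histogram(image_descriptors, vocab)      # counts n_1..n_W
```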
The spatial relations of the visual words are then introduced. Let the image's visual word histogram vector be $X = (n_1, n_2, \dots, n_W)$; the spatial distribution of each visual word is described by two features, its distance and its angle relative to the geometric center $O = (x_0, y_0)$ of the feature set, and a histogram is built for each.
a) Distance component: the Euclidean distances $(L_1, L_2, \dots, L_W)$ between each feature point and the geometric center are computed; the median is taken as the unit length L, and the other lengths, expressed as ratios to L, are divided into four intervals within $[0, L_{max}]$;
b) Angle component: each feature point forms an angle θ with its nearest neighbor in the counterclockwise direction. An arbitrary feature point is chosen as the starting point $F_0$, and the remaining points are numbered $F_1, F_2, \dots, F_{W-1}$ counterclockwise from the inside outward. The angles $\theta_i$ between $\overrightarrow{OF_{i-1}}$ and $\overrightarrow{OF_i}$, $i = 1, 2, \dots, W-1$, are computed and quantized into five intervals within $[0°, \theta_{max}]$.
This yields the scene visual appearance feature vector of Eq. (4), in which the first W-dimensional part $\{n_W\}$ is the statistical word-occurrence frequency over the visual vocabulary, satisfying $\sum_{w=1}^{W} n_w = n$, and the following Q-dimensional part $\{p_Q\}$ is the statistical frequency of the visual word spatial relations.
$a = [n_1, n_2, \dots, n_W, p_1, p_2, \dots, p_Q]$ (4)
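The spatial components $p_1, \dots, p_Q$ of Eq. (4) could be computed along the following lines; the concrete bin edges are assumptions, since the text specifies only four distance intervals and five angle intervals:

```python
import numpy as np

def spatial_histogram(points):
    """points: (W, 2) array of keypoint coordinates in one image;
    returns the concatenated distance/angle histogram (9 bins)."""
    center = points.mean(axis=0)                    # geometric center O
    d = np.linalg.norm(points - center, axis=1)     # distances L_1..L_W
    unit = max(np.median(d), 1e-9)                  # median distance as unit length L
    # four distance bins, expressed as ratios to L (edges are assumed)
    dist_hist, _ = np.histogram(d / unit, bins=[0.0, 0.5, 1.0, 1.5, np.inf])
    # angle between each point and its counterclockwise neighbour about O
    ang = np.sort(np.arctan2(points[:, 1] - center[1], points[:, 0] - center[0]))
    gaps = np.degrees(np.diff(ang))                 # W-1 neighbour angles
    ang_hist, _ = np.histogram(gaps, bins=5, range=(0.0, 180.0))
    h = np.concatenate([dist_hist, ang_hist]).astype(float)
    return h / max(h.sum(), 1e-9)                   # statistical frequencies p_1..p_Q
```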
3) The visual word histogram is used to build a Multivariate Polya (MP) place model on this feature description, and whether the robot's current place is a salient scene point is judged by computing the expected Bayesian Surprise threshold.
Let the scene model of the current place be M, with prior probability distribution p(M). The Bayesian Surprise is defined as the KL divergence between the posterior distribution p(M|z) of the model M, once an observation z has been obtained online, and the prior distribution, i.e.: $S(z, M) = \mathrm{KL}\big(p(M \mid z) \,\|\, p(M)\big) = \int p(M \mid z) \log \frac{p(M \mid z)}{p(M)}\, dM$ (5)
If the Bayesian Surprise of the robot's current scene observation exceeds a Surprise threshold, the current place is considered sufficiently distinctive. Here the Surprise threshold is approximated by a Monte Carlo estimate of an expected value, called the expected Surprise: N observations $z_{1:N}$ are sampled from the current place model p(M), and the expected Surprise is computed as the average of the Surprise values of these samples: $\bar S = \frac{1}{N} \sum_{i=1}^{N} S(z_i, M)$ (6)
Let the observations of a given landmark be A = {a}, with the vector a as in Eq. (4). Let $\theta_w$ be the probability governing the occurrence count $n_w$ of the w-th visual word; the vector a then follows a multinomial distribution with parameters n and θ, where $\theta = [\theta_1, \theta_2, \dots, \theta_W]$. For a particular value of θ, namely $\theta_s = [\theta_{s1}, \dots, \theta_{sW}]$, the multinomial distribution reads: $p(a \mid \theta_s) = \frac{n!}{\prod_{w} n_w!} \prod_{w} \theta_{sw}^{n_w}$ (7)
A parameter $\alpha_s = [\alpha_{s1}, \alpha_{s2}, \dots, \alpha_{sW}]$ is introduced, meaning that for $\theta_s$, after $\alpha_{sw} - 1$ observations of the w-th visual word the probability of that word is known to be $\theta_{sw}$; $\theta_s$ then follows the Dirichlet distribution conditioned on $\alpha_s$: $p(\theta_s \mid \alpha_s) = \frac{\Gamma(\alpha)}{\prod_{w} \Gamma(\alpha_{sw})} \prod_{w} \theta_{sw}^{\alpha_{sw} - 1}$ (8)
where Γ(·) is the Gamma function.
Writing $\alpha = \sum_w \alpha_w$, the place scene model p(α|A) can be computed by Bayes' rule as:
$p(\alpha \mid A) \propto p(A \mid \alpha)\, p(\alpha)$ (9)
Expanding the likelihood p(A|α) of the histogram observation, the above is rewritten as: $p(\alpha \mid A) \propto p(\alpha) \int p(A \mid \theta_s)\, p(\theta_s \mid \alpha)\, d\theta_s$ (10)
This expression is called the Multivariate Polya (MP) model of the place scene. Substituting Eqs. (7) and (8) into the integral of Eq. (10) and taking the prior p(α) as a uniform distribution, the place scene model evaluates to: $p(\alpha \mid A) \propto \frac{n!}{\prod_w n_w!}\,\frac{\Gamma(\alpha)}{\Gamma(n+\alpha)}\prod_w \frac{\Gamma(n_w+\alpha_w)}{\Gamma(\alpha_w)}$ (11)
The maximum-likelihood estimate of the Dirichlet parameter α can be obtained by training on samples. Accordingly, for several typical scenes in the environment (corridor, office, laboratory, etc.), a number of sample images are given and α is learned by iterative gradient-descent optimization, yielding the prior MP model of each scene. To compute the expected Surprise of Eq. (6), the MP model is used to first sample $\theta_s$ according to Eq. (8) and then sample a according to Eq. (7); the result is the observation sample z, as shown in the dashed box of Fig. 4. Fig. 5 shows examples of environmental visual attention maps and SURF feature extraction obtained with the salient scene point detection method of the invention for typical scenes in a real environment comprising (a) a corridor, (b) a laboratory and (c) an office; the left panel of each subfigure shows the intensity of visual fixation, and the circles in the right panel are the scene SURF features under the guidance of visual attention. It can be seen that the method restricts feature extraction to the salient regions, avoiding interference points (usually from backgrounds such as walls and floors) that carry no information about the key features of the scene.
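Under the Dirichlet-multinomial model of Eqs. (7)-(8), observing a word-count vector z updates the parameter as α → α + z, and the surprise of Eq. (5) then has a closed form as the KL divergence between the posterior and prior Dirichlet distributions. A sketch of the surprise test with the Monte Carlo threshold of Eq. (6) follows; the sample count and function names are assumptions:

```python
import numpy as np
from scipy.special import digamma, gammaln

def dirichlet_kl(a_post, a_prior):
    """Closed-form KL( Dir(a_post) || Dir(a_prior) )."""
    s_post, s_prior = a_post.sum(), a_prior.sum()
    return (gammaln(s_post) - gammaln(a_post).sum()
            - gammaln(s_prior) + gammaln(a_prior).sum()
            + np.dot(a_post - a_prior, digamma(a_post) - digamma(s_post)))

def surprise(alpha, counts):
    """Bayesian surprise (Eq. (5)) of one observed word histogram."""
    return dirichlet_kl(alpha + counts, alpha)

def expected_surprise(alpha, n_words, n_samples=200, rng=None):
    """Monte Carlo threshold of Eq. (6): average surprise of observations
    sampled from the current place model itself."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        theta = rng.dirichlet(alpha)           # sample theta_s per Eq. (8)
        z = rng.multinomial(n_words, theta)    # sample a histogram per Eq. (7)
        total += surprise(alpha, z)
    return total / n_samples

# A place is declared a salient scene point when
#   surprise(alpha, z_obs) > expected_surprise(alpha, z_obs.sum())
```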
From the robot's onboard ranging and odometry sensor data, the cascade map of the environment observed so far is constructed; this comprises the construction of the local grid submaps and the construction of the global topological map. Simultaneous robot localization and mapping within a local grid submap uses the Rao-Blackwellized particle filter algorithm; after initial localization is completed, the first global topological node is created and the grid map built so far is saved as its submap. Fig. 7 shows the creation flow of the overall environment cascade map after initial localization and the creation of the first global topological node; the steps are:
1) The robot continues to move over long distances in the environment, traversing the various rooms and corridor areas (step S71); it collects laser sensor data (step S72) and, from the dead-reckoning odometry sensor, the pose estimate of the previous cycle and the previously built local grid submap, again uses the Rao-Blackwellized particle filter algorithm to estimate the robot's local pose in the current cycle and update the local coordinate frame of the grid submap (step S73);
2) the robot collects visual sensor data (step S74) and performs scene saliency detection (step S75); when a new salient scene point is detected, a new topological node is created in the global topological map (step S76), the robot's final pose in the previous node serving as the coordinate origin of the new submap, i.e. the base coordinates of the new node, and the features of the current scene point are added to the historical data;
3) the coupling summation formula is used iteratively to estimate and update the robot's global position variance online, and the base-coordinate transformation between adjacent nodes is attached to the edge connecting them, enabling relative position propagation between nodes (step S77);
Let the base coordinates of the first node be [0, 0, 0], taken as the reference origin of the global coordinate frame. When a new node $fp_{i+1}$ is created, the robot's pose in the previous node $fp_i$ becomes the coordinate origin of $fp_{i+1}$. To propagate relative positions between nodes, the coupling summation formula is used iteratively to estimate and update the robot's global position variance online. Write $x_r^G$ for the robot's pose in the global coordinate frame; since the base-coordinate origin of the first node is usually chosen as the global origin, this can also be written $x_r^{B_1}$. Let $x_r^{L_i}$ be the robot's local pose relative to $fp_i$ when the new node i (with corresponding place $fp_i$ in FP) is created, with variances $\Sigma_r^G$ and $\Sigma_r^{L_i}$ respectively. The coupling summation formula derives the relative coordinate relation between non-adjacent nodes from the relative coordinate relations of adjacent node pairs. First, $x_{fp_i}^G$ and $x_r^{L_i}$ are used to compute

$$x_r^G = x_{fp_i}^G \oplus x_r^{L_i} = \begin{bmatrix} x_{fp_i}^G + x_r^{L_i}\cos\theta_{fp_i}^G - y_r^{L_i}\sin\theta_{fp_i}^G \\ y_{fp_i}^G + x_r^{L_i}\sin\theta_{fp_i}^G + y_r^{L_i}\cos\theta_{fp_i}^G \\ \theta_{fp_i}^G + \theta_r^{L_i} \end{bmatrix}$$

where $x_{fp_i}^G$ and $x_r^G$ are the global absolute coordinates of $fp_i$ and of the robot respectively. Then, to obtain the global variance of the current node $fp_i$, the coupling summation formula is applied in the same way, using $\Sigma_{fp_i}^G$ and $\Sigma_r^{L_i}$ to compute

$$\Sigma_r^G = J_1 \Sigma_{fp_i}^G J_1^{\top} + J_2 \Sigma_r^{L_i} J_2^{\top} \qquad (16)$$

where $J_1$ and $J_2$ are the Jacobians of the composition with respect to its two arguments. The composition and Eq. (16) are applied recursively back to the first node: the computation for $fp_2$ requires $x_{fp_1}^G$ and the transformation $T_{1,2}$, and since $x_{fp_1}^G$ is known to be [0, 0, 0], $x_{fp_2}^G$ can be found, and likewise for each subsequent node.
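A sketch of the pose composition and first-order covariance propagation underlying this recursion, with standard pose compounding and Jacobians standing in for the coupling summation formulas, is given below; the function names are hypothetical:

```python
import math
import numpy as np

def compose(g, l):
    """Compose a node's global pose g = [x, y, theta] with a local pose l in its frame."""
    c, s = math.cos(g[2]), math.sin(g[2])
    return np.array([g[0] + c * l[0] - s * l[1],
                     g[1] + s * l[0] + c * l[1],
                     g[2] + l[2]])

def compose_cov(g, l, cov_g, cov_l):
    """First-order propagation: Sigma = J1 cov_g J1^T + J2 cov_l J2^T (cf. Eq. (16))."""
    c, s = math.cos(g[2]), math.sin(g[2])
    j1 = np.array([[1.0, 0.0, -s * l[0] - c * l[1]],
                   [0.0, 1.0,  c * l[0] - s * l[1]],
                   [0.0, 0.0,  1.0]])
    j2 = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return j1 @ cov_g @ j1.T + j2 @ cov_l @ j2.T

# Starting from x_fp1 = [0, 0, 0], composing along the chain of inter-node
# transforms yields the global pose and variance of each subsequent node.
```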
4) Each time a new topological node is created, it is matched against the historical data of the existing topological nodes, in order to determine whether the mobile robot has returned to a previously visited place (step S78). The similarity probability between the current scene place and the historical scene places is computed by the similar-distance matching method on the SURF feature models of the scene images;
The SURF feature matching strategy for robot-view images examines the SURF visual feature vectors of two scenes fp_A and fp_B, with descriptor sets D_A and D_B respectively, in which m_A and m_B SURF features have been detected, i.e. D_A = {d_A^1, …, d_A^{m_A}} and D_B = {d_B^1, …, d_B^{m_B}}.
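One plausible realization of such a similarity score, sketched with OpenCV (SURF lives in the non-free opencv-contrib modules, so this requires a contrib build). The ratio-test-plus-normalized-match-count score below is an illustrative stand-in, not necessarily the patent's exact similarity formula:

```python
import cv2

def scene_similarity(img_a, img_b, hessian=400, ratio=0.7):
    """Rough similarity in [0, 1] between two grayscale scene images,
    based on SURF descriptor matching with Lowe's ratio test."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp_a, des_a = surf.detectAndCompute(img_a, None)
    kp_b, des_b = surf.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Keep only distinctive matches: best distance clearly below second best.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / min(len(kp_a), len(kp_b))
```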
5) When closure of the robot's trajectory of visited places is detected, the weighted scan matching method is applied to the created topological nodes, in the order of their connections, to compute the relative displacement transformations between the base coordinates of the submaps of adjacent nodes (step S79).
Before the loop closes, a feature F has different local position coordinates in the base coordinate frame B_j of submap M_j and the base coordinate frame B_i of submap M_i, and the two submaps are mutually independent. After the loop closes, submap M_i, in addition to retaining its base frame B_i, also acquires an estimate of B_j relative to B_i, while submap M_j already holds an estimate of B_j relative to B_{j-1}. A new connection is thereby established between node j-1 and node i. The Weighted Scan Matching (WSM) method is used to match the grid submaps of adjacent nodes i and i+1 and to compute d_{i,i+1}, θ_{i,i+1} and the associated variance; WSM offers high computational speed and accuracy on laser scan data. Once the transformations between the base coordinates of adjacent submaps have been computed, global map consistency correction is performed.
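The patent's WSM algorithm itself is not reproduced here; as an illustrative stand-in under the assumption of known point correspondences, the sketch below estimates a relative transform (t, θ) between two scans by weighted rigid alignment (a Kabsch/Umeyama-style weighted SVD), with the weights playing the role of per-point measurement confidence:

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Find R, t minimizing sum_k w_k * |R p_k + t - q_k|^2 for 2-D point
    sets P, Q of shape (N, 2) with positive weights w of shape (N,)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w = np.asarray(w, float)
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(axis=0)           # weighted centroids
    q_bar = (w[:, None] * Q).sum(axis=0)
    X, Y = P - p_bar, Q - q_bar
    S = (w[:, None] * X).T @ Y                     # weighted cross-covariance
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    theta = float(np.arctan2(R[1, 0], R[0, 0]))
    return t, theta          # candidates for the d and theta between base frames
```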
Because of the computational errors inherent in scan matching, global consistency of the high-level topological nodes is hard to guarantee once all submaps have been created: a global map recovered solely from the local relative position relationships between nodes (d_ij, θ_ij and their variances) contains considerable error. For this reason, after detecting that the robot has revisited a node, global consistency correction of the topological map is carried out over the high-level nodes (step S710).
Global consistency correction is implemented as follows: the inter-node relationships obtained from the robot's sensor observations are treated as position constraints between nodes, and the relaxation method is used to solve for the optimal topological organization satisfying those constraints. The edges connecting nodes can be viewed as "springs", and the goal of the relaxation algorithm is to minimize the total energy of all the springs in the map. For each node i, one iteration of the relaxation algorithm comprises the following steps:
1) For node i, use the position of each of its neighboring nodes j to compute an estimate of node i's position and variance:
    (x_i′)_j = x_j + d_ji cos(θ_ji + θ_j),    (17)
    (y_i′)_j = y_j + d_ji sin(θ_ji + θ_j),    (18)

together with the variance estimate (v_i′)_j = v_j + v_ji.
2) Then, from the estimates of i obtained from all of its neighbors, compute new coordinates for i as a variance-weighted average:

    x_i′ = ( Σ_j (x_i′)_j / (v_i′)_j ) / ( Σ_j 1 / (v_i′)_j ),
    y_i′ = ( Σ_j (y_i′)_j / (v_i′)_j ) / ( Σ_j 1 / (v_i′)_j ),
    v_i′ = n_i / ( Σ_j 1 / (v_i′)_j ),

where n_i is the number of neighbor nodes of node i.
At the start of the algorithm, the position of the first node is fixed at [0,0,0]. The two steps above are iterated until the summed change in the positions of all nodes in the map between two successive iterations falls below a given threshold, or until a maximum total number of iterations is exceeded, at which point the relaxation process terminates.
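A direct transcription of this sweep, assuming each node stores (x, y, theta, v), each constraint stores the measured (d_ji, θ_ji, v_ji) with strictly positive variances, and headings are held fixed (the sketch only relaxes positions, as in equations (17)-(18)):

```python
import math

def relax_once(nodes, neighbors):
    """One relaxation sweep. nodes: id -> dict with keys x, y, theta, v.
    neighbors: id -> list of (j, d_ji, theta_ji, v_ji) constraints from node j.
    Node 0 is the fixed global reference at [0,0,0]."""
    new = {}
    for i, nbrs in neighbors.items():
        if i == 0 or not nbrs:
            continue
        est = []
        for j, d_ji, th_ji, v_ji in nbrs:
            nj = nodes[j]
            xi = nj["x"] + d_ji * math.cos(th_ji + nj["theta"])   # eq. (17)
            yi = nj["y"] + d_ji * math.sin(th_ji + nj["theta"])   # eq. (18)
            est.append((xi, yi, nj["v"] + v_ji))                  # (v_i')_j = v_j + v_ji
        inv = sum(1.0 / v for _, _, v in est)       # assumes all variances > 0
        x = sum(xi / v for xi, _, v in est) / inv   # variance-weighted average
        y = sum(yi / v for _, yi, v in est) / inv
        v = len(est) / inv                          # harmonic-mean variance
        new[i] = (x, y, v)
    shift = 0.0
    for i, (x, y, v) in new.items():
        shift += abs(nodes[i]["x"] - x) + abs(nodes[i]["y"] - y)
        nodes[i].update(x=x, y=y, v=v)
    return shift
```

Calling relax_once repeatedly until the returned shift drops below the threshold, or a maximum iteration count is hit, implements the termination rule described above.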